<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
<title>Graduate Theses</title>
<link href="https://hdl.handle.net/1721.1/131023" rel="alternate"/>
<subtitle/>
<id>https://hdl.handle.net/1721.1/131023</id>
<updated>2026-04-16T18:19:18Z</updated>
<dc:date>2026-04-16T18:19:18Z</dc:date>
<entry>
<title>Demonstration of heat pipe self-deployment</title>
<link href="https://hdl.handle.net/1721.1/165421" rel="alternate"/>
<author>
<name>Woloshun, Keith Albert.</name>
</author>
<id>https://hdl.handle.net/1721.1/165421</id>
<updated>2026-04-14T03:04:34Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Demonstration of heat pipe self-deployment
Woloshun, Keith Albert.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1986; Bibliography: leaves 109-112.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cost-effectiveness analysis of urban transit alternatives.</title>
<link href="https://hdl.handle.net/1721.1/165414" rel="alternate"/>
<author>
<name>Rouleau, Eloi Marie.</name>
</author>
<id>https://hdl.handle.net/1721.1/165414</id>
<updated>2026-04-14T03:04:37Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Cost-effectiveness analysis of urban transit alternatives.
Rouleau, Eloi Marie.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1977; Bibliography: leaves 84-87.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model based specification for developing safety-critical system-software</title>
<link href="https://hdl.handle.net/1721.1/165411" rel="alternate"/>
<author>
<name>Karasi, Anand K. (Anand Kumar), 1975-</name>
</author>
<id>https://hdl.handle.net/1721.1/165411</id>
<updated>2026-04-14T03:04:28Z</updated>
<published>2000-01-01T00:00:00Z</published>
<summary type="text">Model based specification for developing safety-critical system-software
Karasi, Anand K. (Anand Kumar), 1975-
Thesis: S.M., Massachusetts Institute of Technology, Technology and Policy Program, 2000; Includes bibliographical references (leaves 62-63).
</summary>
<dc:date>2000-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hemorheological Considerations in the Development of Microfluidic Blood Oxygenation Devices</title>
<link href="https://hdl.handle.net/1721.1/165342" rel="alternate"/>
<author>
<name>Pincot, André M.</name>
</author>
<id>https://hdl.handle.net/1721.1/165342</id>
<updated>2026-04-07T03:05:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Hemorheological Considerations in the Development of Microfluidic Blood Oxygenation Devices
Pincot, André M.
Novel supersaturation oxygenation technology promises a leap forward in the enhancement of ECMO capabilities and deployment of a more efficient, versatile, and portable form of extracorporeal oxygenation technology. The showcased membrane bubble generation supersaturation technique offers superior oxygenation performance to conventional ECMO, allowing for reductions in blood flow rate and thus promising to reduce the shear-based thrombosis that limits current oxygenation technology in medium- to long-term treatment of severely afflicted patients. The membrane supersaturation concept promises to address that gap in reliable, extended treatment by greatly reducing shear to delay and prevent thrombus formation in the device and associated extracorporeal life support (ECLS) circuit. The bubbles produced by the membrane generator are sufficiently small to completely diffuse and fully oxygenate a larger volume of blood when combined with an additional deoxygenated blood flow. Further, the technique’s high oxygen flux will offer new options for reducing size footprint and ruggedization for austere conditions given further investment and development. This will necessitate the creation of custom membrane solutions and further optimization of device channel geometries via simulation using advanced blood models such as the tensorial enhanced structural stress thixotropic-viscoelastic (t-ESSTV) constitutive model developed and discussed in this work. A characteristic feature of human blood rheology is a distinctive stress hysteresis during a ramp up in the shear rate from zero, followed by a ramp back to zero. This is a result of the fact that human blood has a longer characteristic time of shear-induced rouleaux breakdown compared to the shear aggregation of the rouleaux. We demonstrate this telltale phenomenon of human blood rheology using a triangle ramp protocol to control time-dependent changes in the shear rate. The unique hysteresis data are then used along with steady state data to fit parameters of a recently published thixo-elasto-viscoplastic rheological model, the t-ESSTV model. These best-fit parameter values from the hysteresis ramps are then used to predict step-up/step-down changes in shear rate, small amplitude oscillatory shear, uni-directional large amplitude oscillatory shear, and large amplitude oscillatory shear flow. Additionally, correlations between the calculated fitting parameters and physiological data are analyzed to inform the interpretation of model behavior in physical terms. The goodness of fit of the triangle ramp protocol and rheological hysteresis data is then evaluated alongside recently developed techniques to assess thixotropy via computation of hysteresis loop area. The results indicate the efficacy of the t-ESSTV model in predicting the complex characteristics of blood rheology in ways useful for future modeling of circulating flows under a variety of mechanical and biological loading conditions and for predicting and understanding rheological effects on resulting pathologies.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Conceptualizing Machine Learning for Dynamic Information Retrieval of Electronic Health Record Notes</title>
<link href="https://hdl.handle.net/1721.1/165340" rel="alternate"/>
<author>
<name>Jiang, Sharon</name>
</author>
<id>https://hdl.handle.net/1721.1/165340</id>
<updated>2026-04-07T03:05:42Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Conceptualizing Machine Learning for Dynamic Information Retrieval of Electronic Health Record Notes
Jiang, Sharon
The large amount of time clinicians spend sifting through patient notes and documenting in electronic health records (EHRs) is a leading cause of clinician burnout. By proactively and dynamically retrieving relevant notes during the documentation process, we can reduce the effort required to find relevant patient history. In this work, we conceptualize the use of EHR audit logs for machine learning as a source of supervision of note relevance in a specific clinical context, at a particular point in time. Our evaluation focuses on dynamic retrieval in the emergency department (ED), a high-acuity setting with unique patterns of information retrieval and note writing. However, our framework is general and can be applied to other clinical settings and with other data modalities (e.g., labs, medications, imaging). We apply our framework to the oncology setting to demonstrate its utility to other clinical workflows. We show that our methods can achieve an AUC of 0.963 in the ED and 0.937 in oncology when predicting which notes will be read in an individual note writing session. We additionally conduct user studies with several clinicians and find that our framework can help clinicians retrieve relevant information more efficiently.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microneedles for Easier Fish Skin Penetration and Longer Attachment</title>
<link href="https://hdl.handle.net/1721.1/165339" rel="alternate"/>
<author>
<name>Raad, Jad</name>
</author>
<id>https://hdl.handle.net/1721.1/165339</id>
<updated>2026-04-07T03:05:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Microneedles for Easier Fish Skin Penetration and Longer &#13;
Attachment
Raad, Jad
Aquaculture is the farming of aquatic animals for commercial purposes. This growing industry supplies around 50% of the world’s seafood and has reduced overfishing. However, it has also facilitated the spread of diseases between fish by growing them in close quarters, which results in poor growth and higher mortality levels. Injection vaccination is the most common way to combat this issue, but it is labor-intensive and stress-intensive on the fish. As an alternative to this method, the Marelli lab proposed using impermeable silk microneedle patches to encapsulate the medication and deliver it through diffusion to rainbow trout fry. When a microneedle patch was tested on a 7 g fry, it had difficulty penetrating the skin and only stayed attached for 10 min after injection. Consequently, it caused significant stress to the fish upon &#13;
insertion and fell short of the 4 hrs required for complete payload diffusion into the animal. This work aimed to reduce the force necessary for the needle to pierce fish skin and augment the &#13;
force needed to dislodge it, allowing for easier piercing and longer animal attachment time. Thus, the study intended to decrease the patch’s insertion force and increase its retraction force. The initial needles were cone-shaped and had an angle of 21º. To assess the effects of needle tip angle and overall shape on the forces, the new needles’ tip angle varied between 15º, 20º, and 25º, and a cylindrical base was added to them and varied between 0%, 33%, and 66% of the total needle height. The insertion and retraction forces of microneedle patches were quantified and revealed that sharper needles and needles with cylindrical bases amounting to 66 % of the total &#13;
needle height reduced the insertion force. In contrast, the retraction force was independent of both factors. The 25º 66%, 15º 33%, and 15º 0% needles displayed the lowest insertion forces and were tested on zebrafish to quantify how long they could stay attached. Preliminary tests on the live animals demonstrated that the new needles stayed attached to the fish for up to 8 hrs. This improved upon the initial Marelli lab design, which remained attached for 30 min at most. &#13;
Overall, pursuing live fish testing would allow for selecting the best-performing design and further developing it as a viable alternative to current vaccination methods.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Invertebrate-inspired Approach to Design and Manufacturing in Soft Robotics</title>
<link href="https://hdl.handle.net/1721.1/165338" rel="alternate"/>
<author>
<name>Arase, Cathleen</name>
</author>
<id>https://hdl.handle.net/1721.1/165338</id>
<updated>2026-04-07T03:05:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Invertebrate-inspired Approach to Design and Manufacturing in Soft Robotics
Arase, Cathleen
Soft robotics has many potential applications, including deep sea biological sampling, fruit picking, physical therapy, assistive devices, surgery, and other grasping tasks; however, many soft actuators lack the ability to output high force. To overcome this challenge, many soft roboticists are interested in variable stiffness actuators, but soft-rigid hybrid robots may also be helpful. In fact, many invertebrates are able to undergo large deformations and have the ability to change their stiffness. Many of these invertebrates integrate components such as spicules or ossicles, which are small bones, making the invertebrates essentially a soft-rigid hybrid system. Taking inspiration from these invertebrates, soft-rigid hybrid systems can be designed to increase the capabilities of soft actuators. Within the field of soft robotics, there are many practical problems to be overcome in the development of soft-rigid hybrid machines, including design, manufacturability, and delamination between soft and rigid components. This thesis focuses on work towards addressing these problems. The work explores invertebrates and invertebrate-inspired soft-rigid hybrid robots as a framework for understanding constraints in soft robotic systems. It then proceeds to explore manufacturing techniques for creating cast soft-rigid hybrid robots. Following this, it explores a novel method for decreasing the delamination forces between rigid overmolded components and the soft walls of actuators, and finally it concludes with steps towards creating a soft actuator that incorporates those components, as well as a comparison to a rigid example using a linkage mechanism for grasping.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a High-Throughput Cryoprotection Screening Platform for Cell Therapies</title>
<link href="https://hdl.handle.net/1721.1/165337" rel="alternate"/>
<author>
<name>Dey Barsukova, Anita</name>
</author>
<id>https://hdl.handle.net/1721.1/165337</id>
<updated>2026-04-07T03:05:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Development of a High-Throughput Cryoprotection Screening Platform for Cell Therapies
Dey Barsukova, Anita
Type 1 Diabetes is a devastating disease in which the immune system attacks insulin-producing beta cells in the pancreas, disrupting the normal blood glucose regulation mechanism and resulting in damage to major organ systems. An emerging therapy for Type 1 Diabetes involves transplanting stem cell-derived beta cell aggregates into patients, restoring normal regulation of blood glucose and eliminating the need for insulin injections. Reliable cryopreservation methods are required to meet global demand for these aggregates, but current protocols result in low cell viability post-thaw and require complex post-processing to remove the toxic cryopreservation agent (CPA) formulation before implantation. In this work, a high-throughput screening method is developed to identify a novel non-toxic CPA formulation that would enable the scale-up of this new Type 1 Diabetes treatment. The development and validation of workflow steps are presented, in addition to data from pilot experiments that execute all workflow steps.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applications of Light Reflectance Sensing in the Gastrointestinal Tract with Ingestible Devices for Disease Diagnosis</title>
<link href="https://hdl.handle.net/1721.1/165333" rel="alternate"/>
<author>
<name>Chen, Hao (Jack)</name>
</author>
<id>https://hdl.handle.net/1721.1/165333</id>
<updated>2026-04-07T03:05:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Applications of Light Reflectance Sensing in the Gastrointestinal Tract with Ingestible Devices for Disease Diagnosis
Chen, Hao (Jack)
Disease diagnosis in the gastrointestinal tract can be challenging, often requiring difficult endoscopic procedures or expensive imaging techniques. Pill-sized ingestible sensors represent an alternative method for disease diagnosis in the gastrointestinal tract that is minimally-invasive and cost-effective, thus promoting patient adherence and preventative screening of diseases. In this thesis, I investigate the design of ingestible sensors that emit light and measure light reflectance in the gastrointestinal tract for three applications: the detection of gastric mucosal contact, the diagnosis of upper gastrointestinal bleeding, and the diagnosis of small intestinal ischemia. To enable these applications, I develop arrays of LEDs and photodiodes that monitor the changes in reflectivity of the tissue and changes in color of the tissue. The sensor arrays are fabricated and assembled in ingestible form factors and validated in ex vivo and in vivo experiments with swine. The results demonstrate that the sensing of light reflectance enables accurate differentiation of gastric mucosa versus gastric lumen for the detection of mucosal contact, accurate detection of gastric bleeding even in the presence of red drinks or gastric fluid, and accurate detection of small intestinal ischemia even in the presence of bile and chyme. For the application to diagnose small intestinal ischemia, I present initial mechanical and electrical designs of an ingestible capsule system that activates in the small intestines via the dissolution of a pH-sensitive polymer, then performs duty cycling to enable ischemia detection during the entire small intestinal transit time. I aim to continue the development and validation of these ingestible sensors with the vision of providing minimally-invasive devices to enable cost-effective screening and monitoring of gastrointestinal diseases and conditions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Framework for Detection and Observation of Radiation Chemistry Species on an MR-LINAC</title>
<link href="https://hdl.handle.net/1721.1/165331" rel="alternate"/>
<author>
<name>Warner, Noah Stanley</name>
</author>
<id>https://hdl.handle.net/1721.1/165331</id>
<updated>2026-04-07T03:05:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Framework for Detection and Observation of Radiation Chemistry Species on an MR-LINAC
Warner, Noah Stanley
Radiation therapy, used in over half of cancer treatments, aims to target tumors while preserving healthy tissue. Existing techniques lack the ability to measure tissue damage during therapy, causing potential over- or under-irradiation and leading to severe side effects. Radiation inflicts DNA damage via direct and indirect mechanisms, and the extent of each is inconsistent between patients, causing differences in response to radiation. Magnetic resonance-linear accelerators (MR-linacs) are promising tools for evaluating indirect DNA damage by measuring radiation chemistry species (RCS) produced during irradiation. In this work, MRI methods were developed to observe free radical production, and radiation chemistry was modeled for select RCS scavengers and verified experimentally. These methods were then employed to measure MRI signal changes for complex combinations of RCS scavengers and radiosensitizing nanoparticles. Experimental T1 changes from radiation chemistry were used to fit the relaxivity of the superoxide free radical, and this value was assumed for all subsequent calculations. MRI T1 changes due to free radical production by radiation are presented in solutions consisting of water, 10 mM coumarin, 20 μM mito-TEMPO, 5 mM glutathione, a 20 μM mito-TEMPO and 5 mM glutathione mixture, 10 μM gold nanoparticles, and 60 μM phosphate buffered saline. Radiation chemistry simulations completed for water and 10 mM coumarin show good agreement with their respective experimental T1 changes. The largest T1 changes and rates of superoxide production were found in the 20 μM mito-TEMPO and 5 mM glutathione mixture, while the smallest were found in the 20 μM mito-TEMPO solution. The main conclusion of this work is that a framework has been developed to detect T1 changes due to the production of free radical species during imaging and irradiation on an MR-linac, with the predominant source of T1 change over time due to free radicals attributed to the production of superoxide.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single-cell dissection of mature conventional dendritic cells in the tumor microenvironment in metastatic melanoma</title>
<link href="https://hdl.handle.net/1721.1/165328" rel="alternate"/>
<author>
<name>Wang, Cassia B.</name>
</author>
<id>https://hdl.handle.net/1721.1/165328</id>
<updated>2026-04-07T03:05:40Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Single-cell dissection of mature conventional dendritic cells in the tumor microenvironment in metastatic melanoma
Wang, Cassia B.
Although immunotherapy has revolutionized cancer treatment, the response rate of metastatic melanoma to immune checkpoint inhibitors (ICI) remains at less than 50%. One determinant of response may lie in the underlying molecular mechanisms of the tumor microenvironment (TME): the tumor cells and the surrounding environment of other cell types, which play various roles in facilitating or inhibiting the progression of cancer. We were specifically interested in investigating the immunological factors driving observed clinical outcomes. Using single-cell technologies, mature conventional dendritic cells (mDCs) were identified in a cohort of metastatic melanoma samples and were present at a higher proportion in a subset of ICI anti-PD1-treated patients with better progression-free survival (PFS). Elaborating on this finding, we generalized the characterization of mDCs in metastatic melanoma by using methods to determine mDCs’ association with other subtypes found in the TME, reveal the molecular features of mDCs compared to other conventional dendritic cells (cDCs), and find differentiating factors among samples with different mDC proportions. Through computational analysis of single-cell transcriptomes and epigenomes in metastatic melanoma, we aim to uncover critical immunological features and interactions within the TME, with potential for enhancing melanoma outcomes.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Revisiting MHD Generators with HTS Magnets</title>
<link href="https://hdl.handle.net/1721.1/165327" rel="alternate"/>
<author>
<name>Clingerman, Matthew Hikaru</name>
</author>
<id>https://hdl.handle.net/1721.1/165327</id>
<updated>2026-04-07T03:05:49Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Revisiting MHD Generators with HTS Magnets
Clingerman, Matthew Hikaru
Magneto-hydrodynamic (MHD) power generators can convert thermal and kinetic energy to electrical energy without any moving mechanical parts. They have the promise of competing against typical turbo-generators in a power plant. The advent of high temperature superconducting (HTS) magnets can give MHD generators the edge over other generators, as their efficiency increases with the magnetic field strength. A robust mathematical model is derived to account for the plasma physics, fluid dynamics, and magneto-hydrodynamics involved with directing and harnessing the flow of an ionized gas. The resulting analytical model is computationally solved and then analyzed.

It is clear that HTS magnets greatly benefit MHD generators. For a coal-fired power plant, the enthalpy ratios between the input and output of the generator surpass 50%. In other words, over half of the thermal energy produced by the power plant is converted to electricity by the MHD generator. The remaining fraction of energy is directed to a bottoming cycle for additional energy conversion. In the end, modest estimates put the overall efficiency of this system over 65%, compared to less than 45% for the most advanced current coal power plants.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oxide coarsening and agglomeration during melt-based additive manufacturing of dispersion-strengthened alloys</title>
<link href="https://hdl.handle.net/1721.1/165321" rel="alternate"/>
<author>
<name>Hou, Wenyuan (Roger)</name>
</author>
<id>https://hdl.handle.net/1721.1/165321</id>
<updated>2026-04-07T03:05:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Oxide coarsening and agglomeration during melt-based additive manufacturing of dispersion-strengthened alloys
Hou, Wenyuan (Roger)
Dispersion-strengthened alloys densified with laser powder bed fusion, a melt-based additive manufacturing technique, have coarser dispersoids, lower dispersoid number densities, and a greater tendency to form slag compared to conventional wrought dispersion-strengthened alloys. These differences degrade creep and fatigue resistance, and mitigating their extent is critical to printing high-performance components for demanding high-temperature structural applications. In this work, experiments and modeling were used to assess how printing parameters, alloy chemistry, and powder feedstock collectively affect dispersoid evolution and slag formation. Laser powder bed fusion parameter studies were used to assess these effects in Ni-20Cr-Y₂O₃ feedstock produced via resonant acoustic mixing, then consolidated with systematic variations in laser parameters (power, speed), Y₂O₃ concentration, and Al content. Dispersoid structure was subsequently characterized using small angle neutron scattering. The finest dispersion achieved among fully dense (&gt;99.5% relative density) specimens has a mean dispersoid diameter of 21 nm and a number density of 230 μm⁻³. Dispersoid diameter was shown to decrease with the following adjustments: decreasing laser power, increasing scan speed, decreasing Y₂O₃ concentration, and keeping Al content below 0.3 wt%. Model predictions for dispersoid diameter were consistent with experimental values, and several key factors which influence the evolution of dispersoids were identified: convection-influenced thermal excursion, Y₂O₃ solubility, reaction with Al, nucleation, and diffusion-driven growth. The model also considers oxide dissolution over multiple melt cycles to establish bounds for slag-free printing of ODS alloys, showing a tradeoff between build rate and the quality of the oxide feedstock.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of lesion preparation-induced calcium fractures in vascular intervention for atherosclerotic disease: in silico assessment</title>
<link href="https://hdl.handle.net/1721.1/165318" rel="alternate"/>
<author>
<name>Sogbadji, Jonas</name>
</author>
<id>https://hdl.handle.net/1721.1/165318</id>
<updated>2026-04-07T03:05:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Impact of lesion preparation-induced calcium fractures in vascular intervention&#13;
for atherosclerotic disease: in silico assessment
Sogbadji, Jonas
Atherosclerosis is the most common form of obstructive vascular disease and is the predominant cause of mortality world-wide. Endovascular interventions like balloon angioplasty and stent implantation have dominated as therapies with tremendous impact and yet are least effective in most severe disease – especially those with heavily calcified lesions.&#13;
&#13;
Intravascular lithotripsy (IVL) has been proposed to “prepare” lesions and optimize endovascular intervention with the idea of removing and/or modifying lesions resistive stiffness so as to make balloon or stent placement more effective. Despite clinical enthusiasm, there remains a lack of understanding as to how this occurs, and which lesions would be most amenable to and most affected by IVL.&#13;
&#13;
The range and extent of lesions are substantial presenting a formidable challenge in managing their modification. This complexity hampers the extrapolation of findings from both clinical and preclinical models. In silico models offer a means by which to examine diverse lesion morphologies and a range of lesion modifications to address these deficiencies, and in particular to understand if there is a correlation between calcium morphology alteration and improvement of stenting outcomes. We build a computational platform to connect stenting outcomes to IVL induced calcium modification. Three models were inspired by clinical optical coherence tomography image analyses and a stenting procedure was simulated for a number of variations within each model. Results show that expansion of stents and treated arteries rose with the volume of tissue affected and excised. For one particular model, stent expansion reached a local maximum. 3 In silico models provide a valuable perspective for considering complex vascular interventions – not only in simulating effects that are challenging to recapitulate in preclinical models but in helping develop a tool that can predict susceptible candidate lesions and help determine the ideal extent of lesion modification to optimize overall effect.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Certifiable Cooperative Localization for Underwater Navigation</title>
<link href="https://hdl.handle.net/1721.1/165275" rel="alternate"/>
<author>
<name>Morrison, John P.</name>
</author>
<id>https://hdl.handle.net/1721.1/165275</id>
<updated>2026-03-28T03:03:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Certifiable Cooperative Localization for Underwater Navigation
Morrison, John P.
Accurate underwater positioning remains one of the most significant obstacles to autonomous underwater vehicle (AUV) operations. Satellite-based navigation signals are unavailable underwater, so AUVs must dead-reckon using inertial sensors coupled with velocity or heading references. Due to random noise and variable biases in inertial sensor measurements, the AUV’s position uncertainty grows steadily over the course of the mission, but can be reduced through range measurements to fixed or mobile references. The associated range-aided simultaneous localization and mapping (SLAM) problem is particularly challenging to solve with existing optimization methods. Individual range measurements provide limited geometric constraints on vehicle position and are subject to non-linear errors due to multi-path propagation. Attempts to optimize typical range-aided SLAM cost functions often return solutions which represent local, rather than global, minima, resulting in unpredictable vehicle behavior when used for closed-loop navigation. This thesis applies a recently developed certifiable optimization algorithm, Certifiably Correct Range-aided SLAM (CORA), to the problem of cooperative localization between AUVs. CORA leverages aspects of the range-aided SLAM problem structure to find solutions which can be certified to be globally optimal. This method is integrated into a novel cooperative localization scheme, in which each vehicle maintains a locally held, periodically updated copy of the centralized, multi-agent factor graph. The cooperative localization framework presented here leverages acoustic modems for both range measurement and the sharing of sub-graphs through inter-vehicle communication. This approach was validated through extensive field trials using two modular, low-cost Spurdog AUVs equipped with WHOI Micromodem2 payloads. Results from single and multi-vehicle deployments demonstrated that CORA substantially outperforms existing solvers when faced with poor landmark initialization and reduced observability as a result of real-world communication failures. The results presented here demonstrate the added value of coupling certifiable estimation with cooperative localization for multi-AUV localization problems, particularly in challenging, GPS-denied environments.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing and Mitigating Small-Diameter Tool Wear in Nickel-Based Superalloy Machining</title>
<link href="https://hdl.handle.net/1721.1/165185" rel="alternate"/>
<author>
<name>Brush, Alexander Sparry</name>
</author>
<id>https://hdl.handle.net/1721.1/165185</id>
<updated>2026-03-17T03:06:39Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Characterizing and Mitigating Small-Diameter Tool Wear in Nickel-Based Superalloy Machining
Brush, Alexander Sparry
This thesis investigates tool failure in the micromachining of single-crystalline René N4 turbine blades coated with ceramic thermal barrier layers. The work done in this thesis was carried out through a partnership between the Massachusetts Institute of Technology (MIT) and GE Vernova. This thesis is complemented by the thesis written by Luke Placzek. Together these works offer a comprehensive case study in process analysis and manufacturing optimization.
This thesis begins with groundwork to document tool failure mechanisms and frequencies through photographic analysis. This was done alongside a study of historical data to analyze tool breakage frequency in the context of the turbine blade. Based on these insights, an Analysis of Variance (ANOVA) test followed by a Tukey’s Honestly Significant Difference (HSD) test identified statistically significant differences in tool breakage rates across machines and rows. A detailed study of tool wear progression was conducted to better understand how small-diameter endmills wear when machining the nickel-based superalloy René N4. Utilizing all these findings, an updated tool path was created to optimize tool life.
This work lays the foundation for an improved machining strategy to reduce tool breakage in manufacturing turbine blades. Estimations show that the refined CAM strategies may reduce tool breakage by roughly 33 percent. Preliminary models estimate the implementation of the suggested improvements will save GE Vernova 2.5 million dollars per year.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A cross-industry analysis using Q-Methodology for streamlining engineering workflow</title>
<link href="https://hdl.handle.net/1721.1/165181" rel="alternate"/>
<author>
<name>Gupta, Harshit</name>
</author>
<id>https://hdl.handle.net/1721.1/165181</id>
<updated>2026-03-17T03:06:48Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A cross-industry analysis using Q-Methodology for streamlining engineering workflow
Gupta, Harshit
Non-value-added time arising from disconnected systems, legacy architecture, repeated iterations, product version mismatch, and manual processes remains one of the most persistent inefficiencies in modern design and manufacturing organizations, and one that can be resolved by leveraging digital technology. Through this thesis, a framework is laid out to understand and summarize the gaps among the various departments of an organization from the standpoint of information flow across the complete manufacturing workflow. The goal is to find gaps and pain points in the adoption of the ’Digital Thread’ as organizations work toward becoming software-driven enterprises, identifying opportunities to automate and optimize processes and to streamline information across departments. As a snapshot, the project investigates how digital transformation can bridge the gap between design and manufacturing, with a focus on concurrent engineering in high-mix, low-volume and high-volume, low-mix production environments. The research uses Q-methodology to understand how the perception of digital tool use varies across industries and organizations, especially between vertically integrated and supplier-dependent enterprises. Evaluation is done across different roles in an organization, examining how executives and strategy teams, engineers, metrology specialists, and shop floor managers perceive current workflows, bottlenecks, and opportunities for improvement. The analysis reveals differences and similarities in interests and opinions to map the landscape of current and growing needs across different industries and product portfolios. The results of the thesis can be used by participating teams to re-design workflow, communication, and process plans and to add flexibility through automation to the existing process. The conclusion will also help PTC understand the capabilities their software is missing that can be integrated in future iterations to serve their customers better for faster and better product development. The shift towards software-driven manufacturing is the need of the hour with increasing stress on re-industrialization, and this thesis contributes to the current evolving discussion. The thesis ends with a discussion on potential avenues for exploration gathered from participants through qualitative interviews that can be used as a roadmap to get a sense of future directions of the dynamic industry.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>User-Responsive Solutions for Cognitive Load Reduction in CAD Platforms</title>
<link href="https://hdl.handle.net/1721.1/165180" rel="alternate"/>
<author>
<name>Bai, Jane</name>
</author>
<id>https://hdl.handle.net/1721.1/165180</id>
<updated>2026-03-17T03:06:37Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">User-Responsive Solutions for Cognitive Load Reduction in CAD Platforms
Bai, Jane
Amid the evolution of cloud-based Computer-Aided Design (CAD) platforms, traditional educational approaches fail to address the diversity of cognitive obstacles that users across expertise levels and learning behaviors face. This thesis investigates whether behavior-adaptive CAD tools can reduce friction as hypothesized by Cognitive Load Theory (CLT) while enhancing skill development in modern engineering environments. A two-phase mixed-methods approach was employed that combined large-scale behavioral persona identification with controlled user testing. TF-IDF and PCA on the results of an MIT-wide survey identified four distinct behavioral archetypes corresponding to unique tool usage patterns and learning preferences independent of technical proficiency. A/B testing of three behavior-adaptive custom tools, which addressed workflow optimization, parametric knowledge retention, and context-aware passive modelling guidance, was done with novice and advanced users. Command logging captured behavioral features, and analysis discovered significant cognitive load reduction, improved workflow efficiency, and better-retained skill development. NLP of post-session survey responses revealed deeper conceptual engagement. From these results, a three-stage model progressing from friction reduction through behavioral analytics to continuous personalization optimization was developed to inform business applications. The findings demonstrate that effective CAD education requires addressing individual behavioral patterns rather than traditionally uniform skill-based approaches. Behavior-adaptive tools enhance learning pathways and workflows by preserving user agency over creative and parametric decisions during modelling while reducing cognitive friction.

Keywords: Computer-Aided Design (CAD), Cognitive Load Theory (CLT), Behavioral Analytics, Behavior-Adaptive Learning, Engineering Education
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Smart Manufacturing of Desktop Fiber Extrusion Devices (FrED): Design Optimization and Digital Factory Implementation</title>
<link href="https://hdl.handle.net/1721.1/165178" rel="alternate"/>
<author>
<name>Ng, Yong</name>
</author>
<id>https://hdl.handle.net/1721.1/165178</id>
<updated>2026-03-17T03:06:49Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Smart Manufacturing of Desktop Fiber Extrusion Devices (FrED): Design Optimization and Digital Factory Implementation
Ng, Yong
This thesis presents the design and implementation of FrED Factory, a lab-scale, digitally integrated smart manufacturing environment developed to support scalable production and experiential learning in advanced manufacturing. It is built around the Fiber Extrusion Device (FrED), a desktop analog to an industrial optical fiber draw tower. The project addresses both physical manufacturability and digital system coordination, aiming to simulate real-world Industry 4.0 practices in an educational setting.
To ensure repeatable and efficient production, key design components were optimized through tolerance analysis of laser-cut acrylic frames. Standard Operating Procedures (SOPs) were developed to guide consistent execution of processes including 3D printing, laser cutting, procurement, and assembly. A structured Bill of Materials (BOM) was implemented to manage subassemblies and support real-time inventory tracking. On the digital front, the FrED Factory leverages Tulip, a no-code Manufacturing Execution System (MES), to deploy dynamic work instructions, manage work orders, and monitor shopfloor performance. Tulip’s EdgeMC hardware was used to integrate Internet of Things (IoT) devices for machine status tracking. MQTT protocols were applied to capture 3D printer activity via OctoPrint, and current sensors were deployed to automatically log Quality Control (QC) station usage.
The result is a modular, scalable, and data-rich smart factory environment that enables students to gain hands-on experience with modern manufacturing systems. For educators, the FrED Factory provides a tangible platform for teaching digital manufacturing, while industry professionals can view it as a blueprint for applying lean, connected workflows in small-scale, high-mix production environments.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Materials for Ion Transport Management in Anion Exchange Membrane Electrolyzers</title>
<link href="https://hdl.handle.net/1721.1/165177" rel="alternate"/>
<author>
<name>Aamer, Zara</name>
</author>
<id>https://hdl.handle.net/1721.1/165177</id>
<updated>2026-03-17T03:06:47Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Integrated Materials for Ion Transport Management in Anion Exchange Membrane Electrolyzers
Aamer, Zara
Electrochemical CO₂ separation systems leveraging anion exchange membranes (AEMs) offer significant energetic advantages over traditional bipolar membrane electrodialysis (BPMED), but suffer from hydroxide crossover, which reduces current efficiency (CE) and system performance. This work explores the transport dynamics of carbonate and hydroxide ions in AEM systems and introduces a hybrid PES-AEM bilayer membrane architecture to mitigate hydroxide crossover while preserving sufficient CO₂ recovery. We demonstrate that the bilayer system achieves a reduced relative transport factor (R = 1.4) and enables up to 3.8x improvement in CE compared to conventional AEM systems at realistic capture conditions. Further analysis reveals that transport properties in the least conductive domain of a multi-membrane system dominate overall behavior, allowing non-selective, low-conductivity materials such as porous PES to reduce hydroxide crossover effects. This study outlines key membrane material parameters influencing relative ionic transport and highlights the potential of hybrid architectures to unlock energy-efficient CO₂ electrochemical regeneration for direct air capture (DAC) integration.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dimensioning Defects with Monocular Vision in Automated Optical Inspection</title>
<link href="https://hdl.handle.net/1721.1/165176" rel="alternate"/>
<author>
<name>Boyd, Logan</name>
</author>
<id>https://hdl.handle.net/1721.1/165176</id>
<updated>2026-03-17T03:06:45Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Dimensioning Defects with Monocular Vision in Automated Optical Inspection
Boyd, Logan
Automated optical inspection (AOI) systems are common tools for quality control in industrial manufacturing. AOI systems use robotic systems to load components, take images, and detect defects, often also characterizing the defects by size or class. Among various approaches to this machine vision, monocular systems are popular because they are cheap and simple to integrate while offering intuitive visualization. However, monocular vision alone lacks depth resolution and struggles to accurately dimension defects on 3D surfaces, especially if the imaged component’s pose is ambiguous. This paper presents a transparent, open-sourced, end-to-end image processing pipeline for dimensioning surface defects on industrial components using RGB images. The pipeline estimates component pose through a 2D-3D correspondence, segments defects with machine learning or image comparison techniques, then projects the component’s CAD mesh into the image to calculate the lengths of segmented defect instances. The pipeline was developed on a 3D-printed test object and demonstrated with each of three segmentation methods, yielding defect dimensions with average error between 0.6 and 1.2 mm.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Manufacturing Readiness Reviews and Fiber Extrusion Processes: A Two-Sided Approach to Product Maturity in Optics and Sensing</title>
<link href="https://hdl.handle.net/1721.1/165175" rel="alternate"/>
<author>
<name>Groll, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/165175</id>
<updated>2026-03-17T03:06:13Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Advancing Manufacturing Readiness Reviews and Fiber Extrusion Processes: A Two-Sided Approach to Product Maturity in Optics and Sensing
Groll, Matthew
This thesis engages with two important facets of the manufacturing discipline. The first half reflects on the ongoing efforts of the Charles Stark Draper Laboratory towards growing capabilities in production, with an emphasis on standardization for responsible organizational scaling. Specific work is presented towards advancing the Manufacturing Readiness Review (MRR) process, informed by staff interviews, in the form of recommended approaches and template materials for technology leads to employ during future manufacturing review cycles. The second half covers more hands-on, active product and process development work for MIT’s Fiber Extrusion Device (FrED). Findings relate to both improving the production process of the device and to the capabilities and observations of the extruded fiber. Inventory management recommendations are detailed for different production scenarios, and successful extrusion of acrylic, novel to the currently studied capabilities of the FrED, is demonstrated. Observations on the resulting fiber’s optical properties are characterized, along with a repeatable approach for doing so. While distinct, together these topics provide holistic insights into moving from concept to production.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaled Deployments of Seismic Penetrators to Measure Stability of Antarctic Ice Shelves</title>
<link href="https://hdl.handle.net/1721.1/165174" rel="alternate"/>
<author>
<name>Steen, Parker</name>
</author>
<id>https://hdl.handle.net/1721.1/165174</id>
<updated>2026-03-17T03:06:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Scaled Deployments of Seismic Penetrators to Measure&#13;
Stability of Antarctic Ice Shelves
Steen, Parker
Ice shelves play a critical role in regulating the flow of Antarctic ice sheets and thereby global sea level rise. Recent ice shelf collapses are poorly understood due to a lack of seismic measurements of an ice shelf’s response to extreme environmental forces, such as ocean tides and tsunamis. Instrumenting ice shelves is a challenge due to transportation limitations, unpredictable weather, and dangerous crevassing. Air-dropped seismic penetrators have been developed in the Seismogeodetic Ice Penetrator (SGIP) project to alleviate manual installation pain points and access remote locations. The design of two SGIPs dropped into the Ross Ice Shelf in 2025 is reconsidered to determine how it must and could evolve to deploy seismic sensors at the scale necessary to achieve science goals. The power budget for a remotely dropped penetrator that transmits all recorded data is determined. Power architectures with solar panels or a wind turbine are optimized to minimize total penetrator height, reducing it relative to a primary-battery design by 23% with Iridium and 29% with Starlink. A Barrowman aerodynamic model is evaluated against empirical results. The model is calibrated and used to consider penetrator drops from fixed-wing aircraft, with results suggesting that horizontal belly drops are optimal but that vertical aft or side drops are possible. A unit cost curve is found for scaled production volumes. Finally, scaled deployments with LC-130H and Basler aircraft are considered to optimize the aircraft cost of seismic data, finding both aircraft to be viable, but the LC-130H more cost-effective.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reachability Prediction and Optimal Path Planning for Autonomous Ocean Vehicles</title>
<link href="https://hdl.handle.net/1721.1/165173" rel="alternate"/>
<author>
<name>Mule, Ellen M.</name>
</author>
<id>https://hdl.handle.net/1721.1/165173</id>
<updated>2026-03-17T03:06:17Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Reachability Prediction and Optimal Path Planning for Autonomous Ocean Vehicles
Mule, Ellen M.
For intelligent ocean exploration and sustainable ocean utilization, the need for smart autonomous underwater vehicles (AUVs), surface craft, and small aircraft is rapidly increasing. Creating time-optimal navigation routes for these vehicles has wide-ranging applications, including ocean data collection, transportation and distribution of goods, naval operations, search and rescue, detecting marine pollution, ocean cleanup, conservation, and solar-wind-wave energy harvesting. In this thesis, we employ the Massachusetts Institute of Technology – Multidisciplinary Simulation, Estimation, and Assimilation Systems (MIT-MSEAS) time-optimal and hazard-time-optimal path planning theory and schemes based on exact Hamilton–Jacobi partial differential equations (PDEs) and Level Set methods. We apply this methodology to ocean gliders and floats during several real-time sea experiments—the Mini-Adaptive Sampling Test Run (MASTR) and Grand Adaptive Sampling Experiment (GRASE) in the Gulf of Mexico, and the New England Seamounts Acoustic (NESMA) experiment in the North Atlantic. Using the MIT-MSEAS multi-resolution ocean modeling and data assimilation system to provide deterministic and probabilistic ocean current forecasts, we compute time-reachable sets as well as time-optimal paths for a variety of ocean vehicle missions. The governing differential equations for reachability analysis and time-optimal path planning were numerically integrated in real time, forced by our large-ensemble ocean forecasts. We illustrated deterministic and probabilistic forward reachability analyses, glider recovery planning, time-optimal routing for gliders in distress, and planning of future glider and float deployments. Results show that the actual paths of gliders were contained within our reachable set forecasts and in accord with the dynamic reachability fronts. These forecasts were successfully employed for glider recovery and informed strategic decisions for future missions. Additionally, we demonstrated the ability to incorporate risk such as severe weather or vessel traffic into hazard-time-optimal path planning for simulated collaborative air-sea drone missions. Overall, the integration of data-driven multi-resolution ocean modeling with exact reachability theory and numerical schemes enables principled, operationally relevant path planning for diverse ocean missions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analytical Model for Orbital Motion Under J₂</title>
<link href="https://hdl.handle.net/1721.1/165172" rel="alternate"/>
<author>
<name>Nedungadi Martinod, Marco Antonio</name>
</author>
<id>https://hdl.handle.net/1721.1/165172</id>
<updated>2026-03-17T03:06:13Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Analytical Model for Orbital Motion Under J₂
Nedungadi Martinod, Marco Antonio
As the number of operational satellites and debris objects in Earth orbit continues to accelerate, the ability to predict orbital trajectories with both accuracy and efficiency has become an indispensable capability. Numerical integration of the full Cartesian equations of motion offers generality but at high computational cost, while traditional analytical theories are efficient but often restricted by singularities in the classical orbital element set. Analytical formulations expressed in nonsingular elements can combine efficiency with global validity, and provide physical insight into the structure of orbital perturbations.

This thesis develops a globally valid analytical model for orbital motion under the Earth's second zonal harmonic (J₂) in the modified equinoctial element (MEE) framework. The MEE set eliminates the singularities present in circular and equatorial orbits, allowing uniform treatment across all regimes. Two principal contributions are made. First, explicit first-order mean equations of motion are derived using a generalized averaging method applied to the J₂ disturbing function. The resulting system reduces to two planar rotations of the eccentricity and inclination vectors with constant rates, together with a secular drift in the true longitude. These equations reproduce Brouwer's classical secular results when mapped back to Keplerian elements, while retaining the nonsingular advantages of the MEE formulation. Second, closed-form mean–osculating transformations are obtained, enabling consistent recovery of short-period variations from the mean solution. These transformations allow a dual representation: efficient mean propagation combined with reconstruction of instantaneous orbital states.

The analytical model is validated against high-fidelity Cartesian propagation across a set of representative orbit classes, including LEO, GEO, GTO, and Molniya orbits. In all cases, the mean element evolution predicted by the MEE-based theory shows close agreement with numerical integration. Over week-long propagation intervals, relative position errors remain small, while computational cost is substantially reduced compared to Cowell integration. These results establish the MEE-based analytical framework as both theoretically rigorous and practically effective, providing a foundation for accurate, efficient, and globally valid orbit prediction.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization of Transonic Fan Response to Inlet Distortion</title>
<link href="https://hdl.handle.net/1721.1/165171" rel="alternate"/>
<author>
<name>Levy, Benjamin Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/165171</id>
<updated>2026-03-17T03:06:04Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Characterization of Transonic Fan Response to Inlet Distortion
Levy, Benjamin Adam
This thesis seeks to characterize transonic fan response to three-dimensional inlet flow distortion, which is a challenge of business jet propulsor-airframe integration. The specific context is ensuring fan operability in crosswind while retaining high cruise efficiency. A body force approach is used with a pre-processing workflow that simplifies the inputs of the body force model. This enables rapid assessment of changes to the fan work distribution, a step towards achieving potential benefits of fan-inlet co-optimization. The workflow is used to explore the sensitivities of fan response to an applied non-uniformity, to fan work distribution, and to bulk swirl. Incidence, as a metric for evaluating distortion, is found to offer an improved assessment of fan operability trends compared to metrics that only depend on the stagnation pressure distribution. Such metrics are not found to capture sensitivities of fan response to increasing circumferential extent of the stagnation pressure defect. Sensitivity of the local response of the fan in the low stagnation pressure region to the radial work distribution is dominated by effects seen in 2D distortions: steeper local pressure ratio characteristics increase the attenuation of the stagnation pressure non-uniformity. However, such designs generate more severe stagnation pressure non-uniformities downstream of the rotor at other spanwise positions due to radial variations in the distortion pattern and rotor pressure rise. The effect of bulk swirl on the characteristic slope produces coupling of stagnation pressure and swirl, where combined counter-swirl and stagnation pressure distortion is found to produce more severe fan operability penalties than the superposition of each separate effect. The characterization of inlet distortion response contributed by this thesis is a necessary step in optimizing the propulsor inlet design with constraints on off-design operability.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Sketch to CAD Code: Multimodal AI for Controllable Design Generation</title>
<link href="https://hdl.handle.net/1721.1/165165" rel="alternate"/>
<author>
<name>Man, King Yiu Brandon</name>
</author>
<id>https://hdl.handle.net/1721.1/165165</id>
<updated>2026-03-17T03:06:38Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">From Sketch to CAD Code: Multimodal AI for Controllable Design Generation
Man, King Yiu Brandon
Generative artificial intelligence (AI) has demonstrated transformative potential in creative and technical fields, yet its application to engineering design remains underdeveloped. Unlike domains where AI outputs can be directly consumed, engineering design demands integration across heterogeneous tools, multimodal data, and highly structured workflows. This thesis develops and evaluates AI-driven approaches for enabling copilot-style systems that assist engineers throughout the early stages of design, where decisions have the greatest impact on cost and performance. We identify and address three central challenges: the need for user control over abstract generative processes, the scarcity of high-quality engineering datasets, and the complexity of integrating AI into diverse design toolchains. Our first contribution is Sketch2Prototype, a multi-stage framework that transforms conceptual sketches into text, images, and manufacturable 3D meshes. Evaluated on a dataset of 1,087 sketches, the system produces more diverse and manufacturable prototypes than direct sketch-to-3D methods, while enabling iterative refinement through a controllable intermediate text stage. Our second contribution is VideoCAD, a synthetic dataset of over 41,000 annotated CAD modeling videos—up to twenty times longer in action horizon than prior UI agent datasets—capturing pixel-precise, long-horizon interactions in a professional CAD environment. We benchmark state-of-the-art behavior cloning models and large language models (LLMs) on VideoCAD, and introduce VideoCADFORMER, a transformer-based architecture that achieves superior performance on long-horizon CAD action prediction. Finally, we present VisionCAD, a fine-tuned LLM that constructs CAD generation code from point cloud and image data, trained with a dataset of over two million image, point cloud, and CADQuery triplets. Together, these contributions demonstrate that generative AI, multimodal learning, and large-scale dataset generation can be combined to accelerate design exploration, improve manufacturability, and integrate seamlessly into engineering workflows. By addressing both the data and workflow bottlenecks, this work lays the foundation for AI copilots that enhance productivity, creativity, and precision in engineering design.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>FrED and the FrED Factory: The MIT Approach to Designing a Smart Learning Factory</title>
<link href="https://hdl.handle.net/1721.1/165164" rel="alternate"/>
<author>
<name>Bradley, Russel</name>
</author>
<id>https://hdl.handle.net/1721.1/165164</id>
<updated>2026-03-17T03:06:07Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">FrED and the FrED Factory:&#13;
The MIT Approach to Designing a Smart Learning Factory
Bradley, Russel
It’s hard to learn manufacturing without being in a factory. Existing manufacturing education approaches—including educational kits, machine shops, and learning factories—often fail to capture the natural variability and the flow of products, processes, and people inherent in volume production, which drive the dynamics of real manufacturing systems. This thesis describes the design and development of the learning factory at MIT, known as the FrED Factory. The FrED Factory is a fully operational factory on campus that produces and delivers a manufacturing education kit, the Fiber Extrusion Device (FrED), while simultaneously delivering education. The combination of a learning factory producing learning products creates a unique ecosystem of manufacturing education. The FrED and FrED Factory ecosystem has provided learners with authentic learning experiences. Project-based learning experiences are delivered through groups of students working to develop FrED and the FrED Factory. The products of this development, the learning device and learning factory, amplify impact by serving as platforms for manufacturing education. The FrED and FrED Factory initiative has reached learners across K-12, undergraduate, graduate, and professional education at MIT and beyond.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing CMOS Devices for Use in Future X-Ray Astrophysics Instruments</title>
<link href="https://hdl.handle.net/1721.1/165162" rel="alternate"/>
<author>
<name>Lupo, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/165162</id>
<updated>2026-03-17T03:06:05Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Characterizing CMOS Devices for Use in Future X-Ray Astrophysics Instruments
Lupo, Jonathan
Complementary Metal-Oxide-Semiconductor (CMOS) detectors and Charge-Coupled Devices (CCDs) are the two primary imaging technologies used in optical and X-ray detection. Both rely on pixel arrays that convert incoming photons into electrical charge but differ in readout architecture: CCDs shift charge across the array to a common output node, while CMOS devices incorporate amplifiers and readout circuitry at each pixel. CCDs have long been favored in astronomy for their high sensitivity, low noise, and deep depletion regions that enhance detection of higher-energy X-rays. However, they suffer from slow readout, high power demands, and susceptibility to radiation-induced charge transfer losses. CMOS detectors, in contrast, offer fast readout, low power consumption, and increased resilience in radiation environments, while enabling on-chip processing and high time resolution. These advantages make CMOS increasingly attractive for astrophysical applications, particularly in capturing faint, transient, or rapidly varying X-ray phenomena. This work evaluates the potential of two modified commercial CMOS detectors from the uEye SE series, based on the Sony IMX226 and IMX662 sensors, for low- to intermediate-energy X-ray astrophysics. To enhance sensitivity, the optical windows were removed and, for the IMX226, the microlens array was eliminated to reduce absorption at low energies. The detectors were characterized at the MIT Kavli Institute X-ray Detector Lab, with performance evaluated in terms of X-ray response, readout noise, pixel-to-pixel gain variation, linearity, dark current, and contributions to overall energy resolution. Detector testing used X-ray emission lines from Polonium-210 and Iron-55 at 277 eV (C), 677 eV (F), 5.9 keV (Mn Kα), and 6.4 keV (Mn Kβ). Measurements were performed in a vacuum chamber to minimize absorption, with optical linearity tested separately on an optical assembly setup using an integrating sphere. Both detectors showed strong potential as low-cost X-ray sensors, with energy resolutions approaching theoretical limits across key emission lines. Readout noise was low (2.28 e⁻ for the IMX226, 3.54 e⁻ for the IMX662), gain variation was minimal when measured (≤0.32%), and linearity remained stable, with errors below 0.6% across high- and low-energy regimes. Dark current was negligible for the IMX662 and modest for the IMX226 (0.57 e⁻/pixel/sec). While readout noise and gain variation explain much of the measured energy resolution, additional unaccounted noise was observed, indicating that further optimization is required.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vision-Language Models for Engineering Design: From Technical Documentation Benchmarking to CAD Generation</title>
<link href="https://hdl.handle.net/1721.1/165151" rel="alternate"/>
<author>
<name>Doris, Annie Clare</name>
</author>
<id>https://hdl.handle.net/1721.1/165151</id>
<updated>2026-03-17T03:05:57Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Vision-Language Models for Engineering Design:&#13;
From Technical Documentation Benchmarking to CAD&#13;
Generation
Doris, Annie Clare
Engineering product development is slowed by two bottlenecks: interpreting technical requirements and producing accurate, editable computer-aided design (CAD) models. This thesis evaluates and advances vision-language models (VLMs) – large-scale foundation models that process both text and images – to support engineers in these time-consuming tasks. While benchmarks exist for evaluating VLM performance in areas such as medical imaging, optical character recognition, and robotics, benchmarks for engineering design tasks remain scarce. To remedy this, we develop DesignQA, a benchmark that enables us to rigorously quantify VLMs’ abilities to understand and apply engineering requirements in technical documentation. Developed with a focus on real-world engineering challenges, DesignQA uniquely combines visual data – including textual design requirements, CAD images, and engineering drawings – derived from the Formula SAE student competition. The benchmark features automatic evaluation metrics and is divided into segments – Rule Comprehension, Rule Compliance, and Rule Extraction – based on tasks that engineers perform when designing according to requirements. We evaluate state-of-the-art models (at the time of writing) such as GPT-4o, GPT-4, Claude-Opus, Gemini-1.0, and LLaVA-1.5 against the benchmark. Our study uncovers gaps in VLMs’ abilities to interpret complex engineering documentation, including the inability to reliably retrieve relevant rules from the Formula SAE documentation and challenges in analyzing engineering drawings. These findings underscore the need for VLMs that can better handle the multifaceted questions characteristic of design according to technical documentation. After establishing an engineering-design-specific benchmark, we investigate whether additional training can improve VLM performance on engineering tasks. In particular, we address CAD generation from images, a problem motivated by scenarios such as sketch-to-CAD workflows, recovery of lost files, or cases where only an image is available due to privacy concerns. While recent developments in AI-driven CAD generation show promise, existing models are limited by incomplete representations of CAD operations, an inability to generalize to real-world images, and low output accuracy. We develop CAD-Coder, an open-source VLM fine-tuned to generate CadQuery code directly from images, trained on GenCAD-Code (163,671 image–code pairs). On a 100-sample test subset, CAD-Coder outperforms strong VLM baselines (e.g., GPT-4.5, Qwen2.5-VL-72B), achieving a 100% valid-syntax rate and the highest 3D-solid similarity. It also shows early generalization, producing CAD code from real photographs and executing operations (e.g., filleting) not seen during fine-tuning. The performance and adaptability of CAD-Coder highlight the potential of VLMs fine-tuned on design-specific tasks to streamline workflows for engineers. We conclude with directions for design-specific VLMs, including synthetic-data pipelines to improve dataset coverage and reinforcement-learning strategies that exploit objective geometric rewards. Together, DesignQA and CAD-Coder indicate a practical path toward VLM assistants that accelerate requirement-aware engineering design and image-to-CAD workflows. All code, data, and trained models are released publicly to support reproducibility and future research.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI-Augmented CAD Onboarding: A Personalized Approach to Reducing Learning Friction</title>
<link href="https://hdl.handle.net/1721.1/165144" rel="alternate"/>
<author>
<name>Aiouche, Nada</name>
</author>
<id>https://hdl.handle.net/1721.1/165144</id>
<updated>2026-03-17T03:06:47Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">AI-Augmented CAD Onboarding: A Personalized Approach to Reducing Learning Friction
Aiouche, Nada
Autodesk Fusion is a leading cloud‑based CAD platform, yet new users often face steep learning curves due to scattered resources, inconsistent guidance, and a lack of personalization. This thesis addresses these challenges and proposes an adaptive AI assistant, embedded within Fusion, as a potential solution to streamline onboarding, reduce search time, surface hidden tools, and deliver guidance tailored to the user’s learning style. By centralizing learning support within the design environment, the proposed system aims to reduce cognitive load and keep users focused on productive work rather than on searching for help. Based on surveys, interviews, and controlled user testing comparing tasks with and without simulated AI support, the study suggests that personalized, context‑aware assistance can improve task flow, reduce frustration, and provide particular benefits for beginners. Findings indicate that such a solution not only accelerates skill acquisition but also supports long‑term engagement by making the early stages of learning more intuitive and less discouraging. Finally, this thesis outlines practical next steps Autodesk can take to develop, integrate, and validate such a system to realize its full potential in accelerating adoption, improving retention, and enhancing the overall user experience.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Implementation of a Low-Cost Bioreactor System for Synechocystis sp. PCC 6803: Integrated Cultivation, Lysis, and Filtration for Sustainable Glucose Harvesting</title>
<link href="https://hdl.handle.net/1721.1/165143" rel="alternate"/>
<author>
<name>Baho, Ingie</name>
</author>
<id>https://hdl.handle.net/1721.1/165143</id>
<updated>2026-03-17T03:06:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Design and Implementation of a Low-Cost Bioreactor System for Synechocystis sp. PCC 6803: Integrated Cultivation, Lysis, and Filtration for Sustainable Glucose&#13;
Harvesting
Baho, Ingie
This thesis describes the design, modeling, and fabrication of a three-part bioreactor and biomass processing system designed to cultivate Synechocystis sp. PCC 6803 and extract its intracellular glucose. The resulting glucose can support sustainable biomanufacturing for diverse downstream applications, including serving as a feedstock for K. rhaeticus to produce cellulose, as a precursor for biofuel production, or as an ingredient in food supplements. The system incorporates a photobioreactor, a lysis module for acid- and ultrasound-based cell disruption, and a pressure-driven filtration setup. The photobioreactor was equipped with pH, dissolved oxygen, and temperature probes, and optical density was continuously monitored using a custom-built module. The lysis unit contained an ultrasound transducer and pH and temperature probes, in addition to pumps connected to acid and base chambers. The filtration unit was connected to a compressed air tank and designed with a pressure control valve, safety valve, and syringe filter. Glucose concentration was quantified offline using high-performance liquid chromatography (HPLC). Various light regimes were tested, and under an incident light intensity of approximately 400 µmol m⁻² s⁻¹ at a color temperature of 6500 K, cultures were shown to reach a biomass productivity of 90 mg L⁻¹ day⁻¹, with a specific growth rate of 0.166 day⁻¹ and glucose concentrations up to 5.08 mg L⁻¹. Innovative culture strategies were explored at a small scale, including the cultivation of Synechocystis sp. PCC 6803 in spent K. rhaeticus media to promote economical and sustainable media recycling. When supplemented with additional nutrients, the spent media supported Synechocystis growth up to an OD680 of 0.5. To further characterize the photobioreactor and expected growth based on environmental parameters, both mathematical and machine learning models were built. While the mathematical models were not experimentally validated, the machine learning model achieved a strong predictive accuracy with a mean absolute error and variance of 0.0009±0.0003 over a 10-fold cross-validation. The system demonstrates up to a 65% reduction in cost compared to commercial alternatives.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of Passive, Air-Based Squeeze Film Damping for Kinematic Couplings</title>
<link href="https://hdl.handle.net/1721.1/165142" rel="alternate"/>
<author>
<name>Gazdus, Hannah</name>
</author>
<id>https://hdl.handle.net/1721.1/165142</id>
<updated>2026-03-17T03:06:09Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Design of Passive, Air-Based Squeeze Film Damping for Kinematic Couplings
Gazdus, Hannah
In precision machine design, kinematic couplings are a common choice for aligning and fixturing parts due to their high repeatability. Their centering ability, along with their high stiffness from Hertzian contact, enables kinematic couplings to minimize errors. Although kinematic couplings are applied in dynamic situations such as machining, they are currently designed using only static methods with little regard to vibration-induced error. Machine designers thus do not fully understand how kinematic couplings will behave in situ and do not take advantage of easily applicable damping methods to minimize vibration-induced error. This thesis provides a framework for dynamically modeling kinematic couplings with air-based squeeze film damping. This method of damping takes advantage of the inherent air layer between the top and bottom plates of a kinematic coupling; because it is so simple to leverage, this work advocates for the inclusion of such damping in every kinematic coupling. This work demonstrates that squeeze film damping can increase a coupling’s damping by more than 100×, significantly raising dynamic stiffness and reducing vibration-induced error. This work’s design principles will allow for more rigorous and thorough development of kinematic couplings, which is especially necessary for applications where vibration-induced errors must be minimized.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>3D-Printed Tangential Flow Filtration and High-Throughput Microfluidic Electroporation for Scalable Microbial Processing</title>
<link href="https://hdl.handle.net/1721.1/165132" rel="alternate"/>
<author>
<name>Cui, Yuhe</name>
</author>
<id>https://hdl.handle.net/1721.1/165132</id>
<updated>2026-03-17T03:06:44Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">3D-Printed Tangential Flow Filtration and High-Throughput Microfluidic Electroporation for Scalable Microbial Processing
Cui, Yuhe
Bacterial transformation via electroporation is fundamental to modern biotechnology applications, including therapeutic protein production, biomaterial synthesis, and agricultural enhancement. However, conventional electroporation workflows face critical bottlenecks that limit their scalability and industrial applicability, mainly inefficient electrocompetent cell preparation and low-throughput transformation processes.&#13;
This thesis presents two complementary 3D-printed technologies that independently address these limitations for scalable microbial processing. First, a novel spiral channel tangential flow filtration (TFF) system was developed that replaces conventional centrifugation-based methods for preparing electrocompetent cells. The spiral geometry enhances mixing dynamics and enables continuous washing of bacterial cultures, dramatically reducing preparation time while improving cell recovery compared to traditional centrifugation and membrane filtration approaches that suffer from time constraints, labor intensity, and membrane fouling.&#13;
Second, a 3D-printed microfluidic electroporation platform featuring geometry-optimized electric field distribution was designed. Building upon established M-TUBE principles, the bilaterally converged channel architecture creates localized field enhancement at reduced applied voltages, enabling high-efficiency transformation of larger cell volumes. This design overcomes the throughput limitations of conventional cuvette-based systems that require manual handling and process only small volumes.&#13;
Both technologies leverage additive manufacturing to create cost-effective alternatives to traditional protocols. Computational fluid dynamics simulations and experimental validation demonstrate significant improvements in processing time, transformation efficiency, and throughput compared to conventional methods. These complementary technologies demonstrate the potential for future integration into a complete workflow for scalable microbial transformation, with promising implications for broader implementation in industrial biotechnology, synthetic biology, and large-scale research applications.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Digital Threads Across Engineering Organizations: A Q-methodology Analysis of Challenges with a Novel Factor Selection Approach</title>
<link href="https://hdl.handle.net/1721.1/165131" rel="alternate"/>
<author>
<name>Kong, Kanglin</name>
</author>
<id>https://hdl.handle.net/1721.1/165131</id>
<updated>2026-03-17T03:06:15Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Integrating Digital Threads Across Engineering Organizations: A Q-methodology Analysis of Challenges with a Novel Factor Selection Approach
Kong, Kanglin
This research investigates the integration of digital tools across design, manufacturing, and quality management functions through a cross-industry analysis informed by Q-methodology. Despite considerable investments in digital transformation, many manufacturing organizations face persistent gaps between design intent and production execution, exacerbated by fragmented digital threads, limited adoption of Model-Based Definition (MBD), and the continued reliance on manual, error-prone workflows. Through qualitative interviews and quantitative Q-sort analyses conducted among participants across diverse industries, this study identifies key patterns of pain points and solutions perceived differently by stakeholder groups. It reveals insights into how variations in industry characteristics influence digital maturity, particularly regarding the adoption and integration of Product Lifecycle Management, Model-Based Enterprise, and Design for Manufacturability practices. Findings underscore the critical role of enhancing digital thread connectivity, ensuring early integration of manufacturability feedback, embedding automated cost analytics, and facilitating supplier readiness for full MBD adoption. Furthermore, the research highlights the necessity of strategic organizational change management alongside technological advancements. This work provides a nuanced understanding of organizational perceptions and identifies tangible pathways toward a cohesive, software-driven approach for bridging gaps among engineering functions, thereby informing future strategies for manufacturers and software vendors alike.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human-in-the-Loop Task Directed Exploration and Planning in Unknown Environments</title>
<link href="https://hdl.handle.net/1721.1/165129" rel="alternate"/>
<author>
<name>Jois, Aneesh Ramesh</name>
</author>
<id>https://hdl.handle.net/1721.1/165129</id>
<updated>2026-03-17T03:06:08Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Human-in-the-Loop Task Directed Exploration and&#13;
Planning in Unknown Environments
Jois, Aneesh Ramesh
For robots to perform everyday tasks autonomously, like humans, they should be able to perceive, explore, and act in novel environments while pursuing high-level goals. This capability is known as task-directed exploration, and it is essential in domains ranging from household assistance robots to disaster response. However, existing approaches fall short of solving the task-directed exploration problem. Classical symbolic planners require brittle, hand-crafted domain models and assume complete knowledge of the environment. POMDP-based formulations provide a principled approach to planning under uncertainty but are computationally intractable in large, open-world settings. Foundation models such as large language models (LLMs) and vision-language models (VLMs) offer strong commonsense knowledge and pattern recognition capabilities but lack the structured spatial grounding and adaptivity required for embodied execution. This thesis presents a unified framework that closes this gap by tightly integrating foundation models with a real-time semantic mapping and planning stack. The system consists of four components: (i) a dual-layer perception module that combines a deterministic 3D scene graph with a frontier-based probabilistic belief field, using vision-language models for object labeling and large language models for room classification; (ii) a symbolic task planner that converts natural language instructions into high-level activity plans; (iii) an exploration executive that selects informative waypoints, monitors task progress, and dynamically triggers replanning and human queries; and (iv) a unified value of information (VoI) metric that governs both autonomous exploration and selective human interaction, enabling the robot to reason about uncertainty and task utility in a principled way. Demonstrated in realistic simulated environments, the proposed framework allows agents to ground natural language goals in their surroundings, explore efficiently, reason over partial knowledge, and adapt plans as new information is acquired, while involving the user only when doing so meaningfully improves performance.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model-Based Design of an Indirectly Irradiated Thermochemical Hydrogen Production Reactor Capable of Radiant Heat Recovery</title>
<link href="https://hdl.handle.net/1721.1/165125" rel="alternate"/>
<author>
<name>Scott, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/165125</id>
<updated>2026-03-17T03:05:53Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Model-Based Design of an Indirectly Irradiated Thermochemical Hydrogen Production Reactor Capable of Radiant Heat Recovery
Scott, Peter
Renewable/green hydrogen is of great interest as an alternative fuel for decarbonizing sectors such as shipping, aviation, chemicals, and heavy industry. The high cost of green hydrogen through electrolysis, a mature off-the-shelf technology, has led researchers to explore alternative water-splitting methods including thermochemistry, which can also be used for co-splitting of H2O and CO2 to produce syngas that can be converted to liquid fuels. Moreover, the process can operate on stored high temperature heat, making 24/7 operation possible. This thesis focuses specifically on the two-step thermochemical redox cycle using non-stoichiometric metal oxides. While the process has been demonstrated at the lab and pilot scales, efficiencies have so far been limited by the large temperature swing between the reduction and oxidation conditions, resulting in high sensible heat losses. In our previous work, we introduced the Reactor Train System (RTS), a concept that features multiple identical, individually sealed, indirectly irradiated, metal oxide-containing reactors which move between a hot reduction zone and a cooler oxidation zone, engaging in counterflow radiative heat recovery in between. Prior modeling of the RTS, which revealed promise for high efficiency and heat recovery effectiveness, used either zero- or one-dimensional models of the RTS reactors and assumed a basic reactor design that featured a sapphire window for radiative heat transfer between the source and the redox material. A detailed conceptual design and higher-fidelity modeling of the RTS reactors is the focus of this thesis. This thesis comprehensively documents the model-based iterative design process of a novel thermochemical hydrogen reactor with unique and challenging functional requirements, from initial concept to early prototyping. The primary engineering challenge is that the structural pressure vessel also acts as the heat transfer interface, and must serve both purposes while undergoing extreme thermal cycling. The original windowed reactor concept is first investigated using a radiative heat transfer model, with findings of unfavorable heat losses and concerns regarding practicality guiding us towards a reactor design using a fully ceramic vessel acting also as a heat transfer interface. A more advanced thermomechanical model was then used to select a geometry, which we call the Multi-Tubular Radiative Recovery Reactor (MiTR3), instead of one larger ceramic vessel, and to study the design parameters of the MiTR3, such as tube wall thickness, with critical insight into the stress and failure probability of the ceramic tubes. Besides its mechanical strength and favorable thermal properties, this design is scalable and adaptable to different operating conditions and redox materials. Moreover, it utilizes easy-to-assemble, off-the-shelf components. We then further augmented our modeling capabilities with multidimensional, time-dependent thermo-fluid and chemical reaction physics, incorporating both reduction and oxidation kinetics into the conservation equations for full-cycle simulations using ceria as the metal oxide. This enabled further study of the impact of important parameters, especially operational parameters such as redox material loading and form factor, gas flow rates, etc., and a deeper understanding of realistic system-level efficiencies and productivities that take into consideration the impact of auxiliary components, such as vacuum pumping and gas separation technologies, on both.
Finally, our ongoing experimental work with a benchtop-scale, single-tube reactor prototype aimed at derisking components and validating modeling results is presented, alongside plans for future prototyping efforts.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Probabilistic Dynamically-Orthogonal Primitive Equation Forecasts for the Gulf of Mexico</title>
<link href="https://hdl.handle.net/1721.1/165123" rel="alternate"/>
<author>
<name>Rodriguez, Victor Alonso</name>
</author>
<id>https://hdl.handle.net/1721.1/165123</id>
<updated>2026-03-17T03:06:36Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Towards Probabilistic Dynamically-Orthogonal Primitive Equation Forecasts for the Gulf of Mexico
Rodriguez, Victor Alonso
Forecasting circulation in the Gulf of Mexico requires an explicit treatment of uncertainty associated with the Loop Current and its eddies, whose geometry and timing can fluctuate irregularly and lead to chaotic deterministic forecasts. Building on the dynamically orthogonal (DO) methodology for evolving low-rank stochastic representations and on efficient DO numerical schemes for geophysical fluid flows, this thesis develops and assesses massive probabilistic Primitive Equation (PE) hindcasts for the Gulf using the Dynamically Orthogonal Primitive Equations (DO–PE) framework as implemented for realistic ocean dynamics in previous MIT-MSEAS studies. The workflow extracts a time-dependent stochastic subspace from a balanced MIT-MSEAS PE ensemble via singular-value decomposition, represents the initial non-Gaussian coefficient cloud with Gaussian mixture models, and subsequently evolves the DO–PE mean, modes, and coefficients under dynamics, numerics, and forcings consistent with the MIT-MSEAS PE modeling system. A 12-day hindcast simulation experiment spanning 28 May–8 June 2015 quantifies skill and convergence across truncations, with weak-type tests (means, standard deviations, kernel-density marginals) and strong-type tests against matched full-order realizations started from identical initial states. Consistent patterns emerge. Uncertainty concentrates along the Loop Current jet, the Yucatán inflow, and eddy peripheries. For weak convergence, as the retained dynamic modes increase from 15 to 60, standard-deviation maps sharpen and expand coherently along these dynamically active features, and the statistics indicate convergence, with the normalized RMSEs for both mean and standard-deviation fields decreasing in a largely monotonic fashion. At depth and for sea-surface height, late-time mean-error behavior can become mildly non-monotonic, indicating sensitivity to mode allocation among variables. In strong-convergence experiments, DO–PE reconstructions initialized at coefficient quantiles closely track the corresponding full-order trajectories: pathwise misfits remain modest, organize along shear zones, and their RMSE time series lie below persistence and within the envelopes implied by the weak-type spread, reinforcing that truncation primarily filters small-scale content while preserving trajectory-level evolution over the 10–12-day window. Together, these results demonstrate a practical, reproducible pipeline for massive probabilistic forecasting in the Gulf of Mexico that respects PE dynamics while quantifying and localizing forecast uncertainty in flow-dependent ways (details, configuration, and figures in Chapters 3–4). This thesis also introduces dynamic web pages for the interactive visualization of DO–PE output, facilitating the inspection of mean fields, modes, and standard deviations over time in Chapter 5.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization and Control of Sorption-Based Atmospheric Water Harvesting Devices</title>
<link href="https://hdl.handle.net/1721.1/165121" rel="alternate"/>
<author>
<name>Čas, Jan Luka</name>
</author>
<id>https://hdl.handle.net/1721.1/165121</id>
<updated>2026-03-17T03:06:12Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimization and Control of Sorption-Based Atmospheric Water Harvesting Devices
Čas, Jan Luka
Water scarcity is a global challenge, with only one-third of the world’s population having consistent access to clean drinking water. Atmospheric water harvesting is a promising approach owing to the significant amount of water, i.e., 13,000 trillion liters, present in the atmosphere. While significant recent research has focused on developing innovative sorbent materials, components, and system designs, there is limited understanding of how to optimize device performance through active control. Key operating parameter selection, specifically desorption temperature and cycle length, has relied on experimental trial and error. In this thesis, model predictive control (MPC) was used for the first time to dynamically optimize power input and cycle time in atmospheric water harvesting devices. Real-time optimization using a custom-defined cost function was achieved based on a simplified heat and mass transfer model. The model allowed the cost function to be based on water output and therefore eliminated the need for setpoint definition a priori. Through a modular, customizable software and hardware stack, the device demonstrated reliability and maintainability while preserving user interaction. MPC was evaluated against five distinct sorbent isotherm types, using three distinct operating modes: maximizing water production, maximizing operational profit, and increasing thermal efficiency. All modes outperformed a constant temperature setpoint by dynamically determining the appropriate end time of the cycle, which, depending on the material, varied by up to 10,000 s. Furthermore, the controller was able to increase thermal efficiency by up to 3 percentage points compared to the reference by dynamically tapering power input to match water production. Experimental validation was performed with a device built by the Device Research Laboratory. The results showed excellent agreement between measured water output and real-time prediction, which provides a viable strategy for future controller deployment. This work paves the way for more sophisticated device operation through real-time optimization of power input and cycle length and highlights a modular software and hardware design to realize high-performance atmospheric water harvesting devices.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Heuristic-Based Framework for Cost-Effective Product Design Enabled by Manufacturing Automation: Application to Large-Scale Sheet Metal Structures</title>
<link href="https://hdl.handle.net/1721.1/165120" rel="alternate"/>
<author>
<name>Flores Medina, Enrique</name>
</author>
<id>https://hdl.handle.net/1721.1/165120</id>
<updated>2026-03-17T03:06:10Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Heuristic-Based Framework for Cost-Effective Product Design Enabled by Manufacturing Automation: Application to Large-Scale Sheet Metal Structures
Flores Medina, Enrique
Sheet metal is prominent as a raw material for fabrication due to its flexible nature. Through cutting, bending, and joining, it can take a plethora of shapes, explaining its vast adoption in the construction, automotive, and aerospace industries. Furthermore, with automation, the labor and human error associated with its manufacturing can be mitigated. Nonetheless, the versatility of sheet metal can fade under the non-trivial dimensional and thickness constraints of some automated processes, particularly bending. This research, conducted in the context of a large-scale sheet metal manufacturer offering high customization, aims to maximize sheet metal’s automation capabilities while retaining its flexibility. To achieve this, two approaches are used: 1) the adoption of roll-formed steel profiles with automated tube laser cutting as an additional manufacturing value stream, and 2) the development of a design automation tool that, upon receiving the dimensions and structural load conditions of a rectangular prism (called a sub-module), generates a low-cost, automation-compliant design. Findings show that optimal modules generally use medium- to low-gauge channels as connected structural members and thin-gauge sheet metal panels as slabs and shear walls, minimizing material use, the main cost component. Generated designs show cost reductions of up to 32% when compared to legacy counterparts. For the most produced product, this translates to yearly cost savings ranging from $1.7 to $5.2 million.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Evaluation of a Bionic Knee with Myoneural Control</title>
<link href="https://hdl.handle.net/1721.1/165119" rel="alternate"/>
<author>
<name>McCullough, John A.</name>
</author>
<id>https://hdl.handle.net/1721.1/165119</id>
<updated>2026-03-17T03:06:00Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Design and Evaluation of a Bionic Knee with Myoneural Control
McCullough, John A.
Building bionic limbs requires the convergence of surgical innovation and robotic engineering: surgical constructs must reliably extract and amplify intent signals from the body, while robotic systems must accurately interpret these signals to deliver precise, responsive assistance and meaningful feedback. Individuals with above-knee amputation often experience reduced mobility and diminished agency when using conventional prosthetic devices. These limitations can impede human-prosthesis embodiment, the integration of the prosthesis into the user’s body schema.&#13;
This thesis advances the goal of seamless human-machine integration by presenting the design and evaluation of a powered knee prosthesis. The hardware, software, and embedded systems of a prior prototype were upgraded to create a modular, field-deployable research platform. The resulting system incorporates a control framework that enables volitional actuation of the knee joint via electromyographic signals recorded from surgically reconstructed agonist-antagonist muscle pairs.&#13;
To evaluate the system, one participant with an above-knee amputation completed a series of experimental tasks using both their prescribed microprocessor-controlled prosthesis and the bionic knee. Neural control performance was assessed through blindfolded free-space tasks, while functional capability was evaluated during sit-to-stand transitions, squatting, level-ground walking, and stair ascent.&#13;
The bionic prosthesis, weighing 2.6 kg (comparable to commercially available powered knees), demonstrated robust, real-time control across all tasks. Volitional neural inputs enabled intuitive and responsive joint actuation, resulting in superior performance and perceived embodiment relative to the passive device. During sit-to-stand and squatting tasks, ground reaction force data revealed increased weight-bearing on the prosthetic side, reflecting enhanced user confidence. Gait analysis showed improved temporal symmetry during walking with the bionic knee, indicating more balanced interlimb coordination. Embodiment scores were consistently higher across all measured domains, with the participant describing the prosthesis as “feeling like my leg” and “helping me.”&#13;
These findings underscore the potential of neurally integrated prosthetic systems to restore volitional control, improve functional performance, and promote a more embodied user experience.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning for in-Space Robotic Assembly of Modular CubeSats</title>
<link href="https://hdl.handle.net/1721.1/165118" rel="alternate"/>
<author>
<name>Freitag, Leila</name>
</author>
<id>https://hdl.handle.net/1721.1/165118</id>
<updated>2026-03-17T03:06:02Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Planning for in-Space Robotic Assembly of Modular CubeSats
Freitag, Leila
As the space industry continues to grow, developments such as the proliferation of small satellites have lowered the barrier to entry to space, making it faster and easier to launch payloads into orbit. However, the need for rapid deployment in space remains, particularly for rapid replacement of satellites that are nodes in larger constellations or supporting time-sensitive missions such as natural disaster monitoring. On-orbit assembly provides a solution to meet this demand. This thesis describes the development of Orbital Locker, a robotic system designed to enable the autonomous in-space assembly and deployment of modular satellites. The concept of operations involves a free-flying satellite that acts as a storage “locker”, carrying modular CubeSat components and assembling and deploying them on request. Orbital Locker is an initial small-scale demonstration that is intended to be scaled up, consisting of a Cartesian gantry robot and CubeSat modules dimensioned such that three modules stack to form a 1U CubeSat. The focus of this thesis is the software architecture of the system, including module identification and assembly planning, and assembly testing in microgravity. Module identification makes use of fiducial markers to localize modules within the Locker, tracking the inventory of parts available. The assembly planner uses a graph-based method to optimize the steps required to assemble a desired satellite. It first generates a graph representation of possible assembly states and then uses a graph search algorithm to find the optimal sequence. Results from microgravity testing of the autonomous assembly on a ZeroG flight are presented, where a 1U CubeSat form factor was assembled in 72 seconds. Throughout this work, emphasis is placed on the extensibility of the system to support future scaled-up systems containing a larger inventory of modules.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human Factors Observations in Flightcrew Response to System Failure Events in Transport Category Aircraft from 2000 to 2024</title>
<link href="https://hdl.handle.net/1721.1/165117" rel="alternate"/>
<author>
<name>Perez Gago, Cecilia</name>
</author>
<id>https://hdl.handle.net/1721.1/165117</id>
<updated>2026-03-17T03:05:56Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Human Factors Observations in Flightcrew Response to&#13;
System Failure Events in Transport Category Aircraft&#13;
from 2000 to 2024
Perez Gago, Cecilia
Understanding the effects of changes in aircraft technology on pilot response to system failure is crucial in the context of recent aviation safety events. This thesis makes human factors observations on pilot response to system malfunction in transport category aircraft through an analysis of final investigation reports produced by investigative authorities worldwide from 2000 to 2024. In the collected reports, system failure events in aircraft of newer generations correlated with higher percentages of appropriate response. Pilot response appropriateness was found to vary between systems, with particularly low appropriate response to failure of instruments and navigation, fuel, and autoflight systems (in decreasing order). When comparing the findings from the 2000-2024 data collection to those from a 1990-2000 study, appropriate pilot response was found to have increased for failures of the hydraulic and electrical systems. Appropriate response to instruments and navigation failures and to autoflight failures was low in both studies. Crew Alerting System (CAS) messages as initial stimuli for failure awareness were found to support higher percentages of appropriate response for failures of the electrical and hydraulic systems. CAS messages did not lead to a substantial improvement in appropriate response to failure of instruments and navigation, fuel, or the autoflight system. Finally, Endsley’s Situation Awareness theory was used as a framework to derive observations in the formulation of pilot responses to system failure across cases. CAS messages and system synoptic displays were observed to contribute to appropriate pilot perception, comprehension, and projection of failure of simple systems. Significant underlying complexity in the function of the autoflight and instruments and navigation systems, and the increased use of sensing, correlated with difficulty in comprehension and projection of system behavior following multiple failure events in 2000-2024 reports. Additionally, examples of failures across systems which displayed delayed or subtle stimuli, and unexpected system dependencies, were observed to lead to difficulties in flightcrew achievement of Level 2 and Level 3 Situation Awareness. Changes in aircraft technology were deemed to have had a varying effect on pilot situation awareness during failure of different airplane systems. Improvements in pilot response were observed in relatively simple systems, and gaps were identified given increased vulnerabilities in failure of systems with high functional complexity.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrative Spatial Technologies for Mapping Axonal Vulnerability in Alzheimer’s Disease</title>
<link href="https://hdl.handle.net/1721.1/165116" rel="alternate"/>
<author>
<name>Leible, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/165116</id>
<updated>2026-03-17T03:06:06Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Integrative Spatial Technologies for Mapping Axonal Vulnerability in Alzheimer’s Disease
Leible, Daniel
Alzheimer’s Disease (AD) is the most common neurodegenerative disorder and is histopathologically defined by the accumulation of extracellular amyloid β (Aβ) plaques and intracellular neurofibrillary tau tangles. Pathology progression in AD follows a highly stereotyped, hierarchical pattern, implying a circuit-specific neuronal vulnerability to the underlying pathophysiological processes. Understanding the molecular and subcellular mechanisms driving this selective vulnerability has the potential to enable targeted, circuit-specific therapeutic approaches for early intervention in the detrimental spread of disease.&#13;
This thesis systematically reviews the current mechanistic understanding of selective vulnerability and early disease development in AD and explores how emerging integrative spatial technologies can address remaining open questions. First, molecular and subcellular processes underlying axonal Aβ and tau accumulation are examined, with a focus on cytoskeletal dynamics and axonal transport deficits. Second, intrinsic structural and metabolic risk factors shared by vulnerable axons are outlined, offering a potential explanation for the early regional onset of pathology. Since AD pathology appears to spread from these initial sites along synaptic connections, mechanisms of transsynaptic propagation of vulnerability are discussed next. Finally, the thesis compares integrative spatial technologies used to map disease progression and proposes neuronal barcoding as a promising strategy to overcome existing limitations.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SLAM for Structured Environments Using Mechanically Scanned Imaging Sonar</title>
<link href="https://hdl.handle.net/1721.1/165115" rel="alternate"/>
<author>
<name>Motz, Andrew J.</name>
</author>
<id>https://hdl.handle.net/1721.1/165115</id>
<updated>2026-03-17T03:05:58Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">SLAM for Structured Environments Using Mechanically&#13;
Scanned Imaging Sonar
Motz, Andrew J.
As utilization of the maritime environment grows, uncrewed systems represent the future of safety, efficiency, and capability. For submerged operations, Autonomous Underwater Vehicles (AUVs) enable scientists, industry, and militaries to access remote, inhospitable locations and execute a variety of tasks beyond the capabilities of human-occupied or human-operated systems. Much of this autonomy relies on the vehicle having a detailed understanding of its own position. Inertial Navigation Systems provide an estimate of the distance traveled by combining numerous sensors, but are subject to unbounded error accumulation over long distances. Traditional methods of correcting for this error in terrestrial robotics are largely unavailable in the undersea domain due to the absorption and scattering of electromagnetic signals in water. Acoustic communication and imaging, such as Sound Navigation and Ranging (SONAR), is the most reliable and trusted method for AUVs. This thesis presents a novel method for performing Simultaneous Localization and Mapping (SLAM) through acoustic means utilizing a Mechanically Scanned Imaging Sonar (MSIS). An MSIS uses a single-beam sonar mechanically rotated around the vehicle to scan a full 360° area. Compared with other sonar systems of similar capability, MSIS require less size, weight, and power, and are available at a lower price point.&#13;
The primary contribution of this thesis is a SLAM processing pipeline from MSIS to global position estimate. The pipeline extracts information from the MSIS data regarding the vehicle’s relative location compared to observed landmarks and then probabilistically matches the observed data to a best estimate vehicle position. The system is compatible with either an a priori map or a constantly updated SLAM global map. Individual beams from the MSIS are fused together into a submap. Contrast-based image processing identifies features of interest in the submap and appropriate features are then classified as observed landmarks. A probabilistic coarse-to-fine voting scheme identifies the most likely pose of the vehicle using the global map. When performing SLAM without an a priori map, observed landmarks are then evaluated and either added to the global map or used to update the position of known landmarks. While prior works have established MSIS SLAM by focusing on a single return per sonar beam, this thesis utilizes submaps to extract numerous features from a series of consecutive beams, allowing for more detailed and comprehensive feature mapping.&#13;
Experimental validation was performed using an ISS360 sonar mounted on a REMUS-100 AUV, with the processing pipeline running via the Robot Operating System on the vehicle’s backseat computer. The vehicle was assisted by divers traversing underneath the WHOI Iselin pier and performed both localization and SLAM using the submerged pier pilings. The system performed real-time localization, successfully bounding previously unbounded localization drift to an average of 3.4 m, resulting in over a 90% reduction in absolute error after approximately one hour of submerged operations. The SLAM results mirrored the a priori accuracy, demonstrating similar error bounds and validating the system’s performance.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational study of static replication for barrier options</title>
<link href="https://hdl.handle.net/1721.1/165075" rel="alternate"/>
<author>
<name>Sun, Hai Po.</name>
</author>
<id>https://hdl.handle.net/1721.1/165075</id>
<updated>2026-03-11T03:04:39Z</updated>
<published>1997-01-01T00:00:00Z</published>
<summary type="text">Computational study of static replication for barrier options
Sun, Hai Po.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1997; Includes bibliographical references (leaves 75-76).
</summary>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling, control and experimentation of a two dimensional linear motor</title>
<link href="https://hdl.handle.net/1721.1/165074" rel="alternate"/>
<author>
<name>Castañeda Vega, José Israel.</name>
</author>
<id>https://hdl.handle.net/1721.1/165074</id>
<updated>2026-03-11T03:04:33Z</updated>
<published>1997-01-01T00:00:00Z</published>
<summary type="text">Modeling, control and experimentation of a two dimensional linear motor
Castañeda Vega, José Israel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1997; Includes bibliographical references (leaf 118).
</summary>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of anode dimensions in mercury-vapour thermionic rectifiers</title>
<link href="https://hdl.handle.net/1721.1/165073" rel="alternate"/>
<author>
<name>Fussell, Lewis.</name>
</author>
<id>https://hdl.handle.net/1721.1/165073</id>
<updated>2026-03-11T03:04:42Z</updated>
<published>1932-01-01T00:00:00Z</published>
<summary type="text">A study of anode dimensions in mercury-vapour thermionic rectifiers
Fussell, Lewis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1932; Includes bibliographical references (leaf 50).
</summary>
<dc:date>1932-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Boiling and spreading rates of instantaneous liquid methane spills on water</title>
<link href="https://hdl.handle.net/1721.1/165070" rel="alternate"/>
<author>
<name>Chatlos, David Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/165070</id>
<updated>2026-03-11T03:04:37Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">Boiling and spreading rates of instantaneous liquid methane spills on water
Chatlos, David Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1982; Supervised by Robert C. Reid.; Includes bibliographical references (leaves 86-88).
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Fourier transform spectroscopy to the absolute determination of the chemical shift of protons.</title>
<link href="https://hdl.handle.net/1721.1/165068" rel="alternate"/>
<author>
<name>Wright, Francine Elaine.</name>
</author>
<id>https://hdl.handle.net/1721.1/165068</id>
<updated>2026-03-11T03:04:45Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Application of Fourier transform spectroscopy to the absolute determination of the chemical shift of protons.
Wright, Francine Elaine.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1975; Vita.; Bibliography: leaves 65-66.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geology of Deception Gulch and the Verde Central mine</title>
<link href="https://hdl.handle.net/1721.1/164923" rel="alternate"/>
<author>
<name>Benedict, P. C.
            (Platt Carrico),
            1900-1969.</name>
</author>
<id>https://hdl.handle.net/1721.1/164923</id>
<updated>2026-02-20T03:04:15Z</updated>
<published>1923-01-01T00:00:00Z</published>
<summary type="text">Geology of Deception Gulch and the Verde Central mine
Benedict, P. C.
            (Platt Carrico),
            1900-1969.
Thesis: M.S., Massachusetts Institute of Technology, Department of Geology and Geophysics, 1923
</summary>
<dc:date>1923-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nonlinear elastic analysis of reinforced concrete structures by the finite element method</title>
<link href="https://hdl.handle.net/1721.1/164915" rel="alternate"/>
<author>
<name>Tulga, Said Şahin.</name>
</author>
<id>https://hdl.handle.net/1721.1/164915</id>
<updated>2026-02-20T03:04:07Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Nonlinear elastic analysis of reinforced concrete structures by the finite element method
Tulga, Said Şahin.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1979; Includes bibliographical references.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Subcontractor bidding strategy</title>
<link href="https://hdl.handle.net/1721.1/164914" rel="alternate"/>
<author>
<name>Gilbane, Thomas Freeman.</name>
</author>
<id>https://hdl.handle.net/1721.1/164914</id>
<updated>2026-02-20T03:04:10Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Subcontractor bidding strategy
Gilbane, Thomas Freeman.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1975; Bibliography: leaves 104-105.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Large Language Models as Circuit Design Assistants</title>
<link href="https://hdl.handle.net/1721.1/164861" rel="alternate"/>
<author>
<name>Cox, Matthew J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164861</id>
<updated>2026-02-13T03:49:28Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Evaluating Large Language Models as Circuit Design Assistants
Cox, Matthew J.
Large language models (LLMs) have exploded in capability in recent years. Previous attempts at AI systems for circuit design have had limited proficiency and been restricted in problem scope. LLMs, with their breadth of knowledge and reasoning ability, are a promising technology for a much more general-purpose circuit design assistant. We developed a dataset of electrical engineering problems and solutions with which to test an LLM-based system, since no such publicly available dataset exists to our knowledge; unmodified GPT-4 was able to solve 42% of the problems. We did a preliminary comparison of several knowledge bases to use for RAG knowledge injection, finding that a small, curated set of resources performed better than a larger, less-focused set of resources, though there were confounding factors which may have skewed the result. While this work is a start, significant future work is needed to continue developing an LLM-based circuit design assistant.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Increasing Program Code Coverage Using Fuzzing and Targeted Branch Exploration</title>
<link href="https://hdl.handle.net/1721.1/164860" rel="alternate"/>
<author>
<name>Nguyen, Gary</name>
</author>
<id>https://hdl.handle.net/1721.1/164860</id>
<updated>2026-02-13T03:49:33Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Increasing Program Code Coverage Using Fuzzing and Targeted Branch Exploration
Nguyen, Gary
Code coverage is a longstanding metric for evaluating how thoroughly a program has been tested. Achieving high coverage remains a priority goal for quality assurance and software stability. Exhaustive enumeration of possible input paths to every code region is desirable in theory but computationally infeasible in practice, especially in large-scale codebases. Fuzzing is a widely used technique for input generation and is effective at exploring smaller programs but often struggles with more complex conditional logic and nested modules. Concolic execution, which exhaustively explores paths using constraint solving, can work effectively with complex conditional logic but suffers from path explosion. Targeted branch exploration is a similar approach for input generation but sidesteps the path explosion problem by focusing on specific constraint paths of interest.

In this thesis, I introduce a hybrid system that combines fuzzing and targeted branch exploration with the goal of improving code coverage by leveraging the complementary strengths of each. The system uses fuzzing to quickly generate a broad input corpus and follows up with targeted branch exploration to explore paths that fuzzing struggles to reach. Findings from experiments on two C projects of different complexities show that the system did not outperform the individual techniques in terms of raw coverage, revealing limitations of the approach and opportunities for future improvement.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
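<!--
  A minimal sketch, not taken from the thesis, of the two-phase hybrid the entry
  above describes: mutational fuzzing grows a broad corpus, then a targeted
  branch explorer attacks the branches fuzzing missed. `run_with_coverage` and
  `solve_branch` are hypothetical stand-ins for an instrumented target and a
  constraint-based explorer.

  import random

  def mutate(data: bytes) -> bytes:
      # Single-byte mutation; real fuzzers use much richer mutation stacks.
      if not data:
          return bytes([random.randrange(256)])
      i = random.randrange(len(data))
      return data[:i] + bytes([random.randrange(256)]) + data[i + 1:]

  def hybrid_fuzz(seeds, run_with_coverage, solve_branch, all_branches, budget=1000):
      corpus, covered = list(seeds), set()       # seeds must be non-empty
      for _ in range(budget):                    # phase 1: broad, cheap fuzzing
          candidate = mutate(random.choice(corpus))
          gained = run_with_coverage(candidate) - covered
          if gained:                             # keep inputs that reach new code
              corpus.append(candidate)
              covered |= gained
      for branch in all_branches - covered:      # phase 2: targeted exploration
          forced = solve_branch(branch, corpus)  # constraint-solve toward branch
          if forced is not None:
              covered |= run_with_coverage(forced)
      return covered
-->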
<entry>
<title>Machine-Learned Representations of Basis Sets and Their Application in Quantum Computational Chemistry</title>
<link href="https://hdl.handle.net/1721.1/164858" rel="alternate"/>
<author>
<name>He, Wenhao</name>
</author>
<id>https://hdl.handle.net/1721.1/164858</id>
<updated>2026-02-13T03:49:30Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Machine-Learned Representations of Basis Sets and Their&#13;
Application in Quantum Computational Chemistry
He, Wenhao
Quantum simulations of electronic structure promise to deliver significant speedups over classical methods, but remain limited by the number of qubits on near-term devices. A key strategy to reduce quantum resource requirements is to truncate the molecular Hilbert space via compact and efficient basis sets. However, most optimized basis sets either rely on predefined heuristics or require expensive classical computations, such as CASSCF orbital optimization or ℓ1-norm minimization of the Hamiltonian. In this work, we introduce a general machine learning framework for fast basis set prediction in quantum computational chemistry. Our method employs an equivariant graph neural network that outputs a Hermitian matrix encoding optimized molecular orbitals. The eigenvectors of this matrix define a transferable and efficient basis set, trained on orbitals obtained via CASSCF and Hamiltonian ℓ1 norm optimization. We evaluate our model on hydrogen chains and demonstrate that the predicted bases achieve energy accuracy and Hamiltonian sparsity comparable to orbital-optimized methods, while reducing classical preprocessing time. In addition, the predicted orbitals can be directly used as high-quality initial guesses for CASSCF calculations, further accelerating their convergence.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SmellNet: A Large-scale Dataset for Real-world Smell Recognition</title>
<link href="https://hdl.handle.net/1721.1/164856" rel="alternate"/>
<author>
<name>Feng, Dewei</name>
</author>
<id>https://hdl.handle.net/1721.1/164856</id>
<updated>2026-02-13T03:49:31Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">SmellNet: A Large-scale Dataset for Real-world Smell&#13;
Recognition
Feng, Dewei
The ability of AI to sense and identify various substances based on their smell alone can have profound impacts on allergen detection (e.g., smelling gluten or peanuts in a cake), monitoring the manufacturing process, and sensing hormones that indicate emotional states, stress levels, and diseases. Despite these broad impacts, there are virtually no large-scale benchmarks, and therefore little progress, for training and evaluating AI systems’ ability to smell in the real world. In this paper, we use portable gas and chemical sensors to create SmellNet, the first large-scale database that digitizes a diverse range of smells in the natural world. SmellNet contains about 180,000 time steps of 50 substances (spanning nuts, spices, herbs, fruits, and vegetables) with 50 hours of data. Using SmellNet, we trained AI models for real-time classification of substances based on their smell alone. Our best methods leverage sequence models, contrastive learning to integrate high-resolution Gas Chromatography–Mass Spectrometry molecular data, and a new temporal difference method that identifies sharp changes in sensor readings. Our best models achieve up to 65.35% accuracy on pre-recorded data, and generalize to real-world conditions with 10.71% accuracy on nuts and 25.38% on spices in the challenging 50-way online classification task. Despite these promising results, SmellNet highlights many technical challenges in building AI for smell, including richer feature learning, on-edge smell models, and robustness to environmental changes.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Zero-Shot Pretrained Models for Efficient Black-Box Optimization</title>
<link href="https://hdl.handle.net/1721.1/164855" rel="alternate"/>
<author>
<name>Meindl, Jamison Chivvis</name>
</author>
<id>https://hdl.handle.net/1721.1/164855</id>
<updated>2026-02-13T03:49:16Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Towards Zero-Shot Pretrained Models for Efficient Black-Box Optimization
Meindl, Jamison Chivvis
Global optimization of expensive, derivative-free black-box functions requires extreme sample efficiency. While Bayesian optimization (BO) is the current state-of-the-art, its performance hinges on surrogate and acquisition function hyperparameters that are often hand-tuned and fail to generalize across problem landscapes. We present ZeroShotOpt, the first general-purpose, pretrained model for continuous black-box optimization tasks ranging from 2D to 20D. Our approach leverages offline reinforcement learning on large-scale optimization trajectories collected from 12 BO variants. To scale pretraining, we generate millions of synthetic Gaussian process-based functions with diverse landscapes, enabling the model to learn transferable optimization policies. As a result, ZeroShotOpt achieves robust zero-shot generalization on a wide array of unseen synthetic and real-world benchmarks, matching or surpassing the sample efficiency of leading global optimizers, including BO, while also offering a reusable foundation for future extensions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Temperature Characterization of Colloidal Quantum Dot Light Emitting Diodes</title>
<link href="https://hdl.handle.net/1721.1/164854" rel="alternate"/>
<author>
<name>Nguyen, Thienan D.</name>
</author>
<id>https://hdl.handle.net/1721.1/164854</id>
<updated>2026-02-13T03:49:26Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Temperature Characterization of Colloidal Quantum Dot&#13;
Light Emitting Diodes
Nguyen, Thienan D.
Colloidal quantum dot light emitting diodes have emerged as promising candidates for the next generation of display technologies. Their brighter emission, greater color purity, and higher efficiency make them highly desirable in consumer electronics. As such, research into the performance and stability of these novel LEDs is crucial for their operation in displays. These investigations are ongoing, with focused efforts on improving operating stability through different quantum dot materials and passivation methods. However, less attention has been paid to confidently understanding the fundamental relationships between current, voltage, and luminance by which these devices operate. These electrical characteristics reveal insights into the operation of these devices and the behavior of charge carriers. Additionally, temperature-dependent electrical measurements can expose behavior that changes with temperature, as well as deviations from the expected performance at set temperatures; the temperature-dependent processes so revealed yield a better understanding of how the device operates. In this thesis, an investigation into the temperature-dependent electrical characteristics of quantum dot light emitting diodes was conducted by measuring the current-voltage-luminance (JVL) relationships at various cryogenic temperatures, ranging from 78 K (the boiling point of liquid nitrogen) to 293 K (room temperature). This investigation revealed the temperature-dependent nature and origin of the turn-on voltage, current, EQE, EQE roll-off, and hysteresis.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty-Aware Knowledge Graph Retrieval Methods and Their Use in LLM Question-Answering</title>
<link href="https://hdl.handle.net/1721.1/164853" rel="alternate"/>
<author>
<name>Rich, Benjamin R.</name>
</author>
<id>https://hdl.handle.net/1721.1/164853</id>
<updated>2026-02-13T03:49:13Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Uncertainty-Aware Knowledge Graph Retrieval Methods and Their Use in LLM Question-Answering
Rich, Benjamin R.
Knowledge Graph Question Answering (KGQA) encompasses a set of techniques aimed at generating accurate, interpretable responses to natural language queries posed over structured, graph-based datasets. Recent approaches to KGQA involve reducing the knowledge graph (KG) to a relevant subgraph, which is then encoded in natural language as a series of triples (subject, predicate, object) and passed to a large language model (LLM) for interpretation and answer generation. These methods have shown state-of-the-art accuracy. However, this paradigm is undermined by a critical vulnerability: the retrieval of irrelevant or erroneous facts can amplify LLM hallucinations and degrade system trustworthiness, while the reasoning process remains opaque. This thesis addresses this challenge by extending an existing state-of-the-art KGQA architecture with uncertainty-aware subgraph retrieval methods. To achieve this, we modify the retrieval component to learn the epistemic uncertainty of each candidate triple’s relevance to a given query. We implement these modifications using Bayesian methods and learn a well-calibrated approximation of the posterior distribution over triple relevance. By explicitly modeling this uncertainty, the retriever model is shown to provide a fine-grained confidence score for each piece of evidence. We expose these metrics downstream to the LLM during reasoning and evaluate whether LLMs can reason over uncertainty-related metrics to improve KGQA. We find that LLMs cannot reason effectively over uncertainties in most cases, but that agentic workflows that provide selective access to uncertainty metrics may enhance performance. We evaluate our approach against established benchmarks using HIT-rate and set-comparison accuracy metrics. Additionally, we introduce reasoning-path and statistical trust metrics derived from calibrated uncertainty scores. Our analysis reveals a significant positive correlation between path-based uncertainty metrics and the veracity of the LLM’s answers. These findings establish a robust foundation for developing uncertainty-grounded trust mechanisms in LLM-agnostic KGQA systems. As a proof of concept, a lightweight classifier trained exclusively on the LLM’s inputs and outputs demonstrates substantial predictive power in identifying correct responses. Finally, we briefly explore using uncertainty to identify out-of-distribution (OOD) queries.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
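<!--
  A hedged sketch of one standard way to obtain the calibrated, per-triple
  epistemic uncertainty the entry above describes: Monte Carlo dropout over a
  small relevance scorer. The architecture and embedding interface are
  illustrative assumptions, not the thesis' actual retriever.

  import torch

  class TripleScorer(torch.nn.Module):
      def __init__(self, dim=128):
          super().__init__()
          self.net = torch.nn.Sequential(
              torch.nn.Linear(dim, 64), torch.nn.ReLU(),
              torch.nn.Dropout(p=0.2), torch.nn.Linear(64, 1))

      def forward(self, x):
          return torch.sigmoid(self.net(x)).squeeze(-1)

  @torch.no_grad()
  def relevance_with_uncertainty(model, triple_emb, n_samples=32):
      model.train()                  # keep dropout stochastic at inference time
      draws = torch.stack([model(triple_emb) for _ in range(n_samples)])
      return draws.mean(0), draws.std(0)   # confidence score and its spread
-->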
<entry>
<title>Applied Compiler Optimizations for Proving Code</title>
<link href="https://hdl.handle.net/1721.1/164852" rel="alternate"/>
<author>
<name>Ruiz, Ricardo</name>
</author>
<id>https://hdl.handle.net/1721.1/164852</id>
<updated>2026-02-13T03:49:29Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Applied Compiler Optimizations for Proving Code
Ruiz, Ricardo
The recent popularity of massively distributed, trustless systems has created a demand for cryptographic proofs: systems to prove that a piece of data is a valid output for a given program. These systems exist, but face very high runtimes for the generation of proofs. Significant effort has been invested in optimizing the prover systems, but relatively less has been focused on optimizing the code that gets read as an input. This paper proposes a new approach to optimizing prover systems by modifying the compiler to produce proof-ready code. It introduces a benchmarking framework for comparing the relative proof costs of RISC-V instructions; the resulting analysis finds that shift instructions do not offer significant savings over multiplication. This finding suggests that strength reduction, a fundamental optimization in modern compilers, can sabotage end-to-end performance. The paper proposes methods for applying this knowledge to better optimize code, leaving the door open for future researchers to continue to make code proofs more performant and accessible.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
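<!--
  A small illustration, not from the thesis, of the strength-reduction rewrite
  the entry above questions: classical compilers turn a multiply by a power of
  two into a shift, but if shifts are no cheaper to prove in a RISC-V zkVM, the
  rewrite buys nothing and can cost elsewhere.

  def multiply_by_eight(x: int) -> int:
      return x * 8        # what the source program says

  def strength_reduced(x: int) -> int:
      return x << 3       # what a classical compiler may emit instead

  # The rewrite is semantics-preserving; only the (proof) cost model differs.
  assert all(multiply_by_eight(x) == strength_reduced(x) for x in range(256))
-->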
<entry>
<title>Reconstructing Cross-Species Ancestral Adeno-Associated Viruses for Enhanced Gene Therapy Delivery</title>
<link href="https://hdl.handle.net/1721.1/164850" rel="alternate"/>
<author>
<name>Xie, Yuxin</name>
</author>
<id>https://hdl.handle.net/1721.1/164850</id>
<updated>2026-02-13T03:49:32Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Reconstructing Cross-Species Ancestral Adeno-Associated&#13;
Viruses for Enhanced Gene Therapy Delivery
Xie, Yuxin
Adeno-associated viruses (AAV) are one of the most promising vectors for gene therapy because of their established safety, low immunogenicity, and capability to achieve sustained gene expression. However, many naturally occurring AAV variants have limitations in their potency, particularly in penetrating biological barriers like the blood-brain barrier (BBB). Additionally, their broad and nonspecific tropism can translate into suboptimal cross-species transduction efficiency and potential toxicity, complicating the clinical transition from animal models to humans. These challenges impede the use of naturally occurring AAVs for therapeutic gene delivery in many neurological disorders, such as autism spectrum disorders (ASD), Parkinson’s disease (PD), and Huntington’s disease (HD), as well as other systemic conditions like cystic fibrosis (CF). To overcome these barriers, we developed a computational framework based on ancestral sequence reconstruction (ASR) to engineer synthetic ancestral AAV capsids with the goal of enhanced targeting specificity and potency. We first validated this computational framework by replicating the previously engineered Anc80L65 capsid. Then, with 75 naturally occurring functional AAV sequences and additional experimentally screened variants exhibiting brain-targeting potency, we built an evolutionary framework. We applied multiple computational methods such as enhanced multiple sequence alignment, maximum-likelihood-based phylogenetic tree inference, and ancestral sequence reconstruction with Bayesian inference. With this methodology, we predicted several novel ancestral AAV capsid sequences at critical evolutionary nodes, particularly those representing functional transitions with potential improved blood-brain barrier penetration and CNS tropism. Our computational framework thus streamlines and accelerates the process of designing ancestral AAV variants with targeted gene therapy applications.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generating Unprecedented Extreme Scenarios with Limited Data</title>
<link href="https://hdl.handle.net/1721.1/164848" rel="alternate"/>
<author>
<name>Chang, Kai</name>
</author>
<id>https://hdl.handle.net/1721.1/164848</id>
<updated>2026-02-13T03:49:19Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Generating Unprecedented Extreme Scenarios with Limited Data
Chang, Kai
Quantifying and predicting rare and extreme events persists as a crucial yet challenging task in understanding complex dynamical systems, ubiquitous in science and engineering. Many practical challenges arise from the infrequency and severity of these events, including the considerable variance of simple sampling methods and the substantial computational cost of high-fidelity numerical simulations. Numerous data-driven methods have recently been developed to tackle these challenges. However, a typical assumption for the success of these methods is the occurrence of multiple extreme events, either within the training dataset or during the sampling process. This leads to accurate models in regions of quiescent events but with high epistemic uncertainty in regions associated with extremes. To overcome this limitation, we introduce the framework of Extreme Event Aware (e2a or eta) learning, or η-learning, which does not assume the existence of extreme events in the available data. η-learning reduces the uncertainty even in ‘uncharted’ extreme event regions by enforcing the extreme event statistics of a few observables during training, which can be available or assumed through qualitative arguments or other forms of analysis. This type of statistical regularization results in models that fit the observed data but also enforce consistency with the prescribed statistics of some observables, enabling the generation of unprecedented extreme events even when the training data lack extremes. Theoretical results based on optimal transport offer a rigorous justification and highlight the optimality of the introduced method. Additionally, extensive numerical experiments illustrate the favorable properties of the η-learning framework on several prototype problems and real-world precipitation downscaling problems.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
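<!--
  A hedged sketch of the kind of statistical regularization the entry above
  describes: a data-fit term plus a penalty tying the model's outputs to
  prescribed tail statistics of an observable. The quantile levels, observable
  interface, and weight `lam` are illustrative assumptions, not the thesis'
  choices.

  import torch

  def eta_style_loss(pred, target, observable, prescribed_quantiles, lam=1.0):
      fit = torch.mean((pred - target) ** 2)   # fit the (extreme-free) data
      obs = observable(pred)                   # observable with known statistics
      levels = torch.tensor([0.9, 0.99, 0.999])
      qs = torch.quantile(obs, levels)         # model's tail statistics
      stat = torch.mean((qs - prescribed_quantiles) ** 2)
      return fit + lam * stat                  # consistency with prescribed stats
-->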
<entry>
<title>A*-Decoding: Token-Efficient Inference Scaling</title>
<link href="https://hdl.handle.net/1721.1/164846" rel="alternate"/>
<author>
<name>Chatziveroglou, Ioannis</name>
</author>
<id>https://hdl.handle.net/1721.1/164846</id>
<updated>2026-02-13T03:49:18Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A*-Decoding: Token-Efficient Inference Scaling
Chatziveroglou, Ioannis
Inference-time scaling has emerged as a powerful alternative to parameter scaling for improving language model performance on complex reasoning tasks. While existing methods have shown strong performance gains under fixed compute budgets, there has been little focus on optimally utilizing that budget during inference. In this work, we introduce A*-decoding, a search-based inference-time strategy that builds on the A* search algorithm to optimally utilize a fixed compute budget by prioritizing high-quality reasoning paths during generation. We frame language model decoding as a structured search in a state space of partial solutions, applying the A* transition model to identify promising continuations guided by an external process supervision signal. In our experiments, A*-decoding reaches the performance levels of strong inference scaling baselines like best-of-N and particle filtering while using up to 3x fewer tokens and 30% fewer PRM passes under equivalent compute budgets. On the MATH500 and AIME 2024 benchmarks, A*-decoding enables Llama-3.2-1B-Instruct to match the performance of the 70x larger Llama-3.1-70B-Instruct, and allows Qwen3-1.7B to reach o1-like reasoning accuracy. These results highlight the power of structured search in decoding, offering an alternative to brute-force sampling or scale-driven gains. Our work demonstrates how thoughtful inference-time strategies can enhance reasoning in SLMs, pointing toward future advances in more efficient and scalable language model deployment.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
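<!--
  A minimal sketch of the search framing the entry above describes: decoding as
  best-first search over partial solutions, ordered by an external process
  supervision signal. `expand` (top candidate continuations from a language
  model) and `prm_score` (a process reward model) are hypothetical placeholders,
  not the thesis' API.

  import heapq

  def a_star_decode(prompt, expand, prm_score, is_complete, max_expansions=100):
      # Frontier holds (negated score, partial solution); heapq pops the best.
      frontier = [(-prm_score(prompt), prompt)]
      expansions = 0
      while frontier and expansions < max_expansions:
          _, partial = heapq.heappop(frontier)
          if is_complete(partial):
              return partial                  # best-scoring finished path
          expansions += 1
          for step in expand(partial):        # candidate next reasoning steps
              candidate = partial + step
              heapq.heappush(frontier, (-prm_score(candidate), candidate))
      return None
-->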
<entry>
<title>U-Net Network Enhancements to Facilitate Rapid Electron Microscopy Imaging for Connectomics</title>
<link href="https://hdl.handle.net/1721.1/164845" rel="alternate"/>
<author>
<name>Varma, Vikram</name>
</author>
<id>https://hdl.handle.net/1721.1/164845</id>
<updated>2026-02-13T03:49:27Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">U-Net Network Enhancements to Facilitate Rapid Electron Microscopy Imaging for Connectomics
Varma, Vikram
Imaging the structural and functional connections between cells in the brain allows neuroscientists to understand the brain by studying neuronal wiring diagrams. To automatically segment and classify the images used to construct these neuronal wiring diagrams, or connectomes, today’s machine learning segmentation techniques require an image scanned with an electron microscope at either a slow dwell time or with small pixel sizes. However, a scalable and more rapid implementation of connectome construction has not yet been realized because of the significant cost of multi-beam electron microscopes and the relatively slow speed at which connectomes can be constructed using a single-beam electron microscope. Segmented connectomes include sections that can be segmented properly from a fast-scanned image as well as sections that require slow scanning for proper segmentation. A potential way to reduce the time in which connectomes can be produced and segmented is therefore to first scan samples quickly and perform segmentation using a convolutional neural network, identify the areas of interest that require more detailed imaging through a learning-based error detection network, and then rescan only those identified high-interest areas to produce a fused image for segmentation. This thesis analyzes various machine learning methods for segmentation using the U-Net network and reviews proposed enhancements to the U-Net network that can better utilize electron microscopy images for the construction of segmented connectomes. The successful use of fused electron microscopy images will potentially enable higher-speed and lower-cost electron microscopy imaging for connectomics.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Temperature Germanium Waveguides for Mid-Infrared Sensing Applications</title>
<link href="https://hdl.handle.net/1721.1/164844" rel="alternate"/>
<author>
<name>Zhang, Erin Wei</name>
</author>
<id>https://hdl.handle.net/1721.1/164844</id>
<updated>2026-02-13T03:49:21Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Low-Temperature Germanium Waveguides for Mid-Infrared Sensing Applications
Zhang, Erin Wei
Waveguide integrated devices that operate in the mid-infrared (mid-IR) wavelength range (2.5-12 µm) are used for sensing the fundamental absorption bands in a variety of molecules. Germanium (Ge) is commonly used for photodetection in the near-infrared (near-IR) wavelength range of 1.2-1.6 µm due to its strong absorption from a 0.8 eV direct band gap. At longer wavelengths in the mid-IR range, Ge exhibits transparency that makes it a desirable waveguide material for sensing applications. Its epitaxial growth compatibility with silicon (Si) substrates makes Ge-on-Si an effective platform for mid-IR waveguides. For back-end-of-line (BEOL) integration of waveguides in sensing applications, the thermal budget limits the temperature to below 450°C. In this work, we investigated the use of h-line exposure as a commercially viable, low-cost option for patterning low temperature (LT) Ge-on-Si waveguides using direct write lithography. Waveguide dimensions for optimal confinement in single-mode transverse electric (TE) polarization at wavelengths of 3 µm and 10.4-11.3 µm were modeled and the direct lithography process was refined. Through dose testing and adjustments to the raster direction and pixel resolution, it was found that direct write lithography lacked the resolution required for low-loss waveguides. Scanning electron microscopy (SEM) revealed inconsistent waveguide widths and sidewall roughness, and e-beam lithography was identified as the preferred lithography process. For future integration of LT-Ge in a foundry process design kit (PDK), a universal thickness of 1.7 µm was found to support single-mode waveguide operation from 3-11.3 µm wavelength.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Log-Based Coordination Systems for Managed Cloud Environments</title>
<link href="https://hdl.handle.net/1721.1/164843" rel="alternate"/>
<author>
<name>Jimenez, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/164843</id>
<updated>2026-02-13T03:49:14Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Assessing Log-Based Coordination Systems for Managed Cloud Environments
Jimenez, Gabriel
The distributed systems landscape is undergoing a significant shift toward managed cloud environments, reducing the prevalence of self-hosted coordination services such as ZooKeeper. While ZooKeeper remains a proven and feature-rich solution for coordination tasks, its deployment in cloud environments can introduce component redundancy, because the underlying cloud platform already provides internal mechanisms to ensure coordination guarantees. This thesis investigates the design and evaluates the performance of a log-based coordination service library tailored for managed cloud environments. The proposed library removes the ensemble management overhead inherent in ZooKeeper by delegating durability and consistency responsibilities to the cloud provider’s data layer. This architectural simplification enables a modular design, allowing for tailored implementations that exploit the strengths and mitigate the limitations of a system's specified data layer. The library demonstrated feature parity with ZooKeeper for a targeted subset of coordination features, including leader election, membership tracking, and ephemeral state management. Migration from an existing ZooKeeper-based application to this work's library was similarly straightforward, requiring minimal design changes while preserving coordination guarantees. While the results show that this design does not yet match mature coordination services in raw performance, they highlight potential avenues for further research, particularly in optimizing log-based coordination systems for the unique characteristics of cloud-managed data layers. Given the industry’s steady movement toward cloud-native infrastructure, these findings provide a foundation for future exploration into lightweight, platform-integrated coordination solutions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
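<!--
  A toy sketch of the log-based leader-election primitive the entry above
  builds on: whichever candidate record lands first in the totally ordered log
  wins. The in-memory Log below is a stand-in for the provider-managed durable
  log that the thesis delegates durability and consistency to.

  import threading

  class Log:
      def __init__(self):
          self._entries, self._lock = [], threading.Lock()

      def append(self, entry):
          with self._lock:                    # total order comes from the log
              self._entries.append(entry)
              return len(self._entries) - 1   # sequence number

      def read(self):
          with self._lock:
              return list(self._entries)

  def elect(log, node_id, term):
      log.append(("candidate", term, node_id))
      for kind, t, nid in log.read():         # first candidate for this term wins
          if kind == "candidate" and t == term:
              return nid == node_id
-->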
<entry>
<title>Proof-of-Work Based Mitigation of Real Time Video DDoS Attacks</title>
<link href="https://hdl.handle.net/1721.1/164841" rel="alternate"/>
<author>
<name>Echezona, Chukwuemekalum</name>
</author>
<id>https://hdl.handle.net/1721.1/164841</id>
<updated>2026-02-13T03:49:20Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Proof-of-Work Based Mitigation of Real Time Video DDoS Attacks
Echezona, Chukwuemekalum
As the Internet continues to grow in size and complexity, Distributed Denial of Service (DDoS) attacks grow in size and complexity alongside it. One particularly common form of DDoS attack is the TCP SYN flood, which exploits the TCP handshake process to exhaust server resources. This thesis investigates the use of a novel proof-of-work (PoW) based mitigation method to respond to such attacks, specifically in the context of WebRTC video conferencing applications. PoW aims to shift the computational burden from the server to the client by utilizing a hard-to-solve puzzle that is easily verifiable. Guided by the same evaluation framework used by the original contributors, we conducted controlled experiments using SPHERE, a national research testbed, and the open-source Jitsi Meet video conference application to simulate DDoS attacks and measure their impact on video quality metrics such as upload/download bitrate and video framerate. Our experiments involved multiple scenarios, with and without active attacks and with and without PoW mitigation active. Results demonstrate that PoW imposes minimal overhead on legitimate clients while maintaining high efficacy when faced with the threat of a SYN flood attack, regardless of whether the attackers do the proof-of-work before sending traffic. These findings highlight PoW as a promising low-overhead mitigation method for WebRTC conference systems under the threat of DDoS attacks.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
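<!--
  A hedged illustration of a generic hashcash-style puzzle of the kind the
  entry above describes (expensive to solve, cheap to verify). The hash
  construction and difficulty are illustrative, not the thesis' protocol.

  import hashlib, os, itertools

  def make_challenge():
      return os.urandom(16)

  def solve(challenge, difficulty_bits=20):
      # Client-side work: find a nonce whose hash clears the difficulty target.
      target = 1 << (256 - difficulty_bits)
      for nonce in itertools.count():
          digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
          if int.from_bytes(digest, "big") < target:
              return nonce

  def verify(challenge, nonce, difficulty_bits=20):
      # Server-side check: a single hash, so verification stays cheap under load.
      digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
      return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
-->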
<entry>
<title>Optimizing Priority-Based Search for Lifelong Multi-Agent Path Finding</title>
<link href="https://hdl.handle.net/1721.1/164840" rel="alternate"/>
<author>
<name>Huang, Natalie</name>
</author>
<id>https://hdl.handle.net/1721.1/164840</id>
<updated>2026-02-13T03:49:13Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimizing Priority-Based Search for Lifelong Multi-Agent Path Finding
Huang, Natalie
The lifelong Multi-Agent Path Finding (MAPF) problem requires planning collision-free trajectories for agents operating continuously in dynamic environments. Traditional solvers such as Priority-Based Search (PBS) use fixed branching heuristics, which can be inefficient in high-congestion scenarios. This work explores how learning-based methods can improve PBS decision-making. We develop supervised learning (SL) policies trained from high-quality beam search trajectories and reinforcement learning (RL) policies learned directly through simulation, enabling adaptive branching strategies. Evaluations on warehouse-style and Kiva-style maps with varying agent densities show that learned policies can significantly boost throughput in congested warehouse layouts, while identifying scenarios where classical heuristics remain competitive. Our findings provide guidance on solver selection based on environment layout and congestion characteristics.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Interpret Language Model Diffs</title>
<link href="https://hdl.handle.net/1721.1/164839" rel="alternate"/>
<author>
<name>Goel, Avichal</name>
</author>
<id>https://hdl.handle.net/1721.1/164839</id>
<updated>2026-02-13T03:49:23Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Learning to Interpret Language Model Diffs
Goel, Avichal
Finetuning-induced changes to a model’s weights (a “model diff”) are semantically meaningful but often difficult to interpret. This makes us wonder: can we describe the content of an unknown model diff using natural language? We introduce diff interpretation training, a method that teaches a model to describe its own finetuning-induced modifications. Our approach uses synthetic model diffs to train a lightweight adapter, which in turn can be applied to a compatible finetuned model to make it self-describing. Using two simple task settings, we demonstrate that our method can successfully decode model diffs into accurate natural language descriptions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Approximate L² Error Control by Solution Post-Processing for Finite Element Solutions of PDEs with Higher-Order Adaptive Methods</title>
<link href="https://hdl.handle.net/1721.1/164837" rel="alternate"/>
<author>
<name>Botto Tornielli, Marcos Julian</name>
</author>
<id>https://hdl.handle.net/1721.1/164837</id>
<updated>2026-02-13T03:49:17Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Approximate L² Error Control by Solution Post-Processing for Finite Element Solutions of PDEs with Higher-Order Adaptive Methods
Botto Tornielli, Marcos Julian
With the substantial computing resources available today, computational fluid dynamics simulations allow scientists and engineers to simulate physical problems very accurately. However, achieving this accuracy requires a sufficiently refined computational mesh, which is a primary driver for the high cost of complex simulations. Mesh adaptation methods provide an automated way to determine the regions where a mesh needs the most refinement and generate a new mesh that efficiently targets these regions. In this thesis, we build on previous work in a posteriori error estimation and mesh adaptation for finite element methods to propose a new mesh adaptation method based on L² error control by solution post-processing. A key feature of our method is its natural extension to higher-order discretizations while providing a problem-independent adaptation methodology. Problem-independent adaptation methods do not depend on specific information about the partial differential equation (PDE) problem being solved, and can therefore be applied to a wide range of problems without modification. We present numerical results applying the approximate L² error control method to a two-dimensional advection-diffusion problem with anisotropic features. These results demonstrate the proposed method’s ability to generate well-adapted anisotropic meshes for solutions with polynomial orders 1, 2, and 3. We also apply the approximate L² error control method to a more complex two-dimensional Reynolds-Averaged Navier-Stokes problem with turbulent flow over a flat plate. We compare the convergence of the drag coefficient and the characteristics of adapted meshes obtained with the proposed method and with an output-based adaptation approach. As expected, the approximate L² error control method is not as effective as the output-based approach in reaching a converged drag coefficient value, but it nevertheless demonstrates the ability to effectively control the approximate L² error in the Mach field.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Ubiquitous Tactile Sensing through Comprehensive Tooling for Resistive Matrix-Based Sensors</title>
<link href="https://hdl.handle.net/1721.1/164836" rel="alternate"/>
<author>
<name>Murphy, Devin</name>
</author>
<id>https://hdl.handle.net/1721.1/164836</id>
<updated>2026-02-13T03:49:25Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Advancing Ubiquitous Tactile Sensing through&#13;
Comprehensive Tooling for Resistive Matrix-Based&#13;
Sensors
Murphy, Devin
Resistive matrix-based tactile sensors offer a scalable and intuitive approach to capturing human-environment interactions, yet deploying them in real-world systems remains challenging because they must remain portable, adaptive, and long-lasting. This thesis presents the WiReSens Toolkit, an open-source hardware and software platform for developing resistive tactile sensing systems that meet the demands of real-world applications. The toolkit features adaptive hardware for interfacing with resistive sensors and a web-based GUI that mediates access to otherwise complex functionality, including 1) multi-device programming and wireless visualization across three distinct communication protocols, 2) autocalibration methods for adaptive sensitivity, and 3) intermittent data transmission for low-power operation. As a use case for the toolkit, the thesis then introduces a method for the automatic design and fabrication of custom tactile sensing gloves using flexible printed circuit boards (FPCBs), enabling rapid, scalable production. Together, these contributions lower barriers to adoption and support broader exploration of tactile sensing in HCI, robotics, and ubiquitous computing.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantization Methods for Matrix Multiplication and Efficient Transformers</title>
<link href="https://hdl.handle.net/1721.1/164834" rel="alternate"/>
<author>
<name>Savkin, Semyon</name>
</author>
<id>https://hdl.handle.net/1721.1/164834</id>
<updated>2026-02-13T03:49:20Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Quantization Methods for Matrix Multiplication and Efficient Transformers
Savkin, Semyon
We study quantization in machine learning. First, we introduce NestQuant, a technique for quantization of matrix products and post-training quantization of LLMs. Beyond reducing the memory footprint, quantization accelerates inference, as the primary bottleneck during autoregressive generation is often the memory bandwidth. NestQuant leverages two nested lattices to construct an efficient vector codebook for quantization, along with practical encoding and decoding algorithms. The approach is grounded in recent theoretical work that characterizes the optimal rate–distortion trade-off for matrix products. Empirically, on Llama-3-8B, it reduces the perplexity gap between full-precision and quantized models by more than 55% relative to the current state-of-the-art technique (SpinQuant). Second, we investigate data-domain quantization for RF signals. We propose a tokenized transformer for source separation that discretizes RF waveforms into learned tokens and operates directly on the resulting sequences, outperforming strong convolutional baselines. Together, these contributions connect information-theoretic limits with deployable systems: structured vector quantizers accelerate LLM inference and enable competitive discrete representations for RF tasks.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Data-Driven Contact Localization and Force Estimation for Barometric Tactile Sensors</title>
<link href="https://hdl.handle.net/1721.1/164833" rel="alternate"/>
<author>
<name>Chun, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/164833</id>
<updated>2026-02-13T03:49:15Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Improving Data-Driven Contact Localization and Force Estimation for Barometric Tactile Sensors
Chun, Ethan
Barometric tactile sensors offer a cheap, robust, and customizable means for robots to perceive the world. Central to their operation are models that extract useful information from the sensors’ raw pressure readings. In this work, I focus on improving data-driven methods for single-point contact localization and force estimation using a previously presented three-quarter-sphere barometric tactile sensor. To allow modeling of time-dependent effects in the sensor material, I introduce a multi-threaded data collection system that captures ground-truth contact and sensor data at exactly 100 Hz. I construct both feed-forward and recurrent networks using this data, finding that a recurrent network achieves a 15% lower mean absolute error for angular contact localization on the sphere compared to prior methods. The recurrent architecture’s computational efficiency ensures that it can still run within the constraints of the sensors’ microcontroller. Despite this improvement, I find that more expressive models such as LSTMs tend to overfit on the collected data, and that physical phenomena observed during deployment were not well represented by the training metrics. To better understand the extent to which these data-driven methods alone can improve sensor performance, I shift focus away from the modeling and analyze the physical sensor instead. I find that viscous effects in the sensor can render the prediction task unlearnable without historical data and that thermal effects introduce a train-test distribution shift. Finally, I discuss design criteria for a theoretical future barometric tactile sensor that may mitigate the effects found during my modeling and analysis.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Programming over Heterogeneous Language and Hardware Targets</title>
<link href="https://hdl.handle.net/1721.1/164832" rel="alternate"/>
<author>
<name>Rojas Collins, Elias G.</name>
</author>
<id>https://hdl.handle.net/1721.1/164832</id>
<updated>2026-02-13T03:49:09Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Probabilistic Programming over Heterogeneous Language&#13;
and Hardware Targets
Rojas Collins, Elias G.
Modern probabilistic programming applications, from large-scale Bayesian inference to real-time decision making, require both the expressiveness of CPU-oriented languages such as Gen.jl and the massive parallelism of GPU-backed array languages such as GenJAX, yet existing platforms force users to trade modeling flexibility for performance. This thesis introduces GenUflect, a metalanguage that embeds multiple Gen-compatible dialects inside a single program, allowing each sub-component to run on the most appropriate language and hardware target while preserving Gen’s programmable-inference interface. GenUflect extends Gen’s dynamic-modeling language with the @union, @vmap, @amortize, @amortize≤, and @runtime_union combinators; these macros compile at build-time (or just-in-time) to autonomous generative functions written in the target dialect, link them through a lightweight FFI layer, and manage cross-device data via zero-copy MirrorArrays and lazily materialized traces. The resulting programs remain sound by construction because each foreign subtrace is itself a valid Gen generative function. Empirical studies demonstrate that this hybrid approach yields large practical gains. On a split linear-vs-sinusoidal regression task, GenUflect matches pure GenJAX throughput while running higher-order control logic on the CPU, and is up to two orders of magnitude faster than a pure Gen implementation for datasets of 10⁵ points. In a collapsed-Gibbs sampler for a Dirichlet-process mixture model, GenUflect’s elastic allocation (@amortize≤) lets vectorized GPU kernels adapt to a growing number of clusters; the same inference that takes over an hour in Gen executes in seconds with GenUflect. A probabilistic inverse-graphics pipeline further showcases how heterogeneous submodels can cooperate seamlessly within unified inference code. By coupling language interoperability with automated data movement and compile-time code generation, GenUflect bridges the gap between flexibility and speed, enabling scalable, expressive probabilistic programs that natively exploit both CPUs and accelerators.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Under-Coverage of Double Machine Learning Due to Implementation Choices</title>
<link href="https://hdl.handle.net/1721.1/164831" rel="alternate"/>
<author>
<name>Siegmann, Charlotte B.</name>
</author>
<id>https://hdl.handle.net/1721.1/164831</id>
<updated>2026-02-13T03:49:08Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Under-Coverage of Double Machine Learning Due to Implementation Choices
Siegmann, Charlotte B.
Double machine learning (DML) estimators can estimate coefficients of interest with far fewer functional form assumptions than linear econometric methods. However, DML requires researchers to make a range of implementation choices, including the selection of the function class, the random seed, and hyperparameter configurations. While asymptotic theory suggests these choices should not affect final estimates, we show that for 10 economic analyses (8 of them published and peer-reviewed), implementation choices affect the results. In half of the datasets, different implementation choices even change the interpretation of findings between negative, null, or positive effects. We link these results to a framework for empirically assessing the performance of machine-learning-based estimators, focusing on precision, coverage, and susceptibility to manipulation. This is meant to complement asymptotic theory. We demonstrate that the coverage of DML confidence intervals is too low, placing an upper bound of 48% on the expected coverage of conventional 95% confidence intervals for published DML economics papers. We show that in the status quo, the susceptibility of DML to manipulation by researchers is high, but propose ways to mitigate this susceptibility.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
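<!--
  A hedged sketch of the textbook partialling-out DML estimator with 2-fold
  cross-fitting for a partially linear model Y = theta*D + g(X) + e. The
  learner (a random forest) and the seed are exactly the kinds of
  implementation choices the entry above shows can move results.

  import numpy as np
  from sklearn.ensemble import RandomForestRegressor
  from sklearn.model_selection import KFold

  def dml_theta(X, D, Y, seed=0):
      # X: (n, p) covariates; D: (n,) treatment; Y: (n,) outcome, all floats.
      res_D, res_Y = np.empty_like(D), np.empty_like(Y)
      for train, test in KFold(n_splits=2, shuffle=True, random_state=seed).split(X):
          # Cross-fitted nuisance estimates of E[D|X] and E[Y|X].
          res_D[test] = D[test] - RandomForestRegressor(random_state=seed).fit(
              X[train], D[train]).predict(X[test])
          res_Y[test] = Y[test] - RandomForestRegressor(random_state=seed).fit(
              X[train], Y[train]).predict(X[test])
      # Residual-on-residual regression recovers theta.
      return res_D @ res_Y / (res_D @ res_D)
-->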
<entry>
<title>Optimizing Irreversible Perturbations of the Unadjusted Langevin Algorithm</title>
<link href="https://hdl.handle.net/1721.1/164830" rel="alternate"/>
<author>
<name>Zhu, Qianyu Julie</name>
</author>
<id>https://hdl.handle.net/1721.1/164830</id>
<updated>2026-02-13T03:49:05Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimizing Irreversible Perturbations of the Unadjusted Langevin Algorithm
Zhu, Qianyu Julie
A central task in Bayesian inference and scientific computing is to compute expectations with respect to probability distributions that are only known up to a normalizing constant. Markov chain Monte Carlo (MCMC) methods, and in particular Langevin dynamics, provide a powerful framework for this task by constructing stochastic processes that converge to the target distribution. However, practical implementations face two challenges: slow mixing when the target distribution is anisotropic or multimodal, and persistent discretization bias introduced by numerical schemes. This thesis investigates irreversible perturbations of overdamped Langevin dynamics, aiming to accelerate mixing while controlling discretization error. Irreversible perturbations introduce skew-symmetric drift terms that preserve the target distribution while inducing rotational flow, thereby enhancing exploration. Although prior work has established their benefits in continuous-time settings, the impact of discretization and the design of optimal perturbations for discrete-time algorithms remain open problems. We develop a framework for optimizing constant (position-independent) irreversible perturbations in the Unadjusted Langevin Algorithm (ULA). Our approach balances two competing objectives: maximizing the spectral gap of the continuous dynamics to accelerate convergence, and minimizing discretization error that drives estimation bias. Motivated by this, we introduce new criteria that jointly evaluate bias and efficiency, and we show how these criteria identify perturbations that improve performance beyond existing constructions. Theoretical analysis is complemented by numerical experiments on Gaussian and non-Gaussian targets. These experiments demonstrate that appropriately designed irreversible perturbations can reduce mean-squared error without sacrificing stability, while poorly chosen perturbations can degrade performance. The results highlight the importance of geometry-aware design and motivate systematic optimization strategies for irreversible perturbations. Overall, this work extends the theoretical and practical understanding of irreversible Langevin dynamics, bridging the gap between continuous-time spectral analysis and discrete-time numerical performance. It provides principled tools for constructing efficient MCMC samplers, with potential applications in high-dimensional Bayesian inference and modern machine learning.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
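<!--
  A hedged illustration of the object studied above: the standard ULA step with
  a constant skew-symmetric ("irreversible") drift matrix J. Because J is
  skew-symmetric, the extra flow J*grad(log pi) is divergence-free against pi
  and preserves the target in continuous time. The 2-D Gaussian target, J, and
  step size are illustrative choices, not the thesis' optimized perturbation.

  import numpy as np

  rng = np.random.default_rng(0)
  Sigma_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))

  def grad_log_pi(x):
      return -Sigma_inv @ x            # gradient of log-density of N(0, Sigma)

  J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric: J.T == -J
  eps, n_steps = 0.01, 10_000
  x = np.zeros(2)
  samples = np.empty((n_steps, 2))
  for t in range(n_steps):
      drift = (np.eye(2) + J) @ grad_log_pi(x)
      x = x + eps * drift + np.sqrt(2 * eps) * rng.standard_normal(2)
      samples[t] = x
  # `samples` now approximates (biased, unadjusted) draws from N(0, Sigma).
-->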
<entry>
<title>Single Camera Motion Compensated Viewpoint Shift</title>
<link href="https://hdl.handle.net/1721.1/164829" rel="alternate"/>
<author>
<name>Snowdon, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/164829</id>
<updated>2026-02-13T03:49:03Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Single Camera Motion Compensated Viewpoint Shift
Snowdon, Adam
Eye contact is a necessary tool for human connection, yet in most video conferencing situations eye contact is not possible. Standard laptop and webcam configurations position the camera at the top of the screen, meaning that when the user looks at other people’s faces in the center of the screen, the camera captures the user looking downward, creating the impression of poor eye contact for remote participants. Solutions involving 3D modeling of the face to synthesize a gaze-corrected view have been explored, but they are too computationally costly for most personal computers. To address this computational challenge, we draw inspiration from 2D frame interpolation techniques to synthesize a virtual camera view that repositions the user’s apparent gaze toward the camera. Our method uses a single camera located at the top of the user’s screen and requires only a brief setup period. Assuming there is only one user, our approach creates a virtual camera view that transforms the user’s viewpoint from the screen center to the camera position, enabling more realistic eye contact in video conference calls.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Framework for 3D Mouse Brain Reconstruction: RNA-based Stitching of Adjacent Tissue Slices and Co-Registration of Multimodal Imaging Data</title>
<link href="https://hdl.handle.net/1721.1/164828" rel="alternate"/>
<author>
<name>Pan, Jessica N.</name>
</author>
<id>https://hdl.handle.net/1721.1/164828</id>
<updated>2026-02-13T03:48:56Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Framework for 3D Mouse Brain Reconstruction: RNA-based Stitching of Adjacent Tissue Slices and Co-Registration of Multimodal Imaging Data
Pan, Jessica N.
Mapping the brain’s complex neural networks requires tracing the long-distance pathways of individual axons, a task that demands a comprehensive 3D reconstruction of the brain. Recently developed spatially resolved transcriptomics (SRT) methods enable the study of gene expression and biomolecule distribution in each neuron in its spatial context, opening the door to more thoroughly investigating cell-cell interactions between neurons. However, SRT methods are limited to slices of tissue; therefore, computational alignment is essential to reconstruct a cohesive 3D volume while correcting for both batch effects and inherent sample variability. This thesis presents a novel framework that addresses these challenges through three primary contributions. First, a memory-efficient, non-reference-based algorithm was developed to align the superficial surfaces of adjacent, high-resolution tissue slices. Second, these surface transformations were interpolated through the tissue slices on a proof-of-concept dataset of three adjacent slices. Third, methods for co-transforming fluorescent protein imaging data were explored to fully resolve the cell boundaries between neurons. These three methods are necessary steps towards creating a fully-resolved, multimodal 3D model of the brain.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Non-Convex Objectives to Plan More Optimal Motion for Manipulators</title>
<link href="https://hdl.handle.net/1721.1/164827" rel="alternate"/>
<author>
<name>Garg, Shruti</name>
</author>
<id>https://hdl.handle.net/1721.1/164827</id>
<updated>2026-02-13T03:49:11Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimizing Non-Convex Objectives to Plan More Optimal&#13;
Motion for Manipulators
Garg, Shruti
Non-convex optimization is essential to tackle increasingly complex and practical problems in kinematic motion planning. Although introducing non-convexity often sacrifices guarantees of feasibility and optimality, making solutions more susceptible to local minima or failure to converge, many robotic systems and tasks are non-convex by nature, necessitating at least somewhat non-convex formulations. In this thesis, we aim to mostly constrain non-convexity to the objective. This optimization structure helps preserve certain feasibility guarantees in theory and usability in practice while enhancing optimality of solutions, even if global optimality is not achieved. In the first chapter, we demonstrate the effectiveness of non-convex objectives in scenarios where motion planning involves a non-convex parameterization of the configuration space. We keep constraints strictly convex, with the non-convexity quarantined to the objective. This structure guarantees a feasible solution given a feasible initial guess. We primarily use our method to post-process Graphs of Convex Sets solutions in three domains: constrained bimanual motion, motion with guaranteed non-collision, and planning in SO(3). In each case, the non-convex objective compensates for distortion introduced by the parameterization, resulting in more efficient and natural motion. In the second chapter, we propose a teleoperation scheme with full-body motion planning for non-holonomic mobile manipulators. Our key contribution is a Differential Inverse Kinematics (DiffIK) formulation that crafts non-convex objectives to avoid singularities and joint limits, leading to more robust, feasible motion. Unlike before, the constraints are not strictly convex, so the optimization has no guarantees of feasibility. However, we mitigate the non-convexity in the constraints as much as we can by linearizing around the robot’s current position and approximating the highly non-convex non-holonomic constraint. We explore multiple formulations for singularity avoidance and empirically demonstrate that integrating these objectives into DiffIK improves motion quality for teleoperation for the RBY-1 robot.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CableSplat: Optimized 3D Gaussian Splatting for 1D Deformable Pose Estimation</title>
<link href="https://hdl.handle.net/1721.1/164826" rel="alternate"/>
<author>
<name>Pai, Sameer</name>
</author>
<id>https://hdl.handle.net/1721.1/164826</id>
<updated>2026-02-13T03:49:02Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">CableSplat: Optimized 3D Gaussian Splatting for 1D Deformable Pose Estimation
Pai, Sameer
A key challenge in the robotic manipulation of deformable objects is the lack of accurate and efficient systems for estimating their pose in real-time, especially in the presence of occlusion. In this thesis we propose CableSplat, a novel non-parametric method leveraging 3D Gaussian Splatting to estimate the pose of a linear deformable object given RGB images of the object from multiple viewpoints. To facilitate the evaluation of the performance of this method, we develop both simulated and real-world pipelines to collect calibrated and segmented recordings of cables undergoing various manipulations and transformations. We find that our method is consistently able to estimate cable pose to within an average error of ∼2.5mm across simulated tasks. Furthermore, performance on a scene reconstruction metric drops only slightly between simulated and real-world data, suggesting high-fidelity state estimation even in the real world. CableSplat is therefore a promising candidate for the extension of existing manipulation systems to deformables.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>scPhen: Single-Cell Phenotype Predictor for Alzheimer’s Disease</title>
<link href="https://hdl.handle.net/1721.1/164825" rel="alternate"/>
<author>
<name>Guo, Sophie J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164825</id>
<updated>2026-02-13T03:49:10Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">scPhen: Single-Cell Phenotype Predictor for Alzheimer’s&#13;
Disease
Guo, Sophie J.
Advances in artificial intelligence (AI) and generative AI for representation learning have transformed our ability to model complex biological systems. Single-cell RNA sequencing (scRNA-seq) provides unprecedented resolution into cellular heterogeneity, offering a powerful substrate for modeling disease circuitry. However, predicting patient-level phenotypes from scRNA-seq remains challenging due to limited sample sizes, variable cell counts, and the computational burden of modeling long-context dependencies. We present scPhen, a flexible, parametric deep-learning framework for phenotype prediction from single-cell transcriptomic data, applied here to Alzheimer’s disease (AD) as a paradigm of complex, heterogeneous pathology. scPhen consists of a cell embedding module and a patient embedding module, designed to capture both fine-grained molecular patterns and higher-order cell–cell relationships. The framework supports multiple architectural backbones, including Transformers, Graph Neural Networks (GNNs), and state-space models such as Mamba, Mamba2, and BiMamba2, allowing exploration of tunable components for optimized performance. Across classification and regression tasks, state-space models, and in particular BiMamba2, demonstrated superior predictive accuracy and computational efficiency compared to Transformer-based and hybrid approaches. We further integrated attention-based multiple instance learning to enable variable cell counts per patient and to prioritize phenotype-informative cellular subsets. Interpretability analyses using Integrated Gradients and cell-level attention scores revealed gene programs and cell populations associated with AD progression, highlighting known neuroinflammatory signatures and suggesting novel molecular targets. By unifying cutting-edge sequence modeling architectures with scalable single-cell analysis, scPhen provides a generalizable, high-resolution approach to phenotype prediction. While demonstrated here in AD, this framework is readily extensible to other complex diseases and multi-modal cellular datasets, bridging computational innovation and biological discovery.
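To make the attention-based multiple-instance pooling step concrete, here is a minimal sketch in the spirit of attention MIL (Ilse et al., 2018), not scPhen's actual code; the module name, dimensions, and data below are assumptions.

import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):          # hypothetical name, not from the thesis
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, 1))

    def forward(self, cells):               # cells: (n_cells, dim) for one patient
        w = torch.softmax(self.score(cells), dim=0)      # per-cell attention weights
        return (w * cells).sum(dim=0), w    # patient embedding plus cell weights

cells = torch.randn(500, 128)               # variable cell count per patient
pooled, weights = AttentionMILPool(128)(cells)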
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Task Functional Localizers Using Naturalistic fMRI</title>
<link href="https://hdl.handle.net/1721.1/164824" rel="alternate"/>
<author>
<name>Wilke, Jordan</name>
</author>
<id>https://hdl.handle.net/1721.1/164824</id>
<updated>2026-02-13T03:49:09Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Predicting Task Functional Localizers Using Naturalistic fMRI
Wilke, Jordan
Functional magnetic resonance imaging (fMRI) data collected during naturalistic stimuli have shown promise for predicting individual traits, biomarkers of disease, and functional brain localizations, potentially offering advantages over traditional resting-state approaches. This study investigated the use of interpretable deep learning models to predict demographics and functional task localizer activations from fMRI time-series data collected while participants viewed naturalistic stimuli. Using the data of 143 subjects from the Human Connectome Project, I analyzed 7T fMRI scans from participants watching movies to predict sex, age, and functional localizer activations across multiple cognitive tasks. I employed state-of-the-art machine learning architectures, including DICE and Glacier models, specifically chosen for their interpretable design features that build directed connectivity matrices and produce weighted temporal attention maps. These models aimed to capture dynamic brain activity patterns while maintaining the ability to understand which temporal features drive predictions. The results successfully reproduced previous findings for sex classification but showed poor performance for age prediction, with correlations ranging from -0.175 to 0.243. For functional localizer predictions, models initially appeared to achieve high performance, with some specific contrasts having correlations around 0.9 and Dice scores generally above 0.6. However, detailed analysis revealed that these models were primarily predicting group averages rather than learning meaningful inter-subject variability, as evidenced by chance-level subject identification accuracy. This finding contrasts with previous work that demonstrated successful prediction of individual differences in functional localizations. The failure to capture inter-subject variability represents a significant limitation, as individual differences in functional regions of interest are crucial for applications such as pre-surgical mapping and disease prediction. My findings suggest that predicting from raw fMRI time-series may require different approaches than those used here, with preprocessed functional connectivity matrices showing promising results, and highlight the importance of sufficient training data to separate signal from noise when learning directly from naturalistic stimuli. Despite these challenges, this work establishes important methodological foundations and identifies key limitations that must be addressed in future research combining naturalistic stimuli with machine learning for fMRI prediction tasks. The findings emphasize the need for models that can capture individual functional differences while maintaining the interpretability necessary for understanding how naturalistic stimuli drive brain-based predictions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Programming with Low-Level, High-Performance GPU Programmable Inference</title>
<link href="https://hdl.handle.net/1721.1/164823" rel="alternate"/>
<author>
<name>Chung, Karen</name>
</author>
<id>https://hdl.handle.net/1721.1/164823</id>
<updated>2026-02-13T03:49:04Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Probabilistic Programming with Low-Level, High-Performance GPU Programmable Inference
Chung, Karen
GPU-compatible probabilistic programming languages (PPLs) have enabled high-performance, data-parallel programmable inference. However, these systems face fundamental trade-offs between expressiveness and performance, as their GPU code generation is automated and black-boxed, limiting optimization opportunities and imposing restrictions on program expressivity. This thesis introduces GenCUDA, a probabilistic programming system that addresses this limitation by embedding the CUDA GPU programming language directly into a C++/CUDA frontend, enabling GPU programmable inference with fine-grained control over runtime and memory profiles. GenCUDA extends the Gen probabilistic programming architecture by providing a dynamic modeling language (DML) that allows users to write performance-critical sections of generative functions as CUDA kernels while maintaining automatic trace management and the generative function interface (GFI). The system supports both sequential and parallel execution contexts through specialized effect handlers that seamlessly compose CPU and GPU code paths. Key technical contributions include: (1) a high-performance GPU distributions library achieving 10-100× speedups over TensorFlow Probability, (2) memory-efficient trace management via template-optimized parallel effect handlers, and (3) vectorized generative functions that enable massive parallelization of inference algorithms. We demonstrate GenCUDA’s capabilities through comprehensive benchmarks on inference algorithms applied to diverse models including factor graphs, mixture models, and Hidden Markov Models. Results show significant performance improvements over JAX-based implementations: up to 3× speedup for importance sampling on a hierarchical model, 5.7× speedup for parallel Gibbs sampling on factor graphs, and memory efficiency improvements for large-scale mixture models supporting up to 6× as many clusters compared to existing frameworks’ limits. The system maintains the composability and expressiveness of probabilistic programming while unlocking GPU performance optimization techniques such as kernel fusion and memory hierarchy exploitation that are inaccessible to higher-level frameworks. GenCUDA demonstrates that embedding low-level GPU programming within automated probabilistic inference workflows can achieve both performance gains and algorithmic expressivity without sacrificing the modularity of probabilistic programming paradigms.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simplifying Equivariant GPU Kernels through Tile-based Programming</title>
<link href="https://hdl.handle.net/1721.1/164822" rel="alternate"/>
<author>
<name>Kotak, Mit</name>
</author>
<id>https://hdl.handle.net/1721.1/164822</id>
<updated>2026-02-13T03:49:00Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Simplifying Equivariant GPU Kernels through Tile-based&#13;
Programming
Kotak, Mit
E(3)-equivariant neural networks have demonstrated success across a wide range of 3D modeling tasks. Until recently, they were bottlenecked by their high memory and wall-time requirements. In this thesis we first provide an overview of recent GPU kernel efforts by both academia and industry that address this issue. These approaches trade off performance for engineering complexity, while still being algorithmically bottlenecked at 10% GPU utilization. We instead trade off engineering complexity for performance. This not only lowers the barrier to GPU programming but also builds an abstraction layer to reason about future algorithmic innovations that can improve GPU utilization. Our kernel &#119861;3 is based on tiling optimizations and is implemented in just 100 lines of PyTorch-like code. We explore the performance-simplicity tradeoff with two case studies and demonstrate the practicality of our kernel workflow through downstream integration with a production model. We hope this work serves as inspiration to broaden and deepen existing equivariant kernel efforts.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From coarse fate choice to precise pattern: post-mitotic progenitor targeting</title>
<link href="https://hdl.handle.net/1721.1/164820" rel="alternate"/>
<author>
<name>Nie, Mel F.</name>
</author>
<id>https://hdl.handle.net/1721.1/164820</id>
<updated>2026-02-13T03:49:07Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">From coarse fate choice to precise pattern: post-mitotic progenitor targeting
Nie, Mel F.
Planarians possess remarkable regenerative abilities, driven by pluripotent stem cells called neoblasts. While neoblasts are known to give rise to progenitor cells that form various tissues, whether, and to what extent, these progenitors migrate across the animal remains unclear. Irradiation experiments eliminate all neoblasts outside shielded areas, allowing for the visualization of cell migration from the remaining neoblasts, but irradiated animals may not reflect homeostatic progenitor migration patterns. To address this, 5-ethynyl-2’-deoxyuridine (EdU) labeling and plug transplant techniques were used to trace progenitor movement in non-irradiated planarians. Using whole-mount fluorescence in situ hybridization (FISH) and the quantification of EdU-labeled cells, this study demonstrates that progenitor cells are capable of migrating long distances and exhibit a pronounced anterior bias in their movement and integration.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Ensemble Strategies for Generalization in Deepfake Image Detection</title>
<link href="https://hdl.handle.net/1721.1/164818" rel="alternate"/>
<author>
<name>Wagh, Rohan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164818</id>
<updated>2026-02-13T03:49:01Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Development of Ensemble Strategies for Generalization in&#13;
Deepfake Image Detection
Wagh, Rohan M.
The growing accessibility of generative models has enabled the rapid proliferation of deepfake content, posing significant challenges in image-based biometric security and media authenticity. In this thesis, six diverse facial deepfake image datasets are assembled, and four modern detection models are evaluated in a cross-domain scenario. We observe that individual models fail to generalize to images generated by techniques outside the scope of their training data. This often hinders the applicability of a single model in real-world deepfake detection. This thesis proposes ensemble strategies as a means of addressing this lack of generalization. We find that the ensemble models outperform individual models in classifying deepfake images, particularly in terms of accuracy and recall. An exhaustive evaluation of combinations of models shows that ensembles of similar models provide limited benefit, whereas ensembles of complementary models lead to significant improvements in classification performance. Ensembling models based specifically on accuracy and recall metrics also produces models that lower the rate of more harmful false negative predictions. This work highlights the value of ensemble models in improving generalization across diverse image families and provides a framework for building robustness in real-world deepfake detection systems.
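As a toy illustration of the ensembling idea, not the thesis's models or metrics, a soft-voting combiner over per-image fake-probabilities might look like the following; the two detectors and all numbers are invented.

import numpy as np

def soft_vote(probabilities, weights=None):
    # probabilities: (n_models, n_images) predicted fake-probabilities in [0, 1]
    w = np.ones(len(probabilities)) if weights is None else np.asarray(weights)
    avg = np.average(probabilities, axis=0, weights=w)
    return (avg >= 0.5).astype(int)         # 1 = predicted deepfake

model_a = np.array([0.9, 0.2, 0.7])         # two complementary detectors that
model_b = np.array([0.8, 0.6, 0.1])         # disagree on the last two images
print(soft_vote(np.stack([model_a, model_b])))   # [1 0 0]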
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MEKIN numerical modeling and simulation of the SPERT-III E-core transient tests</title>
<link href="https://hdl.handle.net/1721.1/164700" rel="alternate"/>
<author>
<name>Tan, Lip-Bu.</name>
</author>
<id>https://hdl.handle.net/1721.1/164700</id>
<updated>2026-02-03T04:58:28Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">MEKIN numerical modeling and simulation of the SPERT-III E-core transient tests
Tan, Lip-Bu.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of recirculating well technology with a cost comparison to pump and treat technology for containment of the CS-10 contaminant plume at the Massachusetts Military Reservation</title>
<link href="https://hdl.handle.net/1721.1/164696" rel="alternate"/>
<author>
<name>Smith, Mathew D.
            (Mathew Darin)</name>
</author>
<id>https://hdl.handle.net/1721.1/164696</id>
<updated>2026-02-03T04:58:24Z</updated>
<published>1997-01-01T00:00:00Z</published>
<summary type="text">Evaluation of recirculating well technology with a cost comparison to pump and treat technology for containment of the CS-10 contaminant plume at the Massachusetts Military Reservation
Smith, Mathew D.
            (Mathew Darin)
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 1997; Includes bibliographical references (leaves 43-45).
</summary>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The crystallization of sucrose</title>
<link href="https://hdl.handle.net/1721.1/164695" rel="alternate"/>
<author>
<name>Brown, Ernest K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164695</id>
<updated>2026-02-03T04:58:17Z</updated>
<published>1929-01-01T00:00:00Z</published>
<summary type="text">The crystallization of sucrose
Brown, Ernest K.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1929; Includes bibliographical references (leaf 81).
</summary>
<dc:date>1929-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems of Visualization for Musical Futures</title>
<link href="https://hdl.handle.net/1721.1/164673" rel="alternate"/>
<author>
<name>Naseck, Perry</name>
</author>
<id>https://hdl.handle.net/1721.1/164673</id>
<updated>2026-01-30T03:24:59Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Systems of Visualization for Musical Futures
Naseck, Perry
This thesis investigates how large-scale visual systems can communicate the presence, agency, and foresight of improvising musical agents–human and AI–during live performance. We propose a framework for manifesting AI collaborators on stage through five principles: musical transparency, live improvisational reactivity, demonstrated virtuosity, communication for collaboration, and visual fit. Two public performances operationalize these ideas: an addressable-light sculpture that renders harmonic space, and a stage-sized kinetic sculpture built from novel, low-cost Generic Pan Tilt fixtures that visualize the AI’s planned “musical futures.” The latter combines a real-time, MIDI-conditioned, Transformer-based hand-motion model with deterministic, pattern-based mappings that signal states such as resting and regeneration. Audience surveys indicate that viewers perceived links between musical turns and kinetic gestures while requesting clearer explanatory cues. We document the open-source hardware, firmware, and control protocols of the Generic Pan Tilt platform and reflect on design tradeoffs for accessibility, reliability, and expressivity. Finally, we outline a real-time analysis toolchain–motif detection, parallelism, and continuous energy/tension estimators–that emits OSC triggers for lighting, media, kinetic, and spatial-audio systems, enabling reactive shows beyond timecode. Together, these systems advance performable visualizations of human-improvised and AI-driven musical futures.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Rules for LLM-Generated Code: A RealWorld Case Study</title>
<link href="https://hdl.handle.net/1721.1/164672" rel="alternate"/>
<author>
<name>Lawrence, Jennifer M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164672</id>
<updated>2026-01-30T03:24:56Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Design Rules for LLM-Generated Code: A RealWorld Case Study
Lawrence, Jennifer M.
This thesis conducts a case study exploring the interaction between software design, extensibility, and LLM code generation. The central problem we investigate is whether LLMs violate software design principles in ways that introduce bugs and ultimately hinder extensibility. We examine several repositories belonging to the RealWorld collection, a project that demonstrates combinations of frameworks, databases, and programming languages for building full-stack web apps modeled on an existing social media application. We create a concept-based implementation of the RealWorld API. Concept Design defines software systems in terms of the abstract purposes and relationships of self-contained units of functionality. It enforces stringent design standards and aims to help humans better understand complex software behavior. To test code extensibility, we develop three phases of new functionality to be added to the RealWorld API. Each phase is intended to mimic real-world software development, adding functionality that is commonly found in social media platforms while increasing nuance and complexity. The code for these extensions is generated by an AI agent, then reviewed by a human coder who classifies and fixes any bugs. In this study, we examine how LLMs interact with software paradigms like Concept Design, the kinds of design violations they produce, and whether these violations correlate with bugs that impede extensibility.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cognify: An On-Device, AI-powered Learning Assistant</title>
<link href="https://hdl.handle.net/1721.1/164671" rel="alternate"/>
<author>
<name>Huang, Siyong</name>
</author>
<id>https://hdl.handle.net/1721.1/164671</id>
<updated>2026-01-30T03:24:55Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Cognify: An On-Device, AI-powered Learning Assistant
Huang, Siyong
Large Language Models (LLMs) have proven highly effective for a wide range of natural language processing tasks, but their size and compute requirements often restrict their use to powerful cloud-based infrastructure. In recent years, significant progress has been made in shrinking LLMs while maintaining performance levels comparable to much larger models. We are approaching the point where the capabilities of massive, multi-billion-parameter models can be realistically replicated on consumer-grade devices. This thesis builds upon that foundation by developing an AI-powered note-taking application that runs entirely offline, using only the compute resources available on a personal laptop. The application is designed to listen to lectures alongside the student and provide support in real time through transcription, note generation, and context-aware search. Achieving this level of interactivity locally introduces challenges in reducing end-to-end latency, which this project addresses through both model-level optimizations and the design of efficient prompting and inference algorithms. A demo of the app can be found on YouTube.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance Analysis of the Apple AMX Matrix Accelerator</title>
<link href="https://hdl.handle.net/1721.1/164670" rel="alternate"/>
<author>
<name>Zhou, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/164670</id>
<updated>2026-01-30T03:24:55Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Performance Analysis of the Apple AMX Matrix Accelerator
Zhou, Jonathan
Apple Silicon integrates a dedicated Apple Matrix Coprocessor (AMX) that executes outer-product-style computations with high throughput, but its public programming model remains largely hidden behind the Accelerate framework. This thesis turns AMX into a more predictable and practical target by combining (i) empirical throughput characterization, (ii) a case study on AMX-specific matrix multiplication (GEMM) design, and (iii) an interpretable rule-based latency model that predicts cycle counts for short AMX instruction sequences. First, microbenchmarks quantify AMX load/store and compute limits across matrix and vector modes and data types. We analyze throughput in both GFLOPS and AMX instructions per cycle, and also observe output-register-based throughput limitations. Second, we develop an in-place GEMM that uses masked outer products and strategically overlapping tiles to avoid the scratch buffers used by Accelerate, outperforming Accelerate while preserving simplicity. Third, we introduce a compact latency model that decomposes cycles into per-instruction BaseTime, symmetric SwitchLatency for instruction changes, and instruction FullLatency (data dependency) terms. Fitted with non-negative coordinate descent on length-2 loops and validated on length-3 sequences via a lightweight loop simulation, the model obtains reasonably high accuracy while remaining helpful for those trying to understand the architecture.
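To illustrate that additive decomposition, a hedged sketch follows; the instruction names, parameter values, and dependency encoding are hypothetical stand-ins, not measured AMX numbers.

def predict_cycles(seq, base, switch, full, deps):
    # seq: instruction names; deps: pairs (i, j), instruction j consumes i's result
    cycles = sum(base[op] for op in seq)              # per-instruction BaseTime
    for a, b in zip(seq, seq[1:]):                    # symmetric SwitchLatency
        if a != b:
            cycles += switch[frozenset((a, b))]
    cycles += sum(full[seq[i]] for i, _ in deps)      # FullLatency on dependencies
    return cycles

base = {"ldx": 1.0, "fma64": 2.0}                     # invented numbers
switch = {frozenset(("ldx", "fma64")): 1.5}
full = {"ldx": 3.0, "fma64": 4.0}
print(predict_cycles(["ldx", "fma64", "fma64"], base, switch, full, {(0, 1)}))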
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Large Language Models from a Data Systems Perspective</title>
<link href="https://hdl.handle.net/1721.1/164667" rel="alternate"/>
<author>
<name>Chen, Peter Baile</name>
</author>
<id>https://hdl.handle.net/1721.1/164667</id>
<updated>2026-01-30T03:24:54Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Optimizing Large Language Models from a Data SystemsPerspective
Chen, Peter Baile
Strong retrieval and reasoning capabilities are essential for large language models (LLMs) to effectively handle a broad spectrum of downstream tasks, such as open-domain question answering and solving math or science problems. While current LLM-based frameworks achieve strong performance on complex retrieval and reasoning tasks, they do so at a high computational cost. Additionally, they often lack structured, systematic problem-solving strategies, leading to unexpected failures. In particular, these models typically operate in an iterative, online, and isolated fashion—failing to exploit relationships across data sources, opportunities for offline computation, and the benefits of reusability—resulting in less-than-optimal outcomes. In contrast, traditional data management systems are engineered for both efficiency and accuracy, with careful coordination across all stages of the query pipeline. Inspired by these principles, this work proposes novel approaches to improve LLM-based retrieval and reasoning by incorporating optimization techniques from data systems. Our evaluation across a range of knowledge- and reasoning-intensive datasets demonstrates significant gains in both accuracy and computational efficiency.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fabrication of Superconducting Reflectionless Filters for Quantum Microwave Circuits</title>
<link href="https://hdl.handle.net/1721.1/164665" rel="alternate"/>
<author>
<name>Bui, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/164665</id>
<updated>2026-01-30T03:24:51Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Fabrication of Superconducting Reflectionless Filters for Quantum Microwave Circuits
Bui, Eric
The performance and scalability of superconducting quantum circuits depend critically on the microwave environment. Minimizing signal reflections and suppressing thermal noise are essential for achieving high-fidelity readout and preserving qubit coherence. A significant challenge arises from the use of conventional cryogenic components such as isolators and circulators, which exhibit nonideal out-of-band reflection characteristics. Reflections degrade impedance matching and limit the performance of broadband quantum-limited amplifiers. Superconducting implementations of reflectionless microwave filters offer a promising solution to mitigate these issues. The focus of this work is the fabrication and cryogenic characterization of reflectionless filters compatible with superconducting qubit fabrication flows. Devices were implemented on high-resistivity silicon substrates using aluminum ground planes, integrated nichrome resistors, and crossovers formed with SiO2 interlayer dielectric. Cryogenic measurements at 20 mK demonstrate high return loss, confirming the viability of these filters for co-fabrication with traveling-wave parametric amplifiers (TWPAs) and circuit quantum electrodynamics (cQED) architectures. The filters exhibit low insertion loss in the passband to maintain quantum measurement efficiency and provide broadband reflection suppression across frequencies relevant to superconducting qubits, offering a scalable way to manage microwave noise in superconducting quantum processors.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CONDOR: Clinical Ontology-aware Networked Data Organization and Retrieval</title>
<link href="https://hdl.handle.net/1721.1/164664" rel="alternate"/>
<author>
<name>Dongo Aguirre, Gyalpo Melchisedeck</name>
</author>
<id>https://hdl.handle.net/1721.1/164664</id>
<updated>2026-01-30T03:24:23Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">CONDOR: Clinical Ontology-aware Networked Data Organization and Retrieval
Dongo Aguirre, Gyalpo Melchisedeck
Until now, state-of-the-art research into AI-driven clinical workflows has been confined to proprietary, closed-source systems from vendors like Epic and Oracle, or private experiments like Stanford’s ChatEHR, creating a critical barrier to academic innovation. This thesis introduces CONDOR, the first fully open-source and replicable research environment designed to simulate an agentic, conversational AI interacting with a high-fidelity Electronic Health Record (EHR). By integrating an open-source, FHIR-native EHR (Medplum) with a complex, realistic public clinical dataset (MIMIC-IV FHIR), CONDOR provides a foundational testbed that has been previously unavailable to the research community. The framework’s primary contribution is a novel alignment and evaluation methodology that adapts the principles of SelfCite to the clinical domain. We propose a “ClinicalConfidence” score to quantify the trustworthiness of generated statements and programmatically generate a high-quality preference dataset for alignment using Simple Preference Optimization (SimPO). We compare a standard vector-based Retrieval-Augmented Generation (RAG) baseline against a more advanced GraphRAG architecture that leverages a two-tiered knowledge graph of patient data and medical ontologies. Our results demonstrate that the full CONDOR system, combining GraphRAG with SimPO alignment, significantly improves citation quality and verifiability, establishing a new open-source benchmark for the development of safe and reliable clinical AI.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Stage LLM Reasoning for Automated Detection and Classification of High-Impact Misinformation</title>
<link href="https://hdl.handle.net/1721.1/164663" rel="alternate"/>
<author>
<name>Nair, Anushka Manchanda</name>
</author>
<id>https://hdl.handle.net/1721.1/164663</id>
<updated>2026-01-30T03:24:50Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Multi-Stage LLM Reasoning for Automated Detection and Classification of High-Impact Misinformation
Nair, Anushka Manchanda
As of 2025, social platforms have become a primary news source, magnifying the reach of misleading content [1]. Exposure to misinformation has been linked to shifts in public attitudes and behavior, including vaccine uptake [2] and voting behaviors [3]. However, current misinformation detection approaches often focus on a narrow definition of misinformation: factual claims that can be clearly judged as true or false. Yet recent research suggests the problem lies elsewhere: overt falsehoods (“vaccines contain microchips”) can carry little harm, while technically accurate but decontextualized narratives can be more influential. Allen et al. (2024) [4] found that factually accurate “vaccine-skeptical” content had a much greater impact on vaccine hesitancy than misinformation flagged by fact-checkers. These narratives can work through omitted information, misleading framing, or cherry-picked evidence, forms of manipulation that can elude traditional fact-checking. Though professional fact-checkers are often able to recognize these tactics and the broader context of information, they cannot keep pace with the volume of online content. This thesis designs a Large Language Model (LLM)-based pipeline meant to partner with, rather than replace, human fact-checkers. The system decomposes content into its explicit and implicit claims, rhetorical tactics, and the “missing context” questions it raises; retrieves evidence from fact-check databases and reliable sources; and synthesizes grounded explanations while assigning calibrated harm scores to guide triage. Evaluated on fact-checked tweets, the pipeline matched expert judgments in 92.6% of cases where experts agreed, and flagged for review posts where experts disagreed, a gray zone requiring human judgment. The system’s explanations ranked higher than crowdsourced Community Notes in helpfulness, clarity, and trustworthiness when assessed by an LLM, and harm evaluations aligned with human reviewers in 87.5% of cases, enabling prioritization of content with the greatest potential impact. Despite constraints of sample size and processing latency, the results demonstrate the feasibility of a human–AI workflow that treats disagreement as a signal and directs scarce attention towards high-impact misinformation that current automated systems can miss.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatial Emission and Polarization Control in Integrated Photonics for Optical-Trapping and Trapped-Ion Systems</title>
<link href="https://hdl.handle.net/1721.1/164661" rel="alternate"/>
<author>
<name>Sneh, Tal</name>
</author>
<id>https://hdl.handle.net/1721.1/164661</id>
<updated>2026-01-30T03:24:49Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Spatial Emission and Polarization Control in Integrated Photonics for Optical-Trapping and Trapped-Ion Systems
Sneh, Tal
Recent advances in silicon photonics have yielded impressive results in fields including biophotonic optical tweezers and trapped-ion quantum systems. However, the majority of these demonstrations, while offering advantages in size, cost, and dense integration, lag behind their bulk-optic counterparts, limited by a lack of critical advanced functionality such as spatial control of light in the near field or polarization control at visible wavelengths. This thesis addresses this gap by designing and experimentally demonstrating the first, to the best of our knowledge, cell experiments using single-beam integrated optical tweezers, chip-based 3D printers, and integrated polarization rotators and splitters at blue wavelengths. First, we demonstrate optical trapping and tweezing of microspheres using a near-field-focusing integrated optical phased array, at a standoff distance over two orders of magnitude larger than prior integrated demonstrations. We then use this system to perform the first cell experiments using single-beam integrated optical tweezers. Second, we use a tunable integrated optical phased array operating at red wavelengths to print designs in a visible-light-curing resin, demonstrating the first chip-based 3D printer. Third, we design and experimentally demonstrate the first integrated polarization rotators and splitters operating at blue wavelengths, enabling polarization control on chip for sophisticated integrated manipulation of trapped-ion and neutral-atom quantum systems. Finally, we develop key polarization-diverse integrated-photonics devices and utilize them to implement a variety of integrated-photonics-based polarization-gradient-cooling systems, culminating in the first demonstration of polarization-gradient cooling of a trapped ion by an integrated-photonics-based system.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ALPACA: An Algorithmic Pipeline for Automated Contour Annotation of Carnatic Music: A Dynamic Programming Framework for Pitch Segmentation and Note Transcription</title>
<link href="https://hdl.handle.net/1721.1/164659" rel="alternate"/>
<author>
<name>Parthasarathi, Sruthi</name>
</author>
<id>https://hdl.handle.net/1721.1/164659</id>
<updated>2026-01-30T03:24:45Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">ALPACA: An Algorithmic Pipeline for Automated&#13;
Contour Annotation of Carnatic Music:&#13;
A Dynamic Programming Framework for Pitch Segmentation and Note Transcription
Parthasarathi, Sruthi
In recent years, a wide range of computational techniques have been developed to extract information from recorded performances of Western music. However, these methods often achieve limited success when applied to non-Western musical traditions. Carnatic music, in particular, poses unique challenges due to the absence of a standardized notation system and the lack of a consistent mapping between frequency bands and note categories. This project introduces a dynamic programming–based transcription framework, incorporating novel methods for label estimation, contour segmentation, and related subtasks, and establishes the foundations for end-to-end automatic transcription of this art form.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Diverse Treatment Policies from Observational Health Data</title>
<link href="https://hdl.handle.net/1721.1/164658" rel="alternate"/>
<author>
<name>Ejilemele, Abe</name>
</author>
<id>https://hdl.handle.net/1721.1/164658</id>
<updated>2026-01-30T03:24:53Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Modeling Diverse Treatment Policies from Observational Health Data
Ejilemele, Abe
Learning policies for real-world tasks often requires modeling human behavior, especially in domains like healthcare and driving. In these settings, skills are learned from expert human demonstrations, but such data are typically multimodal, violating the common single-expert assumption. We study sequential clinical treatment decision-making in the offline imitation learning setting, where environment interaction is prohibited, reflecting the challenges of experimentation in safety-critical domains. Existing methods for multi-expert offline imitation learning often restrict the latent space, underspecify its structure, or omit objective terms that prevent latent collapse and encourage behavior discovery. We propose a fully offline approach that addresses these shortcomings and improves learning from multi-expert demonstrations through modifications to the formulation of the latent approximate posterior and the model architecture. We suggest that our method is more robust to real-world settings where the true number of demonstrators may not be known. We also incorporate an occupancy-matching term that injects awareness of the rollout distribution over trajectories into our behavior cloning objective. We evaluate our method against baselines on both simulated multi-expert demonstrations from an extended S-CVSim and real-world demonstrations from MIMIC. Our approach achieves consistently higher next-step action prediction and behavior discovery performance. While ground-truth expert policies are unavailable for MIMIC, visual analysis shows our method uncovers clinically meaningful variations in expert strategies, reflecting treatment population diversity.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Modular Superconducting Quantum Processor using Chiral Waveguide Quantum Electrodynamics</title>
<link href="https://hdl.handle.net/1721.1/164656" rel="alternate"/>
<author>
<name>Yankelevich, Beatriz</name>
</author>
<id>https://hdl.handle.net/1721.1/164656</id>
<updated>2026-01-30T03:24:46Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Towards a Modular Superconducting Quantum Processor&#13;
using Chiral Waveguide Quantum Electrodynamics
Yankelevich, Beatriz
As the field of superconducting quantum computing advances, networking qubits within a single system becomes essential for building modular processors. Modularity allows the system to circumvent scalability constraints and enable architectures and computational schemes that exploit non-local connectivity to enhance processing capabilities. This work proposes non-local entanglement generation methods based on the theory of chiral waveguide quantum electrodynamics, the quantum-optical framework that describes systems of atoms coupled non-reciprocally to a continuum of modes. We leverage these effects to design a chiral communication module composed of multiple superconducting qubits, capable of both directional single-photon routing and the realization of chiral, driven-dissipative entanglement protocols.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fine-tuning Boltz for Antibody-Antigen Binding Prediction</title>
<link href="https://hdl.handle.net/1721.1/164655" rel="alternate"/>
<author>
<name>Kim, Ji Won</name>
</author>
<id>https://hdl.handle.net/1721.1/164655</id>
<updated>2026-01-30T03:24:32Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Fine-tuning Boltz for Antibody-Antigen Binding&#13;
Prediction
Kim, Ji Won
Accurate prediction of antibody-antigen binding is a central challenge in computational immunology. Its direct implications for therapeutic antibody design and vaccine development have made it one of the most rapidly growing fields. Recent advances in protein language models and structure prediction have provided new tools for modeling, yet these approaches often fall short in capturing the fine-grained features that drive binding specificity in antibodies and antigens. This thesis evaluates multiple strategies for improving predictive performance. First, we investigate a custom multiple sequence alignment (MSA) experiment. Standard Boltz-2 training relies on MSAs from broad protein databases, which capture global diversity but under-represent lineage-specific constraints. To address this, we constructed antibody-specific MSAs to test whether restricting the search space to antibody repertoires improves model learning. Unfortunately, gains in downstream binding prediction were limited, suggesting that further work is needed on training models against specialized databases in the first place. Our second line of investigation focused on fine-tuning Boltz-2, a generative structural foundation model, using curated antibody–antigen data. By leveraging Boltz-2’s internal sequence embeddings, we trained a predictive model for binding affinity. This approach yielded stronger ROC performance compared to baseline models, achieving a validation AUROC of 0.645, demonstrating the advantages of structural generative priors for antibody–antigen binding prediction.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deterministic Circuit Range Avoidance is (Likely) Intractable</title>
<link href="https://hdl.handle.net/1721.1/164654" rel="alternate"/>
<author>
<name>Ilango, Rahul</name>
</author>
<id>https://hdl.handle.net/1721.1/164654</id>
<updated>2026-01-30T03:24:47Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Deterministic Circuit Range Avoidance is (Likely) Intractable
Ilango, Rahul
Circuit Range Avoidance (denoted Avoid) is a computational problem where, given a Boolean circuit with more output bits than input bits, one must output a string outside of the range of the circuit. A simple counting argument implies that such a string must always exist and also guarantees that outputting a uniformly random string is correct with good probability. A natural question is whether this can be derandomized: does there exist an efficient deterministic algorithm for Avoid? We give the first evidence that deterministically solving Avoid is intractable. We show that there is no polynomial-time algorithm for Avoid under plausible assumptions in complexity theory and cryptography. Specifically, our assumptions are that NP ≠ coNP and that subexponentially-secure indistinguishability obfuscation exists.
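The counting argument can be simulated directly; the sketch below is a toy stand-in in which an arbitrary invented function plays the role of the circuit, only to illustrate why a random string succeeds with good probability.

import random

def toy_circuit(x):
    return (3 * x + 1) % 32        # an arbitrary map from 4 input bits to 5 output bits

m, n = 4, 5
image = {toy_circuit(x) for x in range(2 ** m)}       # brute-force the range
trials = 10_000
misses = sum(random.randrange(2 ** n) not in image for _ in range(trials))
print(misses / trials)             # about 0.5: a random string usually solves Avoid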
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Empirical Evaluation of LLMs for the Assessment of Subjective Qualities</title>
<link href="https://hdl.handle.net/1721.1/164652" rel="alternate"/>
<author>
<name>Ranade, Esha</name>
</author>
<id>https://hdl.handle.net/1721.1/164652</id>
<updated>2026-01-30T03:24:42Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">An Empirical Evaluation of LLMs for the Assessment of Subjective Qualities
Ranade, Esha
Large Language Models (LLMs) have achieved remarkable success in natural language processing tasks and are increasingly being used for language generation. Significant advancements in this field have unlocked capabilities that enable their adoption in sophisticated roles, including acting as evaluators or "judges" of text for various attributes such as factuality, relevance, fluency, and reasoning quality. However, their understanding and ability to assess subjective attributes, such as the level of formality in a piece of writing, and produce content matching these subjective attributes remains unclear and underexplored. This research develops a methodology to study how LLMs evaluate subjective attributes. It has three primary contributions: (i) a reproducible user study to generate human-annotated labels for different attributes, (ii) an analysis of the extent to which different LLMs provide subjective labels aligned with human annotators, and (iii) an analysis of the extent to which LLMs generate content aligned with specified intended subjective labels, relative to humans. The user study and the analyses have been conducted both with and without a reference scale. The scale itself, the survey design, and the evaluation questions have all undergone multiple rounds of iteration informed by study tester feedback to improve clarity, consistency, and reliability for the final study. Comparisons between human-generated ratings and LLM-generated ratings for both human-generated content and LLM-generated content reveal the extent to which LLMs align with human judgment, providing insights into their capabilities and limitations. While humans typically do better in their roles, LLMs are able to attain reliably high levels of success in producing and judging text, despite tending to err on the more-formal side. Both groups’ performance increases significantly with the aid of a formalized reference scale. Across the suite of models tested, OpenAI’s GPT family leads overall performance, with Anthropic’s Claude and Meta’s LLaMA series showing notable strengths in specific formality ranges. Although this work focuses on the formality attribute of text, the methodology developed can be used to evaluate other subjective qualities of text, such as conciseness, usefulness, or persuasiveness. Ultimately, these findings may guide future efforts to fine-tune LLMs to produce text that more precisely matches the desired stylistic or ethical standards.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Burst Parallelism of SigmaOS processes with CRIU</title>
<link href="https://hdl.handle.net/1721.1/164651" rel="alternate"/>
<author>
<name>Tang, Frederick</name>
</author>
<id>https://hdl.handle.net/1721.1/164651</id>
<updated>2026-01-30T03:24:29Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Accelerating Burst Parallelism of SigmaOS processes with CRIU
Tang, Frederick
σOS is a multi-tenant cloud operating system designed to integrate the agility of serverless environments with the interactivity of microservices. A key to achieving this integration is the ability to start new instances of server processes quickly. However, σOS only handles σcontainer initialization, and does not assist with runtime and app initialization costs. One approach to overcome this challenge is to checkpoint processes using Checkpoint/Restore in Userspace (CRIU). CRIU is a Linux toolset which can start new server instances by restoring them from a saved checkpointed state, avoiding the full cost of reinitialization and setup. This thesis introduces σCRIU, which adapts CRIU for burst-parallel spawning of microservices in σOS. σCRIU implements a number of optimizations: compressing checkpointed proc metadata to reduce network communication costs, implementing demand paging using a lazy page service, and caching kernel metadata to reduce CRIU’s restore operation latency. These optimizations allow σCRIU to start new microservices on remote machines quickly while still making use of CRIU’s existing proven checkpoint-and-restore technology.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visually Accurate Database-Enabled Reconstructions of Scenes (VADERS)</title>
<link href="https://hdl.handle.net/1721.1/164649" rel="alternate"/>
<author>
<name>Gosalia, Mehek</name>
</author>
<id>https://hdl.handle.net/1721.1/164649</id>
<updated>2026-01-30T03:24:39Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Visually Accurate Database-Enabled Reconstructions of Scenes (VADERS)
Gosalia, Mehek
This work introduces a novel pipeline for scene reconstruction that jointly prioritizes semantic accuracy and visual fidelity, addressing a gap in current approaches. Prior pipelines often emphasize either semantic analysis or photorealistic rendering, but rarely both. This method combines scene analysis, segmentation, and retexturing to yield reconstructions that preserve structural semantics while convincingly reflecting the visual qualities of the original image. The motivation lies in the limitations of existing systems. Existing database-assisted approaches depend on proprietary datasets that restrict stylistic diversity or on using in-the-wild assets. This constrains expressiveness and often produces results that are visually misaligned. Conversely, pipelines optimized for visual realism neglect semantic correctness, generating outputs that may appear plausible but lack categorical or structural grounding. Our framework addresses this by first enforcing semantic accuracy via selecting database assets, then editing those assets to be stylistically faithful to the reference, producing reconstructions that are both interpretable and expressive. We begin with database-assisted scene analysis, using an open-source asset database containing chairs, lamps, sofas, tables, and benches. Input images are depth-mapped, segmented, and parsed into object masks, which are matched to database assets based on semantic labels and visual correspondence. Each asset is broken into semantic segments and rescaled per-component using vision-language model predictions to better match the reference object. Finally, the asset is retextured based on the image mask of the reference object in the input image. Evaluation on six diverse scenes, both photographs and artworks, shows the pipeline produces semantically grounded, visually accurate reconstructions under non-research conditions. Future work will focus on expanding the asset database, reducing reliance on proprietary texturing, and releasing an open-source implementation to broaden accessibility.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Planar Silicon Solar Cells for Singlet Fission Sensitization</title>
<link href="https://hdl.handle.net/1721.1/164648" rel="alternate"/>
<author>
<name>Wang, Janet Z.</name>
</author>
<id>https://hdl.handle.net/1721.1/164648</id>
<updated>2026-01-30T03:24:48Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Designing Planar Silicon Solar Cells for Singlet Fission&#13;
Sensitization
Wang, Janet Z.
Singlet fission (SF)-sensitized silicon (Si) solar cells offer a path towards surpassing the Shockley-Queisser efficiency limit for single-junction solar cells. However, realizing efficient charge transfer from the SF material to Si remains a significant challenge that requires careful interface engineering. Prior work showed that Si microwire cells sensitized with tetracene (Tc) and a zinc phthalocyanine (ZnPc) donor layer can boost photocurrent and external quantum efficiency (EQE). Planar devices are simpler to fabricate than microwire devices and reproduce the planar geometry of optical test samples to connect studies of the interface to device performance. This thesis integrates modeling and experimental approaches to guide the design of planar SF-sensitized Si solar cells. We developed a fabrication process for planar cells comparing varied oxide passivation layer growth conditions and surface treatments, Si(100) versus Si(111) orientation, and junctions formed by diffusion doping versus ion implantation. Complementary surface photovoltage (SPV) measurements on matching optical stacks show evidence of an illumination-induced transient positive charge density at the Tc/ZnPc/oxide/Si interface, consistent with increased field effect passivation. We find that SPV responses on AlOx/n-Si are dominated by substrate band bending; consequently, SiOx is the preferred passivation to suppress the background and isolate the SPV signals driven by the organics. A drift–diffusion model shows that the diffusion doping (exponential) emitters reduce surface recombination rates compared to ion implantation (Gaussian) emitters. We also show that a positive fixed charge density at the surface enhances short wavelength EQE, with the effect strongest for Gaussian emitters. Together, these results provide practical design rules for planar SF-sensitized Si cells and the study of charge transfer at organic-Si interfaces.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microsecond Time Synchronization for Computing Fiber Networks</title>
<link href="https://hdl.handle.net/1721.1/164647" rel="alternate"/>
<author>
<name>Li, Jenny Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/164647</id>
<updated>2026-01-30T03:24:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Microsecond Time Synchronization for Computing Fiber&#13;
Networks
Li, Jenny Y.
We present a microsecond-accurate time synchronization method and time localization system for a sensor network of spatially-separated, low-power Bluetooth nodes, with the goal of integrating this system into thermally-drawn computing fibers. Each node consists of an nRF54L15 SoC paired with an ICS-43434 digital I2S microphone, enabling synchronized audio data collection. Our design leverages Bluetooth LE connection events to synchronize local clocks with sub-10 µs accuracy across a multi-peripheral topology; we trigger precise, CPU-independent hardware events to timestamp audio samples. We demonstrate that timestamped I2S data stored in external SPI flash can be correlated across devices to extract TDoA measurements for localizing sound sources. Cross-correlation techniques allow us to estimate direction and position, with localization errors reduced from 4.17 m to 0.39 m through clock synchronization. This prototype provides a roadmap for embedding synchronized sensing and computation within fibers and smart textiles, with implications for on-body audio perception and distributed sensing in flexible electronics.
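The cross-correlation step can be sketched in a few lines; this is illustrative only, with synthetic signals standing in for the synchronized recordings and every constant invented. Multiplying the recovered TDoA by the speed of sound gives the path-length difference used for localization.

import numpy as np

fs = 48_000                            # shared sample rate after clock sync
sig = np.random.randn(4_096)           # stand-in for a recorded sound event
delay = 37                             # true offset in samples
a = np.concatenate([sig, np.zeros(64)])
b = np.concatenate([np.zeros(delay), sig, np.zeros(64 - delay)])

xc = np.correlate(b, a, mode="full")   # cross-correlate the two recordings
lags = np.arange(-len(a) + 1, len(b))
tdoa = lags[np.argmax(xc)] / fs        # time difference of arrival, in seconds
print(tdoa * fs)                       # recovers the 37-sample offset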
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From String to Structure: Graph Threading for Physical Assembly</title>
<link href="https://hdl.handle.net/1721.1/164646" rel="alternate"/>
<author>
<name>Lin, Rebecca Y. E.</name>
</author>
<id>https://hdl.handle.net/1721.1/164646</id>
<updated>2026-01-30T03:24:37Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">From String to Structure: Graph Threading for Physical Assembly
Lin, Rebecca Y. E.
Many artistic and engineering applications—from beadwork to deployable structures—create intricate, and sometimes dynamic, designs by threading cord through tubular components. We model the underlying design challenge—threading tubes so that they achieve a target connectivity when the string is pulled taut—as graph threading. In this formulation, tubes and their junctions correspond to edges and vertices of a graph, and the goal is to find a closed walk that induces a connected graph at every vertex while avoiding U-turns. We study two optimization objectives motivated by fabrication and deployment: minimizing length to reduce material cost and assembly time, and minimizing turn to reduce frictional resistance during deployment. For the length metric, we present a polynomial-time algorithm via reduction to minimum-weight perfect matching, prove tight worst-case bounds on optimal threadings, and identify special cases with faster algorithms. For the turn metric, we characterize the complexity landscape, proving NP-hardness for graphs of maximum degree 4, tractability for degree 3, and giving exact and approximation algorithms for restricted variants, including rectangular grid graphs. Finally, we turn from theory to fabrication, proposing multi-configuration threading—a new approach for achieving multiple predetermined configurations within a single system. As in earlier chapters, framing the problem in graph-theoretical terms provides access to powerful problem-solving techniques, guiding both algorithmic analysis and physical design.
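Reading the definition above literally, one can write a small checker for candidate threadings. This is a hedged sketch of one reading of the abstract's condition, not the thesis's algorithms; the union-find bookkeeping is our assumption about what connectivity at a vertex means.

def is_threading(edges, walk):         # walk: closed, walk[0] == walk[-1]
    parent = {}                        # union-find over (vertex, edge) slots

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]          # path halving
            x = parent[x]
        return x

    steps = list(zip(walk, walk[1:]))
    for (u1, v1), (u2, v2) in zip(steps, steps[1:] + steps[:1]):
        if (u2, v2) == (v1, u1):
            return False               # immediate U-turn along the same tube
        e1, e2 = frozenset((u1, v1)), frozenset((u2, v2))
        parent[find((v1, e1))] = find((v1, e2))    # the string ties e1 to e2 at v1
    for u, v in edges:
        if (u, frozenset((u, v))) not in parent:
            return False               # every tube must be threaded
    vertices = {x for e in edges for x in e}
    return all(len({find((v, frozenset(e))) for e in edges if v in e}) == 1
               for v in vertices)

square = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(is_threading(square, [0, 1, 2, 3, 0]))       # True: one loop threads a cycle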
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data Attribution-Based Approach to Model Diagnosis in LC-MS/MS Structure Prediction</title>
<link href="https://hdl.handle.net/1721.1/164644" rel="alternate"/>
<author>
<name>Khoo, Ling Min Serena</name>
</author>
<id>https://hdl.handle.net/1721.1/164644</id>
<updated>2026-01-30T03:24:40Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Data Attribution-Based Approach to Model Diagnosis in LC-MS/MS Structure Prediction
Khoo, Ling Min Serena
Elucidating the structure of small molecules from complex mixtures using liquid chromatography tandem mass spectrometry (LC-MS/MS) is a challenging task with far-reaching implications in areas such as drug discovery, environmental science, and metabolism research. Yet despite its importance and significant efforts to develop machine learning (ML) models that elucidate the molecular structures of unknown compounds from LC-MS/MS spectra, the performance of these models remains limited and has been reported as insufficient for practical applications, warranting a deeper investigation into their limitations to advance ML-based molecular structure elucidation from LC-MS/MS and enable their utility in real-world settings. Here, we leverage data attribution methods to systematically identify and validate hypotheses about the sources of generalization challenges that hinder current model performance. Our goal is to automatically uncover insights into the failure modes of existing ML models for LC-MS/MS, thereby laying the foundation for developing more robust and accurate models.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Dynamic Objects in Scenes with Generative Particle Systems</title>
<link href="https://hdl.handle.net/1721.1/164643" rel="alternate"/>
<author>
<name>Li, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/164643</id>
<updated>2026-01-30T03:24:31Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Modeling Dynamic Objects in Scenes with Generative Particle Systems
Li, Eric
Humans readily interpret the motion of deformable and rigid bodies, even when encountering unfamiliar objects with minimal shape or texture cues. In such cases, motion serves as a critical signal for recognition and understanding. Inspired by this ability, we propose a generative model that represents 3D matter as small Gaussians (“particles”) drawn from clusters capturing groups of coherently moving matter. We develop an efficient inference algorithm based on parallelized block Gibbs sampling to recover stable particle motion and rigid groupings. Our model provides a tractable, object-centric generalization of as-rigid-as-possible (ARAP) regularizers used in motion tracking. To assess alignment with human perceptual judgments, we test our approach on random dot kinematograms—sparse motion displays in which dot trajectories convey latent object structure, often used to probe visual understanding of motion and grouping. In this setting, our approach captures human-like responses, including graded patterns of uncertainty across ambiguous conditions. Applied to naturalistic RGB videos, it infers dense particle representations that track object motion and deformation over time. These results demonstrate that our model enables persistent latent scene structure suitable for object-level reasoning.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Arm Qubit for Faster, Higher Fidelity Readout and Gates</title>
<link href="https://hdl.handle.net/1721.1/164642" rel="alternate"/>
<author>
<name>Kline, Jeremy B.</name>
</author>
<id>https://hdl.handle.net/1721.1/164642</id>
<updated>2026-01-30T03:24:44Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">The Arm Qubit for Faster, Higher Fidelity Readout and Gates
Kline, Jeremy B.
Currently, superconducting qubit processors are bottlenecked by errors during two-qubit gates, readout, and idle time. All three error contributions could be reduced by increasing the speed of operations relative to the qubit lifetime, without introducing additional leakage errors. Readout and two-qubit gates are multimode interactions and are therefore limited by the coupling strength between the modes. In this thesis, we introduce a two-mode superconducting qubit which uses one mode to facilitate strong coupling to other modes of the quantum processor and one mode to store data with high coherence. Simulations show that this architecture could enable order-of-magnitude reductions in error during readout and two-qubit gates.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clustering Algorithms for Component Placement in Printed Circuit Boards</title>
<link href="https://hdl.handle.net/1721.1/164641" rel="alternate"/>
<author>
<name>Petrusenko, Vlada</name>
</author>
<id>https://hdl.handle.net/1721.1/164641</id>
<updated>2026-01-30T03:24:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Clustering Algorithms for Component Placement in Printed Circuit Boards
Petrusenko, Vlada
In 2024, approximately 12 billion printed circuit boards (PCBs) were manufactured globally [1], a figure that continues to grow, yet the majority of PCB layouts are still completed manually. The manual design process amounts to millions of hours of tedium that could be eased with automation. One of the biggest challenges is that complex PCB designs typically have hundreds, sometimes thousands, of components and even more net connections between them, which makes both manual and automated placement very time-consuming. To improve placement performance, in this thesis we construct a custom weighted undirected graph representation of a board’s components and nets that encodes physical and electrical constraints, and we integrate the Louvain and Leiden clustering algorithms for component clustering in PCB placement. We also report comparative metrics against the project’s prior approach, spectral clustering applied to unweighted graph representations, which has no knowledge of the electrical and physical constraints associated with PCB designs and thus produces results that require more manual correction. The new clustering approach generated better clusterings, reducing average runtime by 51.05%, decreasing estimated routing length by 7.72%, and improving the component association score by 12.8%.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seeing the Forest Through the Trees: Knowledge Retrieval for Streamlining Particle Physics Analysis</title>
<link href="https://hdl.handle.net/1721.1/164602" rel="alternate"/>
<author>
<name>McGreivy, James C.</name>
</author>
<id>https://hdl.handle.net/1721.1/164602</id>
<updated>2026-01-21T04:07:55Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Seeing the Forest Through the Trees: Knowledge Retrieval for Streamlining Particle Physics Analysis
McGreivy, James C.
Generative Large Language Models (LLMs) are a promising approach to structuring knowledge contained within otherwise unmanageable corpora of research literature produced by large-scale and long-running scientific collaborations. Within experimental particle physics, such structured knowledge bases could expedite methodological and editorial review. Complementarily, within the broader scientific community, generative LLM systems grounded in published work could make for reliable companions that allow non-experts to analyze open-access data. Techniques such as Retrieval Augmented Generation (RAG) rely on semantically matching localized text chunks, but struggle to maintain coherent context when relevant information spans multiple segments, leading to a fragmented representation devoid of global cross-document information. In this work I utilize the hierarchical organization of experimental physics articles to build a tree representation of the corpus, and present the SciTreeRAG system, which leverages this structure to construct contexts that are more focused and contextually rich than those of standard RAG. Additionally, I develop methods for using LLMs to transform the unstructured corpus into a structured knowledge graph representation. I then implement SciGraphRAG, a retrieval system that leverages this knowledge graph to access the global cross-document relationships eluding standard RAG, with the goal of encapsulating domain-specific connections and expertise. I demonstrate proof-of-concept implementations of both systems using the corpus of the LHCb experiment at CERN.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Low-Cost In Situ Gas Sensors for Oceanographic Applications</title>
<link href="https://hdl.handle.net/1721.1/164601" rel="alternate"/>
<author>
<name>Gower, Elizabeth Ann</name>
</author>
<id>https://hdl.handle.net/1721.1/164601</id>
<updated>2026-01-21T04:07:47Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Development of Low-Cost In Situ Gas Sensors for Oceanographic Applications
Gower, Elizabeth Ann
Anthropogenic activity has increased atmospheric carbon dioxide (CO₂) levels, disrupting the global carbon cycle and driving widespread environmental change. The ocean acts as a major sink for this CO₂. Accurate and scalable in situ monitoring of oceanic carbon chemistry is vital for understanding the impacts of climate change and informing marine carbon dioxide removal (mCDR) strategies. Many existing in situ instruments for marine applications are constrained by their size, cost, power requirements, or reliance on consumable reagents. Developing low-cost, compact, low-power, and accurate in situ sensors would significantly enhance the spatiotemporal resolution of oceanographic data and enable widespread monitoring of dissolved gases throughout the ocean. This, in turn, would deepen our understanding of how, where, and when changes are occurring within the marine carbon cycle. Two key variables essential for studying this cycle are the partial pressure of carbon dioxide (pCO₂) and dissolved inorganic carbon (DIC). This thesis presents the development of two sensors, one for in situ pCO₂ measurement and another for novel DIC quantification, both designed to be affordable, reliable, and scalable tools for advancing our understanding of ocean chemistry and the global carbon system. First, the development, calibration, and open-ocean deployment of a miniaturized Dissolved Multi-Gas Sensor (DMGS) that measures pCO₂ and the partial pressure of oxygen (pO₂) is presented. The sensor was integrated into a custom-built surface drifter designed to entangle with Sargassum mats and send data autonomously. The drifter utilized commercial off-the-shelf (COTS) components and cost roughly $1000 to build. After lab testing, a drifter was deployed in the Great Atlantic Sargassum Belt (GASB) and collected data for 22 days. In addition to gas data, the drifter tracked temperature, light intensity, humidity, pressure, and location, sending measurements via an Iridium satellite. The resulting data captured dynamic changes in localized gas concentrations, temperature, and light levels that highlighted photosynthetic and respiratory activity within Sargassum patches. These drifters demonstrate the value of in situ data for investigating marine biogeochemical processes that contribute to the marine carbon cycle, especially in areas with high biological activity. Next, this thesis presents the iterative development of a novel DIC sensor with potential for future in situ applications. Initial prototypes tested the feasibility of using a COTS CO₂ sensor in both static and flow-through configurations; however, sensor saturation issues prompted a shift to a pressure-based detection method. Multiple test setups were evaluated for pressure stability and sensor sensitivity, culminating in a bottle-based flow system that demonstrated the potential for reagent-minimized, pressure-based DIC quantification. With the final setup, a COTS pressure sensor that sat behind a gas-permeable membrane was found to repeatably and accurately quantify DIC from acidified seawater. This approach of quantifying DIC via pressure change is novel in the field of gas sensing and maintains a low-cost, accessible design. Together, the sensors developed in this thesis expand the toolkit for marine carbon monitoring and provide a foundation for affordable, distributed sensing networks.
These technologies enable higher-resolution insights into ocean biogeochemistry and support critical monitoring, reporting, and verification (MRV) frameworks needed to evaluate the effectiveness of mCDR techniques. Continued refinement of these low-cost platforms could play a key role in understanding and mitigating anthropogenic impacts on marine systems.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying Reference Class Forecasting to Multifamily Investments: Identifying and Capturing Operational Alpha through the Outside View</title>
<link href="https://hdl.handle.net/1721.1/164600" rel="alternate"/>
<author>
<name>Firouzian, Fardean</name>
</author>
<id>https://hdl.handle.net/1721.1/164600</id>
<updated>2026-01-21T04:07:55Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Applying Reference Class Forecasting to Multifamily Investments: Identifying and Capturing Operational Alpha through the Outside View
Firouzian, Fardean
This thesis applies Reference Class Forecasting (RCF) to multifamily real estate underwriting as a means of countering optimism bias, strategic misrepresentation, and other distortions embedded in the traditional “inside view.” Adapted from its proven application in infrastructure and corporate capital budgeting, RCF anchors projections in the actual performance distributions of comparable assets rather than in deal-specific narratives. The research centers on the development of the “Comp Warehouse,” a structured repository of property-level financials organized by market, asset class, vintage, and unit scale. By benchmarking assumptions against statistically valid reference classes, the approach enforces empirical discipline and highlights opportunities for “operational alpha”—the marginal increase in net operating income (NOI) achieved when underperforming assets converge on median peer performance. A South Florida case study demonstrates the method’s utility in an acquisition context. Analysis of 48 assets across Melbourne, Miami, Fort Lauderdale, and West Palm Beach shows that while rent levels cluster tightly around market medians, operating expenses vary widely, producing large dispersion in realized NOI. Applying the framework to a 191-unit Class A property in Fort Lauderdale illustrates how RCF can ground underwriting assumptions by distinguishing between defensible revenue-driven growth strategies and less plausible expense-reduction projections proposed in a bidding scenario. Recognizing constraints of both scale and frequency, this thesis also explores artificial intelligence as a tool for automating the ingestion and standardization of operating statements and rent rolls. Properly deployed in a human-in-the-loop framework, AI can reduce data friction, expand sample sizes, and sharpen forecasting precision. The contribution of this thesis is twofold: it demonstrates the feasibility of applying RCF to the multifamily sector—an asset class whose relative standardization, liquidity, and data availability make it especially conducive to outside-view benchmarking—and it situates the methodology within a technology-native architecture designed to scale empirical discipline, enhance underwriting rigor, and systematically capture operational alpha.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Concretely-Efficient Multi-Key Homomorphic Secret Sharing and Applications</title>
<link href="https://hdl.handle.net/1721.1/164599" rel="alternate"/>
<author>
<name>He, Kaiwen</name>
</author>
<id>https://hdl.handle.net/1721.1/164599</id>
<updated>2026-01-21T04:07:53Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Concretely-Efficient Multi-Key Homomorphic Secret Sharing and Applications
He, Kaiwen
Homomorphic secret sharing (HSS) is a powerful cryptographic primitive that enables efficient, low-communication secure computation without the use of fully homomorphic encryption. Public-key HSS is a well-known variant that supports inputs from multiple parties, but all parties must agree on a joint public key before any party can encode their inputs, requiring extra rounds of communication in applications. Recently, Couteau et al. (EUROCRYPT 2025) constructed multi-key HSS (MKHSS)—a new primitive which allows parties to encode their inputs under independent keys—under the DCR assumption. MKHSS assumes only a reusable common reference string, without the need for prior interactions between parties or a public-key infrastructure. In this paper, we construct and implement the first concretely-efficient MKHSS scheme under the same assumptions used by Couteau et al. Using an algorithmic insight that reduces the largest modulus in Couteau et al. from N⁴ to N², our optimized implementation can homomorphically multiply inputs in 5.0 milliseconds—while an implementation of Couteau et al. requires 224.6 milliseconds—thereby achieving a 45× speedup. A powerful application of MKHSS is to realize attribute-based non-interactive key exchange (ANIKE), which generalizes password-authenticated key exchange (PAKE) to arbitrary attribute policies. ANIKE is currently only known from MKHSS. We use our implementation to evaluate the first concretely-efficient ANIKE schemes for a range of practically useful policies. Using our implementation, two parties can perform a geolocation-based key exchange in 1.65 seconds and a fuzzy PAKE on an 8-word passphrase in 7.59 seconds for realistic parameters, on a single core. Compared to using Couteau et al., which requires 62.5 and 253 seconds, we achieve 38× and 33× speedups, respectively.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reciprocity and Normality in the Scattering Matrix of Disordered Media</title>
<link href="https://hdl.handle.net/1721.1/164598" rel="alternate"/>
<author>
<name>Bharadwaj, Shreyas K.</name>
</author>
<id>https://hdl.handle.net/1721.1/164598</id>
<updated>2026-01-21T04:07:46Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Reciprocity and Normality in the Scattering Matrix of Disordered Media
Bharadwaj, Shreyas K.
The scattering matrix formalism provides a practical characterization of wave transport in linear, source-free systems by relating a set of operationally defined input and output spatial channels. The matrix is structured as a block operator, with diagonal blocks encoding same-side reflection matrices (RMs) and off-diagonal blocks encoding transmission matrices (TMs) in opposing propagation directions. Under Helmholtz reciprocity, symmetry relations are imposed: RMs are symmetric, and forward and reverse TMs are mathematical transposes of each other. These relations were employed as constraints to correct system-induced aberrations in measured scattering matrices of complex optical media via a matrix-based gradient descent procedure. Resulting phase corrections corresponded closely with classical aberration modes without heuristic parameterizations, suggesting that these modes naturally arise to restore reciprocity-induced symmetry. Vectorial TMs were measured for single- and double-pass propagation through step-index multimode fibers (MMFs) and scattering samples, with corrected phase terms showing agreement across sample types. Furthermore, matrix normality was introduced as a descriptor of stable modal transport. Normal matrices admit unitary diagonalization, reflecting orthogonal eigenchannels and spectrally coherent propagation. Near-normal behavior was observed in fiber TMs, while RMs of scattering slabs remained strongly non-normal, as quantified by a normalized Henrici departure. Sufficient conditions for normality were identified in terms of the system Green’s function and its bi-compression onto the measurement basis. A complementary dispersion experiment investigated two regimes: nearly-normal MMFs, where the Wigner–Smith time-delay operator was jointly diagonalizable and supported accurate first-order spectral models; and mechanically compressed fibers, where loss of normality produced noncommuting operators and collapse of model fidelity. These results suggest that normality captures well-behaved modal transport, underpinning the validity of parametric models and other operator-based analyses of disordered media. Together, reciprocity and normality impose complementary constraints on wave transport: reciprocity governs global symmetry, while normality captures internal coherence of modal propagation. Relevance is noted for matrix-based imaging, inverse scattering theory, and non-Hermitian wave physics, where symmetry and modal stability remain central.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mesh Differentiable Rendering for Real-World Scenes</title>
<link href="https://hdl.handle.net/1721.1/164597" rel="alternate"/>
<author>
<name>Charatan, David</name>
</author>
<id>https://hdl.handle.net/1721.1/164597</id>
<updated>2026-01-21T04:07:54Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Mesh Differentiable Rendering for Real-World Scenes
Charatan, David
Differentiable rendering has established itself as an effective tool for 3D reconstruction and novel view synthesis. Most state-of-the-art differentiable rendering methods use purpose-built renderers to optimize specialized, nonstandard 3D representations. However, most downstream applications of differentiable rendering rely on 3D meshes, which are near-universally supported due to their suitability for a wide range of rendering, simulation, and 3D modeling workflows. While prior methods have explored using 3D meshes directly within gradient-based optimization, they have been limited to object-centric scenes and cannot reconstruct real-world, unbounded scenes. This work addresses this shortcoming via a differentiable rendering formulation that combines an off-the-shelf, non-differentiable triangle rasterizer with a 3D representation that consists of nested mesh shells. During every forward pass, these shells are extracted from an underlying signed distance field. Then, the shells are independently rasterized and the resulting images are alpha-composited using opacities derived from the shells' per-vertex signed distance values. Notably, the shells' vertex positions are updated only via the underlying signed distance field, not via backpropagation through the rasterizer itself. This makes our method compatible with off-the-shelf, non-differentiable triangle rasterizers. To the best of our knowledge, our method is the first differentiable mesh rendering method that scales to unbounded, real-world 3D scenes, where it produces novel view synthesis results whose quality approaches that of state-of-the-art, non-mesh-based methods. Our method's performance is also competitive with state-of-the-art surface rendering methods on object-centric scenes. Ultimately, our method suggests that it may be possible to solve the differentiable rendering problem using tools from the conventional graphics toolbox rather than relying on specialized renderers.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Task-Aware Spatial and Temporal Aggregation for Capacity Expansion Planning</title>
<link href="https://hdl.handle.net/1721.1/164596" rel="alternate"/>
<author>
<name>Duguey, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/164596</id>
<updated>2026-01-21T04:07:49Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Task-Aware Spatial and Temporal Aggregation for Capacity Expansion Planning
Duguey, Gabriel
As we plan tomorrow’s electricity system, we face fundamental questions: where should new power plants go, which technologies deserve investment, and how much transmission is enough? These decisions are the domain of Capacity Expansion Planning (CEP), a class of optimization models that guide long-term infrastructure investments in power systems. To be realistic, CEP models must capture fine-grained spatial and temporal variations because demand varies by city and climate, while wind and solar output depend on weather patterns that shift hour by hour and location by location. But representing the system with thousands of time steps and hundreds of nodes makes the optimization problem computationally intractable. &#13;
&#13;
This thesis addresses the core question: how can spatial and temporal aggregation in CEP models be designed to preserve planning-relevant patterns that drive investment decisions? Existing approaches often treat aggregation as a neutral preprocessing step, relying on heuristics like political boundaries or geographic proximity. In contrast, we propose a task-aware pipeline that treats aggregation as an integral modeling decision, explicitly aligned with planning objectives.&#13;
&#13;
The approach builds a composite similarity metric that blends diverse planning-relevant signals, including, but not limited to, duration curves, ramping behavior, and spatial correlation, and uses k-medoids clustering to define spatial zones. Temporal aggregation is then applied to daily system-wide profiles, selecting representative days that maintain cross-zonal interactions. The result is a reduced spatio-temporal dataset that is fed into a CEP model, and the resulting investment decisions are re-evaluated at full resolution to assess their feasibility and true cost.&#13;
&#13;
Experiments on a New England case study show the pipeline consistently outperforms common baselines like political boundaries, geographic proximity, or capacity factor statistics. Among 50 feature weightings, the best design reduces system cost by 13% compared to heuristics. Correlation-based features drive the best results, while raw amplitude and geographic location often degrade performance when used alone.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Earth as Equity Partner: A Revolutionary Approach to Ecological Conservation through Housing Development</title>
<link href="https://hdl.handle.net/1721.1/164595" rel="alternate"/>
<author>
<name>McDonough, Kate</name>
</author>
<id>https://hdl.handle.net/1721.1/164595</id>
<updated>2026-01-21T04:07:45Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Earth as Equity Partner: A Revolutionary Approach to Ecological Conservation through Housing Development
McDonough, Kate
Duddington Farm is a 312-acre site north of Baltimore, Maryland. A stream restoration project was completed at the location nearly a decade ago in concert with the State of Maryland, the Manor Conservancy, Ecotone, and landowners Harry and Tara McDonough. The project met with some success; however, due to a lack of State oversight and long-term management provisions, the ecology has since declined. The following proposal outlines a new model for long-term land restoration and conservation, whereby land conservation and restoration are financed not solely through short-term grants and fragile easements, but through the thoughtful use of modest real estate interventions. A small cluster of homes is developed on one portion of the site. This development increases the value of the land, generates equity, and establishes a permanent conservation fund. The design protects habitat and invites people into a deeper relationship with the natural world. The plan is scalable: the land-value capture can be applied to future land conservation projects, compounding returns and projecting a model to preserve hundreds of thousands of acres of critical land across the United States. The model draws on traditional ecological knowledge (TEK) and Indigenous practices of engaging with the land, reflecting a deeper understanding of how humans and nature can coexist in mutually healthy ways. It is designed at a time when watersheds, national parks, and old-growth forests face their greatest ecological threats. Duddington Farm is used as a retrospective case, but the broader goal is to create a regenerative framework for conservation-based development across critical watershed regions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Finite Elements</title>
<link href="https://hdl.handle.net/1721.1/164593" rel="alternate"/>
<author>
<name>Collin, Teodoro Fields</name>
</author>
<id>https://hdl.handle.net/1721.1/164593</id>
<updated>2026-01-21T04:07:51Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Automated Finite Elements
Collin, Teodoro Fields
Finite element methods (FEMs) are a powerful and ubiquitous tool for solving engineering problems. Experimenting with different finite elements can improve the quality and efficiency of solutions. Furthermore, in some cases, the wrong (but nonetheless most common) choice of finite element will produce solutions which converge to the wrong answer regardless of mesh resolution. However, in practice, the choice of finite element is not explored due to the complexity of re-deriving and re-implementing finite element methods. Trying a new finite element is challenging because practitioners must manually derive the formulas needed to use these elements and must implement those formulas within the context of a potentially complex system. We address this problem by introducing ElementForge, a finite element system that is parametric over the literate mathematical specification of a finite element in a domain-specific language (DSL). The ElementForge compiler reasons about tensor spaces, tensors, and tensor bases from first principles to derive implementations of finite elements, and it is able to automatically derive implementations of elements previously only derived by hand. Further, ElementForge minimally couples several key mathematical concepts, mainly tensor fields, mesh topologies, sparse tensors, and assembled finite element operators, to produce a complete finite element system that is parametric over the choice of element. Consequently, the elements derived by the compiler can be applied parametrically to new meshes, PDEs, and boundary conditions. We evaluate our system by implementing several simulations with different finite elements, demonstrating that our system can explore tradeoffs in generality, accuracy, speed, and representational complexity. For example, we are able to implement the Morley, Bell, Argyris, and Hermite-like elements in fewer than 50 lines of code each and use them all in a single simulation.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LumiModeling: A Gaussian Splatting-Based Tool for Recreating Dynamic Material and Lighting Interaction in Architecture</title>
<link href="https://hdl.handle.net/1721.1/164588" rel="alternate"/>
<author>
<name>Cao, Biru</name>
</author>
<id>https://hdl.handle.net/1721.1/164588</id>
<updated>2026-01-21T04:07:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">LumiModeling: A Gaussian Splatting-Based Tool for Recreating Dynamic Material and Lighting Interaction in Architecture
Cao, Biru
This thesis presents LumiModeling, a real-time visualization tool based on Gaussian Splatting (GS) that simulates the dynamic interplay between materiality and lighting in architectural environments. While conventional design workflows rely on geometric modeling and photorealistic rendering, they often abstract complex material behaviors and fall short in capturing light-material interactions. In contrast, GS enables the reconstruction of high-fidelity 3D models from 2D image sets, representing view-dependent effects such as reflection, transparency, and surface roughness. A comparative analysis using real-world data from the MIT Stata Center and the Met Warehouse demonstrates GS’s advantages over mesh-based photogrammetry, particularly in rendering reflective and transparent materials. This work extends existing GS capabilities by implementing a relightable pipeline based on the Relightable3DGaussian model (Gao et al., 2023), in which each Gaussian point is augmented with physical parameters, including BRDF, surface normals, and incident lighting. The Stata Center dataset is used to test the relighting of GS. A user study involving architecture professionals reveals that perceptual focus shifts from geometry to materiality and lighting as visual realism increases. The findings highlight the potential of relightable GS in architectural visualization and anticipate its integration into future design workflows.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Urban Data Memory: Using Generative AI to Structure and Visualize Zoning Data for Urban Planning Evaluation</title>
<link href="https://hdl.handle.net/1721.1/164587" rel="alternate"/>
<author>
<name>Kupershmidt, Adi</name>
</author>
<id>https://hdl.handle.net/1721.1/164587</id>
<updated>2026-01-21T04:07:52Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Urban Data Memory: Using Generative AI to Structure and Visualize Zoning Data for Urban Planning Evaluation
Kupershmidt, Adi
Urban planners face significant challenges in systematically and quantitatively evaluating past planning practices, stemming in part from the scarcity of accessible structured data. The period from a plan’s initiation to implementation can span generations; recorded data from the planning processes are often deemed obsolete for addressing present concerns by the time of post-occupancy evaluation. This research examines whether, and under what conditions, generative AI can help bridge this gap, highlighting both challenges and opportunities, by introducing a system that responsively transforms qualitative zoning data into structured, queryable formats to support the quantitative analysis of planning practices. &#13;
A database of ~150 approved semi-structured urban plans under Tel Aviv municipality’s local jurisdiction supports this project's case study. The system relies on proprietary LLMs (ChatGPT, Claude), streamlining a natural language query input through three agentic tasks: (1) RAG (Retrieval Augmented Generation) based querying, generating free-text answers from all plans, (2) structuring the answers into valid JSON, and (3) visualizing the structured data. Key findings indicate a system precision of 85.45%, evaluated end-to-end on 11 representative queries, each validated against 40 manually labeled plans. The tool provides actionable insights, enabling queries such as trends in sheltered bicycle parking approvals or the status of affordable housing planning over the past decade.&#13;
This research underlines the significance of flexibly structuring non- and semi-structured data for urban science. It addresses the growing gap between static legacy data collection and real-time policymaking, democratizing access to planning information and fostering informed decision-making practices. Integrating cutting-edge AI-driven tools contributes to the current discourse on AI applications for city management and planning by providing a replicable model for more cities and planning datasets to build upon and improve.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Behavioral Responses to Congestion Pricing in New York City: Mode Shift, Preference Change, and Effect Persistence</title>
<link href="https://hdl.handle.net/1721.1/164581" rel="alternate"/>
<author>
<name>Shen, ChenAn</name>
</author>
<id>https://hdl.handle.net/1721.1/164581</id>
<updated>2026-01-21T04:07:41Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Behavioral Responses to Congestion Pricing in New York City: Mode Shift, Preference Change, and Effect Persistence
Shen, ChenAn
This thesis examines the behavioral impacts of New York City’s congestion pricing policy on weekday peak-hour travel into the pricing zone. Using a two-stage Bayesian Multinomial Logit framework applied to monthly aggregate mobility data, the study disentangles underlying preference shifts from observed mode share changes in response to the toll. Stage 1 estimates population-level travel sensitivities to cost and time, while Stage 2 uses a hierarchical structure to capture heterogeneity across demographic segments defined by income, age, and gender. The analysis spans January–June 2025 and compares results to the same months in 2024 as a counterfactual scenario without pricing. Findings show that while the policy generated a sustained mode shift away from private automobiles toward public transit, preference adaptation varied by demographic group and evolved over time. Some cohorts reinforced the intended policy effects through reduced transit travel time sensitivity, while others exhibited partial reversal as cost sensitivity shifted. These dynamic patterns underscore the importance of evaluating both immediate and evolving behavioral responses when designing congestion pricing strategies and highlight the value of aggregate behavioral modeling for timely, data-driven policy assessment.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Asset Limits, Savings Behavior, and Welfare: Evidence from the SIPP and a Life-Cycle Model</title>
<link href="https://hdl.handle.net/1721.1/164579" rel="alternate"/>
<author>
<name>Gamble IV, James Monroe</name>
</author>
<id>https://hdl.handle.net/1721.1/164579</id>
<updated>2026-01-21T04:07:33Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Asset Limits, Savings Behavior, and Welfare: Evidence from the SIPP and a Life-Cycle Model
Gamble IV, James Monroe
This paper examines how asset limits in means-tested welfare programs shape household saving behavior. I exploit cross-state variation in Temporary Assistance for Needy Families (TANF) asset limits by linking these limits to individual-level data from the Survey of Income and Program Participation (SIPP) and estimating ordinary least squares (OLS) regressions with state and year fixed effects. I find that a $1 increase in the liquid asset limit corresponds to a $0.75 decrease in non-housing wealth among single mothers without a high school diploma. This suggests that less stringent asset tests reduce incentives to save, consistent with models in which more generous public insurance lowers the need for precautionary saving.&#13;
&#13;
To interpret these findings, I develop a dynamic life-cycle model of saving under income and medical expense risk, calibrated to key moments from the Hubbard, Skinner, and Zeldes framework. The model embeds Medicaid-style transfer rules and a guaranteed consumption floor. Simulations indicate that a $7,000 consumption floor can reduce median assets by up to 20% among low-education households, reflecting a decrease in self-insurance as public support increases. I then extend the model to include Achieving a Better Life Experience (ABLE) accounts, which are tax-advantaged savings vehicles for individuals with disabilities exempt from means testing. Simulations indicate that ABLE eligibility increases early-life consumption by approximately $10,000 and reduces retirement savings, with account holders shifting more spending into their working years. Together, these results yield a direct mapping from policy levers, including asset-limit generosity, earnings disregards, childcare subsidies, and ABLE exemption rules, to predicted shifts in median household assets. This offers policymakers a practical tool to balance public insurance and private precautionary savings.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strain-resolved transcriptomics: exploring functional heterogeneity of the gut microbiota in health and disease</title>
<link href="https://hdl.handle.net/1721.1/164574" rel="alternate"/>
<author>
<name>Burgos Robles, Emanuel Felipe</name>
</author>
<id>https://hdl.handle.net/1721.1/164574</id>
<updated>2026-01-21T04:08:09Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Strain-resolved transcriptomics: exploring functional heterogeneity of the gut microbiota in health and disease
Burgos Robles, Emanuel Felipe
The gut microbiome plays a critical role in inflammatory bowel diseases (IBDs), yet current analyses treat bacterial species as functionally uniform, ignoring extensive strain-level diversity that may drive disease mechanisms. Here, we developed a strain-resolved metatranscriptomics framework to investigate how transcriptional activity varies across bacterial lineages and relates to IBD pathogenesis. Using paired metagenomics and metatranscriptomics data from 1,067 fecal samples (103 IBD and 335 non-IBD patients), we first constructed phylogenetic trees for over 250 bacterial species using the single nucleotide variants within essential housekeeping genes, enabling the identification of bacterial strains. Next, we devised a statistical approach to assign mRNA reads to these strains, leveraging the natural genetic variation present across them. Our analysis revealed that closely related bacterial strains exhibit dramatically different transcriptional programs, with some strains enriched in IBD patients showing upregulation of genes involved in stress response, sugar metabolism pathways, and antimicrobial resistance. Notably, we identified transcriptionally active but genomically low-abundance taxa, highlighting the importance of measuring the transcriptional activities of strains beyond species composition. Lineage-aware differential expression analysis uncovered strain-specific adaptations to inflammatory environments. This strain-resolved approach provides a powerful framework for understanding microbial functional heterogeneity and identifying specific bacterial lineages that may contribute to disease pathogenesis, potentially guiding more targeted microbiome-based therapeutic interventions.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large Language Models and Quantifying the Regulatory Expenses of Affordable Housing: A Thorough Examination Utilizing Generative Assessment</title>
<link href="https://hdl.handle.net/1721.1/164571" rel="alternate"/>
<author>
<name>Xu, Bangjie</name>
</author>
<id>https://hdl.handle.net/1721.1/164571</id>
<updated>2026-01-21T04:08:08Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Large Language Models and Quantifying the Regulatory Expenses of Affordable Housing: A Thorough Examination Utilizing Generative Assessment
Xu, Bangjie
This thesis presents an innovative Large Language Model (LLM)-based methodology to extract and quantify housing regulations from municipal zoning codes, making possible the most comprehensive examination of regulatory costs at the municipal level across California to date. A multi-stage extraction framework is devised that delivers 85-95% accuracy in the identification and standardization of complex regulatory requirements from legal documents. Applying this methodology to over twenty California cities over the period 2015-2025, it is estimated that regulatory constraints raise the cost of developing a housing unit by roughly 5% to 10% ($50,000 to $100,000+), with the most acute constraints in the state’s coastal metros. The method also finds that regulatory costs reduce housing supply elasticity from 1.24 in low-regulation jurisdictions to 0.08 in high-regulation areas. The LLM-based framework allows analyses at an unprecedented scale and granularity and reveals, for example, that relaxing regulation through streamlining policies like the Los Angeles Transit Oriented Communities program boosts housing production in eligible zoned areas by 43%. This study makes significant contributions to the restructuring of California’s housing regulation system in response to the affordability crisis, and its methodology presents a replicable tool for regulatory analysis in other policy domains.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reimagining Public Land: Municipal Land Use to Ease Housing Crisis in Boston</title>
<link href="https://hdl.handle.net/1721.1/164564" rel="alternate"/>
<author>
<name>Murphy, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/164564</id>
<updated>2026-01-21T04:08:06Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Reimagining Public Land: Municipal Land Use to Ease Housing Crisis in Boston
Murphy, Ryan
Boston is in the midst of a severe housing crisis, driven by decades of underproduction, rising construction costs, restrictive zoning, and an inelastic real estate market that has resulted in persistent affordability challenges. This thesis explores the untapped potential of city-owned land as a powerful tool to increase housing supply and affordability in Boston. Using Boston’s 2022 Citywide Land Audit and detailed development assumptions, the analysis estimates that between 19,000 and 31,000 new housing units could be constructed across city-controlled parcels, including between 3,200 and 6,100 affordable units under the current Inclusionary Development Policy. The research draws on case studies from peer cities such as Chicago and Atlanta where municipal land has been successfully leveraged through transparent disposition processes, fast-tracked entitlements, and flexible affordability models. It argues for a policy shift in Boston toward a more streamlined, market-aware, and scalable land release strategy that prioritizes speed, cross-subsidization, and financial feasibility. Key recommendations include expanding the Welcome Home, Boston program to include mixed-income and rental housing, implementing predictable RFP cycles, offering tax abatements, and expediting the entitlement process.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modular Zipping for Transformable and Dynamic Systems</title>
<link href="https://hdl.handle.net/1721.1/164563" rel="alternate"/>
<author>
<name>Hagemann, Niklas</name>
</author>
<id>https://hdl.handle.net/1721.1/164563</id>
<updated>2026-01-21T04:08:01Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Modular Zipping for Transformable and Dynamic Systems
Hagemann, Niklas
There is a need for products, machines and environments that can change shape, transform and evolve according to their use. This thesis proposes the design of a simple, modular actuator based on reversible folding and interlocking (zipping) of flexible 3D printed strips. The proposed zipper design allows for continuous control of states between a compact and a fully deployed state. The modular actuators can be integrated into a variety of systems to enable compact, shape- and stiffness-changing structures, robots and other devices. Designs are presented for single- and double-zipper modules using the same basic zipper design. The modules can be used as modular components of compact robotic systems with the ability to expand and contract according to their environment, or as adjustable structural components to create deployable, shape- and stiffness-changing objects. The zipper design points the way towards simplified mono-material components that embed transformation and reversibility into everyday devices, products and spaces, enabling objects that are as easy to transform, reconfigure and reverse as they are to manufacture.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Embodied Representation of Time in Virtual Reality</title>
<link href="https://hdl.handle.net/1721.1/164562" rel="alternate"/>
<author>
<name>Kim, Suwan</name>
</author>
<id>https://hdl.handle.net/1721.1/164562</id>
<updated>2026-01-21T04:08:03Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Embodied Representation of Time in Virtual Reality
Kim, Suwan
Recent advancements in 3D graphics and AI-assisted generative techniques have accelerated the creation of realistic scenes for immersive technologies, including virtual reality, yet most systems continue to encode time as a linear parameter, relying on timeline-based playback. Mesh-based representations are typically constrained by fixed topologies and rely on predefined animations, which limit their capacity to encode temporal change as a spatial or perceptual phenomenon. In reality, the human experience of time is embodied and dynamic, perceived through interaction and memory. Existing digital systems fail to capture this dimension, reducing time to a passive parameter. This thesis proposes a framework for representing time as an embodied and spatial dimension within virtual reality by embedding it directly into the geometry and interaction logic of point cloud data. The system consists of three parts: (1) processing 2D images into layered volumetric point clouds to enable structural fluidity and temporally responsive spatial form; (2) enabling perceptual and spatial modulation in response to user distance and contact, with color influencing the character of change and opacity shaping its perceptual reveal at both global and local scales; and (3) enabling real-time visualization of the modulated point cloud through a custom pipeline optimized for mobile virtual reality. By embedding temporal dynamics directly into geometry and interaction logic, this thesis contributes a novel representational approach to spatiotemporal modeling in immersive systems, creating new opportunities for architectural visualization, interactive simulations, game design, and reimagining how we perceive and construct digital spaces.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Patent Visibility and the Diffusion of Trapped Knowledge: Evidence from US Grants</title>
<link href="https://hdl.handle.net/1721.1/164560" rel="alternate"/>
<author>
<name>Yao, Randol H.</name>
</author>
<id>https://hdl.handle.net/1721.1/164560</id>
<updated>2026-01-21T04:08:04Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Patent Visibility and the Diffusion of Trapped Knowledge: Evidence from US Grants
Yao, Randol H.
Valuable knowledge developed in one part of the world may remain “trapped” locally due to frictions in how knowledge is recognized and shared globally. This paper examines how granting US patents to foreign-origin inventions—by elevating their visibility and credibility—untraps the knowledge and facilitates global diffusion. Using examiner leniency as an instrument, complemented by a difference-in-differences design, I find that US grants of home country patents significantly increase both the likelihood and intensity of forward citations, including marked increases from third countries. A novel measure of “trappedness” reveals that knowledge from historically more trapped countries and sectors sees larger diffusion benefits after the US grants. These findings highlight the central role of the US as a platform of global knowledge recognition and diffusion, particularly in turning overlooked ideas into globally relevant innovations.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization of the East China Sea Continental Shelf Circulation Northeast of Taiwan Surrounding Mien-Hua Canyon</title>
<link href="https://hdl.handle.net/1721.1/164559" rel="alternate"/>
<author>
<name>Rafferty, Lieutenant Commander Keefe</name>
</author>
<id>https://hdl.handle.net/1721.1/164559</id>
<updated>2026-01-21T04:07:59Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Characterization of the East China Sea Continental Shelf Circulation Northeast of Taiwan Surrounding Mien-Hua Canyon
Rafferty, Lieutenant Commander Keefe
Submarine canyons have a proven and direct influence on continental shelf circulation and flow dynamics, especially in relation to western boundary currents. There are two key circulation features northeast of Taiwan on the East China Sea continental shelf: (1) the cold dome, a cyclonic feature that appears primarily in summer and is associated with upwelling, and (2) Kuroshio intrusions onto the continental shelf in the vicinity of Mien-Hua Canyon. This paper is a descriptive physical oceanography study focused on characterizing the circulation patterns northeast of Taiwan surrounding Mien-Hua Canyon, closely correlating these patterns with the migration of the Kuroshio and its variability and intrusions onto the southern East China Sea continental shelf, leading to the formation of the cold dome. The Institute of Oceanography at the National Taiwan University and WHOI executed a joint international field survey at Mien-Hua Canyon aiming to improve the understanding of canyon flow dynamics between the East China Sea continental shelf northeast of Taiwan and the Kuroshio, the western boundary current of the North Pacific Gyre. This joint oceanographic expedition expands on previous joint US/Taiwan physical oceanographic and ocean acoustic studies in the China Seas dating back to ASIAEX in the South China Sea during 2000-2001 and QPE in the East China Sea during 2008-2009. The strengthening and weakening of Kuroshio transport and intensity northeast of Taiwan are closely correlated with the timescales of mesoscale westward-propagating eddies arriving at the East Taiwan Channel. When a canyon has a Rossby number of ~1, or a Rossby radius equivalent to the width of the canyon, in a region of left-bounded flow, the induced cyclonic flow will experience an upwelling regime within the canyon system, with dominant upwelling located at the downstream canyon rim and vertically constrained by the Rossby height. Observational analysis of canyon bottom-moored ADCPs and vertical temperature arrays supports previous theory on submarine canyon dynamics on a continental shelf. Satellite sea surface temperature and absolute dynamic topography observations capture the formation of a cold dome northeast of Taiwan coincident with this joint oceanographic survey.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Wallets to Wages: Consumer Income, Job Design, and Pay Disparities</title>
<link href="https://hdl.handle.net/1721.1/164558" rel="alternate"/>
<author>
<name>Roh, Soohyun</name>
</author>
<id>https://hdl.handle.net/1721.1/164558</id>
<updated>2026-01-21T04:08:01Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">From Wallets to Wages: Consumer Income, Job Design, and Pay Disparities
Roh, Soohyun
Pay differences between organizations are a key source of wage inequality. I propose a novel account of these differences by starting from the consumers that these businesses serve. Firms that serve high-income consumers specialize jobs into higher-paying and higher-skilled positions focused on quality, while those that serve lower-income consumers emphasize cost minimization by requiring workers to perform a wider range of general tasks. Matching consumer foot traffic data with establishment-level wage records, I find that establishments serving higher-income consumers pay their workers more. This effect holds when comparing establishments in the same neighborhoods and industries. Longitudinally, establishments increase wages when they shift toward higher-income customers. Analysis of online job postings further reveals that jobs at higher-income-serving firms involve a narrower set of tasks that command higher market value. These findings show how consumer markets shape firms’ internal job design and contribute to pay inequality across organizations.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Fate of Federal Buildings: Real Estate Disposition and the Future of Washington, D.C.</title>
<link href="https://hdl.handle.net/1721.1/164557" rel="alternate"/>
<author>
<name>Mulcahy, Robby L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164557</id>
<updated>2026-01-21T04:07:56Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">The Fate of Federal Buildings: Real Estate Disposition and the Future of Washington, D.C.
Mulcahy, Robby L.
The United States federal government is the largest property owner in the country, with more than 370 million square feet of real estate under its control. Much of this portfolio is outdated, underutilized, and located in the urban cores of American cities. Nowhere is this more evident—or more consequential—than in Washington, D.C., where the federal government controls approximately 27% of the office market. As federal agencies adopt hybrid work models, and as the operational needs of government evolve, the existing real estate footprint has become increasingly inefficient, expensive, and misaligned with civic and market realities. This thesis investigates the opportunity to rethink federal land ownership and management as a catalyst for urban regeneration, civic stewardship, and housing production.&#13;
&#13;
Using the James V. Forrestal Building as a focal case study, the research examines the historical, policy, and spatial dynamics that have led to the current moment of reckoning. Located on Independence Avenue SW, straddling 10th Street between the National Mall and the Wharf, Forrestal is emblematic of the postwar federal design ethos: monumental, inward-facing, and hostile to street life. Once a symbol of bureaucratic permanence, the building now stands as a physical and symbolic barrier to urban connectivity and civic vitality. The case of Forrestal is used to explore broader questions: How can the federal government dispose of surplus property more effectively? What policy tools exist—or are needed—to unlock value and enable redevelopment? And what role should cities play in shaping the outcomes of federal land disposition?&#13;
&#13;
The thesis employs a mixed-methods approach that includes policy analysis, stakeholder interviews, precedent case studies, and spatial analysis of Southwest D.C. The work identifies a range of obstacles to effective disposition, including Title V of the McKinney-Vento Homeless Assistance Act, opaque OMB budget scoring rules, jurisdictional fragmentation, and the absence of a coordinating authority across federal agencies. It also identifies key lessons from successful projects such as The Yards, Walter Reed, and the Volpe Center, where thoughtful structuring and strong federal-local partnerships enabled transformative redevelopment of surplus land.&#13;
&#13;
The thesis concludes with ten detailed recommendations for reform, including reauthorization of the Federal Assets Sale and Transfer Act (FASTA), modernization of Title V and OMB scoring, the creation of Federal Redevelopment Zones, and the prioritization of housing, civic infrastructure, and design quality in disposition strategy. It argues that the federal government must shift from a passive landlord to an active steward of public land—one that collaborates with cities, integrates public benefit, and reflects democratic values through the built environment.&#13;
&#13;
In this moment of shifting federal needs, declining office demand, and urban transformation, the question is not whether federal real estate reform is needed—it is whether we will seize the opportunity. The fate of buildings like Forrestal will shape not only the skyline of Washington, D.C., but also the federal government’s legacy in America’s cities for generations to come.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyst Incentives</title>
<link href="https://hdl.handle.net/1721.1/164556" rel="alternate"/>
<author>
<name>Green, Brice</name>
</author>
<id>https://hdl.handle.net/1721.1/164556</id>
<updated>2026-01-21T04:08:00Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Analyst Incentives
Green, Brice
Analyst forecasts have been shown to reflect substantial behavioral biases and predict a number of macroeconomic phenomena. While we typically treat reported forecasts as statistical expectations, under uncertainty the reported point estimate will be sensitive to the payoff structure facing the forecaster. Using data on careers from LinkedIn, I describe the incentive structures faced by analysts, shedding light on the extent to which pay and career success are tied to performance. Further, I extend a causal estimator to identify credible counterfactual forecasts and provide tentative causal evidence of the relationship between forecast errors and promotions.
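To make the sensitivity to payoff structures concrete (my illustration of the general point, not the estimator used in the thesis): under a piecewise-linear loss that penalizes over- and under-shooting asymmetrically, the expected-loss-minimizing report is a quantile of the forecaster's belief distribution rather than its mean, so reported forecasts shift with incentives even when beliefs are fixed:

import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a forecaster's subjective belief distribution (assumed, skewed).
beliefs = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

def optimal_report(samples, penalty_over, penalty_under):
    # Under piecewise-linear loss, the best report is the
    # penalty_under / (penalty_over + penalty_under) quantile of beliefs.
    tau = penalty_under / (penalty_over + penalty_under)
    return np.quantile(samples, tau)

print("mean belief:         ", beliefs.mean())
print("symmetric penalties: ", optimal_report(beliefs, 1.0, 1.0))  # the median
print("costly to overshoot: ", optimal_report(beliefs, 3.0, 1.0))  # a lower quantile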
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Maverick Neuroscientist: What Does a Life in Science Look Like Outside the Ivory Tower? : Dr. Eugenio Vargas-Peña is a renowned psychiatrist in Paraguay, who conducts neuroscience research without university ties, funding, or peer review. Is his embodiment of the gentleman scientist an alternate path for those who want to break away from institutionalized science?</title>
<link href="https://hdl.handle.net/1721.1/164555" rel="alternate"/>
<author>
<name>Chomik-Morales, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/164555</id>
<updated>2026-01-21T04:07:57Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Maverick Neuroscientist: What Does a Life in Science Look Like Outside the Ivory Tower? : Dr. Eugenio Vargas-Pena is a renowned psychiatrist in Paraguay, who conducts neuroscience research without university ties, funding, or peer review. ls his embodiment of the gentleman scientist an alternate path for those who want to break away from institutionalized science?
Chomik-Morales, Jessica
This long-term narrative investigates the life and work of Dr. Eugenio Vargas-Peña, a neuropsychiatrist in Asunción, Paraguay who built a fully functional lab in his countryside home. Vargas-Peña conducts brain research independently, guided by decades of self-study, clinical practice, and an unwavering belief in the value of curiosity-driven inquiry. The piece interweaves historical context, character study, and personal narrative, using the author's own background in neuroscience and science communication to frame an inquiry into legitimacy, recognition, and alternative pathways in science. It asks: What defines a scientist today? Who gets to decide which ideas are taken seriously? And what are the consequences, creative or catastrophic, of working outside institutional boundaries? Through the lens of one man's eccentric yet earnest intellectual journey, this thesis invites broader reflection on the pressures shaping contemporary research and the enduring romance of unorthodox scholarship.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating RTL Simulation Through Fine-grained Task Dataflow and Selective Execution</title>
<link href="https://hdl.handle.net/1721.1/164508" rel="alternate"/>
<author>
<name>Elsabbagh, Fares</name>
</author>
<id>https://hdl.handle.net/1721.1/164508</id>
<updated>2026-01-13T04:08:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Accelerating RTL Simulation Through Fine-grained Task Dataflow and Selective Execution
Elsabbagh, Fares
Fast simulation of digital circuits is crucial to build modern chips. Current processors and SoCs integrate hundreds of complex components, including cores, accelerators, and memory hierarchies. Simulating these systems is necessary to verify correctness and explore the design space. Simulation can happen at different levels of abstraction. In this work we focus on Register-Transfer-Level (RTL) simulation. While RTL simulators are frequently used in development due to their quick compilation times, their runtime performance is slow. This is because, as designs are scaled up, multicore communication and scheduling overheads limit performance and scalability.&#13;
&#13;
We present ASH, a parallel architecture tailored to RTL simulation workloads. ASH consists of a tightly codesigned hardware architecture and compiler for RTL simulation. ASH exploits two key opportunities. First, it performs dataflow execution of small tasks to leverage the fine-grained parallelism in simulation workloads. Second, it performs selective event-driven execution to run only the fraction of the design exercised each cycle, skipping ineffectual tasks. ASH hardware provides a novel combination of dataflow and speculative execution, and ASH’s compiler features several novel techniques to automatically leverage this hardware.&#13;
&#13;
We evaluate ASH in simulation using large Verilog designs that represent different types of architectures. With 256 simple cores, ASH is gmean 1,485× faster than 1-core Verilator, and it is 32× faster than Verilator on a server CPU with 32 complex cores while using 3× less area.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Task Scheduling Techniques to Accelerate RTL Simulation</title>
<link href="https://hdl.handle.net/1721.1/164507" rel="alternate"/>
<author>
<name>Sheikhha, Shabnam</name>
</author>
<id>https://hdl.handle.net/1721.1/164507</id>
<updated>2026-01-13T04:08:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Task Scheduling Techniques to Accelerate RTL Simulation
Sheikhha, Shabnam
Fast simulation of digital circuits is crucial to build modern chips. Slow simulation lengthens chip design time and makes bugs more frequent. While simulation can happen at different levels of abstraction, Register-Transfer-Level (RTL) simulation is the usual bottleneck in chip design, as it is needed for ongoing debugging and evaluation. Current simulators scale poorly across CPU cores, because they are unable to exploit the fine-grained parallelism inherent in simulation workloads.&#13;
&#13;
We present ASH, a parallel architecture tailored to simulation workloads. ASH consists of a tightly codesigned hardware architecture and compiler for RTL simulation. ASH exploits two key opportunities. First, it performs dataflow execution of small tasks to leverage the fine-grained parallelism in simulation workloads. Dataflow execution exposes abundant parallelism, as each task can run as soon as its inputs are available. Second, it performs selective event-driven execution to run only the fraction of the design exercised each cycle, skipping ineffectual tasks. Selective execution introduces dynamic data dependences since skipped tasks do not communicate data. ASH employs speculative execution to handle these dependences. ASH’s hardware provides a novel combination of dataflow and speculative execution, and ASH’s compiler features several novel techniques to automatically leverage this hardware. The key compiler techniques include a novel partitioning for minimizing data communication while maintaining load balance, and a strategic coarsening mechanism to reduce the overheads of fine-grained tasks.&#13;
&#13;
We evaluate ASH in simulation using large Verilog designs. With 256 simple cores, ASH is gmean 1,485× faster than 1-core Verilator, and it is 32× faster than Verilator on a server CPU with 32 complex cores while using 3× less area.
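As a toy illustration of the selective, event-driven execution described above (a generic software sketch of the technique, not ASH's hardware, speculation, or compiler), a simulator can keep a worklist of tasks whose inputs changed and skip everything else each cycle:

from collections import deque

# Toy event-driven evaluation: each task recomputes one wire, and only tasks
# whose inputs changed are scheduled; unchanged outputs wake no one.
# A task id maps to (output wire, function, input wires); names are invented.
tasks = {
    "and1": ("w3", lambda a, b: a and b, ("w1", "w2")),
    "not1": ("w4", lambda a: 1 - a, ("w3",)),
}
fanout = {"w1": ["and1"], "w2": ["and1"], "w3": ["not1"]}  # wire to its readers
values = {"w1": 0, "w2": 1, "w3": 0, "w4": 1}

def propagate(changed_wires):
    worklist = deque(t for w in changed_wires for t in fanout.get(w, []))
    ran = 0
    while worklist:
        out, fn, ins = tasks[worklist.popleft()]
        new = fn(*(values[w] for w in ins))
        ran += 1
        if new != values[out]:                 # only wake readers on a real change
            values[out] = new
            worklist.extend(fanout.get(out, []))
    return ran

values["w1"] = 1                               # stimulus for this cycle
print("tasks run:", propagate(["w1"]), "w4 =", values["w4"])  # tiny circuit runs all tasks; large designs skip most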
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scheduling Strategies for Bus Operator Retention: A Mixed-Methods Evaluation of Bus Operator Preferences and 4-Day Workweek Feasibility</title>
<link href="https://hdl.handle.net/1721.1/164506" rel="alternate"/>
<author>
<name>Baum, Amelia Rose</name>
</author>
<id>https://hdl.handle.net/1721.1/164506</id>
<updated>2026-01-13T04:08:29Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Scheduling Strategies for Bus Operator Retention: A Mixed-Methods Evaluation of Bus Operator Preferences and 4-Day Workweek Feasibility
Baum, Amelia Rose
Public transit agencies face significant and growing challenges related to workforce shortages, absenteeism, and employee retention, which threaten service reliability. Reports found that 90% of U.S. transit agencies are experiencing a workforce shortage, with 84% claiming that the shortage affects their ability to provide scheduled service. Industry-wide, operator absence is a significant contributor to missed work and has, in many cases, delayed the full reinstatement of service at transit agencies following the COVID-19 pandemic. The quality of bus operators' work is significantly impacted by inflexible crew scheduling constraints. However, most studies focus on pay, benefits, and infrastructure, neglecting the importance of scheduling. This thesis aims to fill this gap by examining the potential for crew scheduling improvements to enhance the quality of life for bus operators through a three-part case study at the Chicago Transit Authority. Part 1 analyzes the historical work preferences of CTA bus operators, providing actionable insights for scheduling improvements. Part 2 presents a high-fidelity proof of concept in HASTUS, using block schedules (10-hour-a-day runs that are intended to be run by an operator 4 days a week) and rostering to reduce negative work traits and increase consecutive and weekend days off for most operators, while maintaining schedules for the top 20% most senior operators. Part 3 evaluates the new 10-hour, 4-day-per-week packaged schedules via an LLM-based paired-alternatives survey of operators at one CTA garage, measuring the desirability of the proof of concept and collecting qualitative feedback. Overall, the new schedules substantially improve the quality of work for operators by guaranteeing at least one weekend day off and at least two consecutive days off, and by increasing day-to-day schedule consistency and overnight rest time, while maintaining constant vehicle requirements and total pay hours. The survey results show that 72% of operators at the 74th Street garage support the new schedule paradigm, demonstrating strong support for potential adoption and encouraging future exploration of a block schedule hybrid rostering paradigm at the CTA and other transit agencies.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sampling Methods for Fast and Versatile GNN Training</title>
<link href="https://hdl.handle.net/1721.1/164495" rel="alternate"/>
<author>
<name>Alkhatib, Obada</name>
</author>
<id>https://hdl.handle.net/1721.1/164495</id>
<updated>2026-01-13T04:08:28Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Sampling Methods for Fast and Versatile GNN Training
Alkhatib, Obada
Graph neural networks (GNNs) have become a commonly used class of machine learning models that achieve state-of-the-art performance in various applications. A prevalent and effective approach for applying GNNs on large datasets involves mini-batch training with sampled neighborhoods. Numerous sampling algorithms have emerged, some tailored for specific GNN applications. In this thesis, I explore ways to improve the efficiency and expressivity of existing and emerging sampling schemes. &#13;
&#13;
First, I explore system solutions to facilitate the development of fast implementations of different sampling methods. I introduce FlexSample, a system for efficiently incorporating custom sampling algorithms into GNN training. FlexSample leverages the types of performance optimizations found in SALIENT, a state-of-the-art system for fast training of GNNs with node-wise sampling. In experiments with 4 GNN models which use layer-wise and subgraph sampling, FlexSample achieves up to 1.3× speed-up for end-to-end training over PyTorch Geometric with the same sampling code. Furthermore, FlexSample extends SALIENT with highly-optimized C++ implementations of FastGCN and LADIES layer-wise sampling, which achieve 2×–5× speed-up over their respective Python implementations.&#13;
&#13;
Second, I introduce a novel framework for learning neighbor sampling distributions as part of GNN training. Key components of this framework, which I name PertinenceSample, are: (i) a differentiable approximation of node-wise sampling for GNNs; and (ii) a parametrization of node sampling distributions as node- or edge-wise weights of attention-like GNN layers. I present an initial exploration of the potential of PertinenceSample for improving node classification accuracy in the presence of noisy edges. Specifically, in two synthetic experiments where roughly half of a node’s neighbors may have similar features but different labels, I demonstrate that extending a GraphSAGE model with a 2-layer perceptron for learning the PertinenceSample weights can improve classification accuracy from 50%–75% to (nearly) 100%.
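One standard way to make discrete neighbor sampling differentiable, sketched here as an illustration of the general technique (PertinenceSample's actual construction in the thesis may differ), is to relax the categorical neighbor choice with a Gumbel-softmax so that per-edge sampling weights receive gradients:

import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_neighbors, feat_dim = 8, 4
neighbor_feats = torch.randn(num_neighbors, feat_dim)            # toy neighbor features
edge_logits = torch.zeros(num_neighbors, requires_grad=True)     # learnable sampling weights

def soft_sample_aggregate(tau=0.5):
    # Soft one-hot over neighbors; approaches a hard sample as tau goes to 0.
    probs = F.gumbel_softmax(edge_logits, tau=tau, hard=False)
    return probs @ neighbor_feats                                # differentiable "sampled" aggregation

target = torch.ones(feat_dim)                                    # stand-in training signal
loss = F.mse_loss(soft_sample_aggregate(), target)
loss.backward()
print(edge_logits.grad)                                          # gradients reach the sampling distribution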
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Graph Neural Network Training on Large Graphs in A Distributed Setting</title>
<link href="https://hdl.handle.net/1721.1/164487" rel="alternate"/>
<author>
<name>Murzynowski, Philip</name>
</author>
<id>https://hdl.handle.net/1721.1/164487</id>
<updated>2026-01-13T04:08:27Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Optimizing Graph Neural Network Training on Large Graphs in A Distributed Setting
Murzynowski, Philip
Graph neural networks (GNNs) are an important class of methods for leveraging the information present in graph structures to perform various learning tasks. Distributed GNNs can improve the performance of GNN execution by dividing computation among multiple machines and scale to large graphs by partitioning graph features and the graph structure. Although distributed GNNs are able to achieve self-relative speedup, they are often slower than well-optimized code running on a single machine. For example, evaluation of the prevalent Distributed DGL system on graphs in the Open Graph Benchmark shows Distributed DGL can achieve speedup of over 2× when moving from one to four nodes, but execution of Distributed DGL on 4 nodes is 2× slower than a well-optimized GNN system, such as the SALIENT system, on a single machine.&#13;
&#13;
In my thesis, I argue that it is possible for a distributed GNN system to be both fast and scalable. Specifically, I show that it is possible to match the performance of well-optimized, non-distributed codes for GNN training and also achieve good scalability when running in the distributed setting. I present a system called Distributed SALIENT and motivate its design through profiling and identifying bottlenecks that arise in the distributed setting. Key components of Distributed SALIENT include the use of well-optimized code for local computations, pipelining of inter-machine communication, and a careful trade-off between data partitioning and partial replication.&#13;
&#13;
I evaluate Distributed SALIENT on the Open Graph Benchmark (OGB) and show that Distributed SALIENT achieves good speedup compared to SALIENT’s well-optimized single-node code while using replication factors of only roughly 5%. In fact, in experiments training a 3-layer GraphSAGE model on the large OGB papers100M data set, Distributed SALIENT on 8 nodes is 8.6× faster than SALIENT on 1 node.
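The communication pipelining mentioned above can be illustrated generically (a sketch of the idea, not Distributed SALIENT's implementation): while the current mini-batch trains, a background worker fetches the next one, hiding inter-machine transfer latency behind computation:

import time
from concurrent.futures import ThreadPoolExecutor

def fetch_batch(i):
    time.sleep(0.1)      # stand-in for gathering remote features over the network
    return f"batch{i}"

def train_step(batch):
    time.sleep(0.1)      # stand-in for local, well-optimized training compute
    return batch

num_batches = 5
with ThreadPoolExecutor(max_workers=1) as pool:
    start = time.time()
    pending = pool.submit(fetch_batch, 0)
    for i in range(num_batches):
        batch = pending.result()
        if i + 1 != num_batches:
            pending = pool.submit(fetch_batch, i + 1)   # overlap next fetch with this step
        train_step(batch)
    print(f"pipelined: {time.time() - start:.2f}s vs about 1.0s fully serial")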
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When Your Home Becomes a Panda Park: The Opportunity and Upheaval of China's Giant Panda National Park</title>
<link href="https://hdl.handle.net/1721.1/164480" rel="alternate"/>
<author>
<name>Zhao, Celina</name>
</author>
<id>https://hdl.handle.net/1721.1/164480</id>
<updated>2026-01-13T04:08:22Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">When Your Home Becomes a Panda Park: The Opportunity and Upheaval of China's Giant Panda National Park
Zhao, Celina
In December 2016, China launched the Giant Panda National Park (GPNP). A massive ecological initiative aimed at safeguarding its beloved national symbol and international icon of conservation, the park marked an unequivocal win for giant pandas. But for the 100,000 people already living in and around its borders, the outcome was not as clear.&#13;
The GPNP seeks to establish a harmonious balance between biodiversity protection and human development. But the vast amount of land covered by the park means not all places are equally primed to achieve that goal. A handful of communities have been designated as exclusive entrance communities, with lavish funding to become the face of the national park. In others, a persistent question simmers: Are pandas more important than people? &#13;
Central to this story is how individuals are adapting to and reimagining their futures. Rather than a binary of winners and losers, the GPNP has sparked a wide range of human responses, showing that the path to a sustainable future between people and pandas is far from black and white.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reviewing I.S. : how to handle legacy systems?</title>
<link href="https://hdl.handle.net/1721.1/164457" rel="alternate"/>
<author>
<name>Orlando, Ricardo,
            1966-</name>
</author>
<id>https://hdl.handle.net/1721.1/164457</id>
<updated>2026-01-07T03:23:47Z</updated>
<published>1999-01-01T00:00:00Z</published>
<summary type="text">Reviewing I.S. : how to handle legacy systems?
Orlando, Ricardo,
            1966-
Thesis: S.M.M.O.T., Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, 1999; Includes bibliographical references (leaves 100-106).
</summary>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effects of changing economic conditions on energy costs in stainless-steel clad pressurized water reactors</title>
<link href="https://hdl.handle.net/1721.1/164456" rel="alternate"/>
<author>
<name>Trapp, Donald L.</name>
</author>
<id>https://hdl.handle.net/1721.1/164456</id>
<updated>2026-01-07T03:23:33Z</updated>
<published>1962-01-01T00:00:00Z</published>
<summary type="text">The effects of changing economic conditions on energy costs in stainless-steel clad pressurized water reactors
Trapp, Donald L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1962; Appendix contains numerous pamphlets.; Includes bibliographical references (leaves 135-136).
</summary>
<dc:date>1962-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The design of a control system for the terminal phase of a satellite rendezvous</title>
<link href="https://hdl.handle.net/1721.1/164454" rel="alternate"/>
<author>
<name>Hollister, Walter M.,
            1930-</name>
</author>
<id>https://hdl.handle.net/1721.1/164454</id>
<updated>2026-01-07T03:23:50Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">The design of a control system for the terminal phase of a satellite rendezvous
Hollister, Walter M.,
            1930-
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1959; Includes bibliographical references (leaf 47).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Noise analysis of circuit models representing maser operation.</title>
<link href="https://hdl.handle.net/1721.1/164451" rel="alternate"/>
<author>
<name>Hempstead, Robert Douglas.</name>
</author>
<id>https://hdl.handle.net/1721.1/164451</id>
<updated>2026-01-07T03:23:54Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">Noise analysis of circuit models representing maser operation.
Hempstead, Robert Douglas.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1965; Bibliography: leaves 106-108.
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving railroad terminal control systems : a case study of Southern Railway's Brosnan yard</title>
<link href="https://hdl.handle.net/1721.1/164448" rel="alternate"/>
<author>
<name>Ferguson, William Lloyd.</name>
</author>
<id>https://hdl.handle.net/1721.1/164448</id>
<updated>2026-01-07T03:23:43Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Improving railroad terminal control systems : a case study of Southern Railway's Brosnan yard
Ferguson, William Lloyd.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1979; Bibliography: leaves 194-195.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Atmospheric Impacts of Hydrogen as an Aviation Fuel</title>
<link href="https://hdl.handle.net/1721.1/164348" rel="alternate"/>
<author>
<name>Gibney, Evan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/164348</id>
<updated>2025-12-17T03:06:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Atmospheric Impacts of Hydrogen as an Aviation Fuel
Gibney, Evan M.
Hydrogen is being investigated as a promising zero-carbon aviation fuel, offering the potential to eliminate direct CO₂ emissions while being produced with low lifecycle greenhouse gas emissions. Despite these benefits, there are additional indirect climate and air quality costs associated with direct hydrogen emissions which are often overlooked. We quantify the perturbation in atmospheric composition associated with the introduction of hydrogen-fueled aircraft, broadening the current understanding of the non-CO₂ effects of these fleets. We use the GEOS-Chem High Performance (GCHP) global chemistry-transport model to conduct a spatially discretized, multi-year impact assessment of the atmospheric impacts of hydrogen-fueled aviation. We implement a flux surface boundary condition for hydrogen to provide an improved representation of the soil sink, relative to the default fixed boundary condition. This results in a net surface exchange of -16.7 Tg H₂ per year. Two hydrogen scenarios are evaluated using the updated GCHP implementation, representative of high and low mitigation scenarios for direct hydrogen emission rates. For the two scenarios, respectively, we observe increases in the mean atmospheric methane mixing ratio of 3.34 ppbv and 10.7 ppbv, corresponding to increases in methane lifetime of 0.24% and 0.77%. The increased methane lifetime, as well as in-situ oxidation of stratospheric hydrogen, results in an increased stratospheric water vapor burden of 0.42 Tg and 2.3 Tg (or 0.052% and 0.28%) for the high and low mitigation scenarios, respectively. Additionally, we show the perturbation to tropospheric ozone levels to be between -0.047% and +0.30%, where the decreased ozone results from the removal of NOₓ emissions associated with fuel cells and low hydrogen emission rates. This analysis provides the foundation for understanding the implications of potential future hydrogen-based aviation fleets on climate and air quality.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Changing Climate Beneath Our Feet: How plant and microbial life in tropical soils are shifting and what that could mean for the future of our warming planet</title>
<link href="https://hdl.handle.net/1721.1/164347" rel="alternate"/>
<author>
<name>Ocharoenchai, Nanticha</name>
</author>
<id>https://hdl.handle.net/1721.1/164347</id>
<updated>2025-12-17T03:06:34Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">A Changing Climate Beneath Our Feet: How plant and microbial life in tropical soils are shifting and what that could mean for the future of our warming planet
Ocharoenchai, Nanticha
Discussions about climate change and carbon sequestration have largely revolved around plant structures we can easily see, like leaves that absorb CO₂ for photosynthesis and woody trunks that store carbon as biomass. Carbon credits that companies and consumers buy to compensate for emissions they’ve produced are primarily calculated based on these parts, as are models that predict climate change impacts. But researchers are now beginning to understand that what we see aboveground is only part of the equation. The other part lies beneath our feet in an intricate, expansive, covert realm where plant roots, microbial communities and soil dynamics interact. These belowground systems are crucial for cycling carbon through the Earth and regulating the climate, but relatively little is known about them compared to aboveground systems. This is especially true in tropical regions, where one-third of the world’s terrestrial carbon storage lies. However, these systems are evolving quickly with climate change, contradicting what models have previously projected. With so many global decisions based on such models, these uncertainties hold planetary significance for our future. A group of scientists is fighting an uphill battle, racing against time to understand this understudied field.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineering Winter</title>
<link href="https://hdl.handle.net/1721.1/164346" rel="alternate"/>
<author>
<name>White, Mackenzie</name>
</author>
<id>https://hdl.handle.net/1721.1/164346</id>
<updated>2025-12-17T03:06:43Z</updated>
<published>2025-09-01T00:00:00Z</published>
<summary type="text">Engineering Winter
White, Mackenzie
As winters warm and snowfall becomes less reliable, ski resorts worldwide increasingly depend on artificial snow to stay open. Snowmaking, once a stopgap, has become the backbone of entire seasons in a sprawling choreography of pumps and pressurized mist designed to hold trails together. At resorts like Vermont’s Bromley Mountain, snowmakers work through the night, drawing millions of gallons from limited reservoirs and operating within narrowing windows of cold air. What emerges is a portrait of winter in transition: less predictable, more expensive, increasingly manufactured. The efforts to preserve winter recreation carry growing costs in energy, water, and equitable access. Many smaller, independent ski areas struggle to meet the demands of climate adaptation, while larger resorts expand their operations, widening the divide in who can afford to sustain operations. In the American West, where rivers depend heavily on snowpack melt, the spread of snowmaking ties winter recreation to a water system already under immense strain. As artificial snow becomes the norm, winter is increasingly a season bought, built, and rationed, raising the question of whether attempts to keep the season alive are accelerating the changes that threaten to erase it.
</summary>
<dc:date>2025-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>IP Networks Over Heterogeneous Embedded Serial Links</title>
<link href="https://hdl.handle.net/1721.1/164271" rel="alternate"/>
<author>
<name>Perry, Nathan</name>
</author>
<id>https://hdl.handle.net/1721.1/164271</id>
<updated>2025-12-11T03:08:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">IP Networks Over Heterogeneous Embedded Serial Links
Perry, Nathan
The Internet Protocol (IP) provides a number of key benefits to networked devices: it serves as a "narrow waist" enabling functional modularity by decoupling lower-layer devices from application behavior, it provides a notion of transitive connectivity and a number of standardized methods to achieve it, and most importantly, it is ubiquitous, enabling almost all networked applications to mutually communicate.&#13;
&#13;
Many embedded microcontrollers cannot take advantage of the benefits of IP because they lack the dedicated networking hardware that is, as a practical matter, required to interact with nontrivial networks. I observe that multihop point-to-point IP networks can in principle be constructed over the communication media that microcontrollers commonly do have, such as UARTs, I2C, SPI, and CAN bus, but software support is lacking to make this networking approach accessible.&#13;
&#13;
Therefore, this thesis develops and evaluates interstice, a platform-independent, open-source software library designed to enable the flexible implementation of modular packet forwarders in userspace. It can be used to interconnect devices and their IP stacks across a variety of conventional and unconventional links. Interstice exposes a reprogrammable, dynamically-updatable packet-forwarding strategy, enabling forwarder nodes in principle to act as hubs, bridges, full routers, or implement firewalls or NAT, as application requirements and platform constraints permit.&#13;
&#13;
This approach enables benefits for modular, networked systems of microcontrollers which need to talk to the outside world: using IP enables internal microcontrollers to communicate with external devices such as PCs and smartphones without the need for application gateways. Further, to the extent that such networks are runtime-reconfigurable, features of IP such as address assignment, dynamic routing, and link-agnosticity can be incredibly beneficial.&#13;
&#13;
Interstice is evaluated here primarily against networks of various types of serial links (UART, I2C, CAN) speaking PPP, selected to demonstrate the utility of the approach for connecting embedded devices lacking dedicated networking peripherals, and to show that link drivers can be specialized to take advantage of the specific characteristics of each link. The approach is showcased in application scenarios including a networked milling machine, and is analyzed for a number of performance metrics.
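To make the idea of a modular userspace packet forwarder concrete (a generic sketch under invented types and names, not interstice's actual API), a forwarder can be little more than a set of links plus a swappable strategy mapping each arriving packet to output links:

from dataclasses import dataclass

@dataclass
class Packet:
    dst: str            # destination address, e.g. dotted IPv4 in string form
    payload: bytes

class Forwarder:
    # Generic userspace forwarder; the strategy is swappable at runtime.
    def __init__(self, links, strategy):
        self.links = links        # link name to a callable that transmits on it
        self.strategy = strategy  # (packet, candidate link names) to output links

    def on_receive(self, packet, in_link):
        candidates = [n for n in self.links if n != in_link]
        for out in self.strategy(packet, candidates):
            self.links[out](packet)

def hub(packet, candidates):
    return candidates             # flood every other link, hub-style

def make_router(table):
    # Toy exact-prefix router over /24-style keys; illustrative only.
    def route(packet, candidates):
        out = table.get(packet.dst.rsplit(".", 1)[0])
        return [out] if out in candidates else []
    return route

sent = []
fwd = Forwarder({"uart0": sent.append, "can0": sent.append},
                make_router({"10.0.0": "uart0"}))
fwd.on_receive(Packet("10.0.0.7", b"hello"), "can0")
print(sent)                       # the packet went out uart0 only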
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BioLIG: Designing Biologically Derived Electronics and Their Speculative Lives</title>
<link href="https://hdl.handle.net/1721.1/164270" rel="alternate"/>
<author>
<name>Li, Yuqing Lucy</name>
</author>
<id>https://hdl.handle.net/1721.1/164270</id>
<updated>2025-12-11T03:08:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">BioLIG: Designing Biologically Derived Electronics and Their Speculative Lives
Li, Yuqing Lucy
Imagination is the origin of reality. Cultivating new infrastructural and ecological imaginaries is crucial to addressing the climate crisis. Where is the space to prototype new social and technological relations? Transient electronics is an emerging field in advanced materials focused on making electronics that don’t last. Devices are designed to be transient for biomedical, environmental monitoring, or energy storage applications. It is a fascinating and unconventional direction that advances the area of biocompatibility, redefining waste and time-programmable decay {Making electronics that, 2022}. However, in a manufacturing system that fundamentally favors the inert and invariant, transient properties can be precisely the qualities that make adaptation most challenging, often failing at the very stage of imagination. Taking inspiration from transient electronics, this thesis consists of a set of novel biomaterials, a workflow, and three fictional stories to enrich our imagination and instill agency amidst entangled humanitarian, ecological, and technological crises. BioLIG is a material for prototyping accessible and compostable electronics. It uses laser-induced graphene as an organic, bio-derived conductor and affordable biomaterials as the substrate. Three sheets and two inks make up a toolkit to create biocomposites with different properties, colors, and textures specifically designed for prototyping sensors and circuits with transient behaviours. Through a series of characterisations, BioLIG is evaluated and demonstrates that with one material, its electrical performance is on par with synthetic substrates. However, the goal is not to create a replacement material but to prototype new social and technological relations to transient materials. Through a questionnaire, I collected stories, ideas, and questions from makers, designers, and artists for BioLIG and used those as the basis for imagination. In a speculative house, on three floors, three stories unfold of a hoarder, a city forester, and a family living in a time with a leap in our relationship to fabrication, to electronics, and to decay.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Biosecurity in the Age of AI: Integrating Novel Detection, Suppression, and Evaluation Approaches</title>
<link href="https://hdl.handle.net/1721.1/164266" rel="alternate"/>
<author>
<name>Justen, Lennart J.</name>
</author>
<id>https://hdl.handle.net/1721.1/164266</id>
<updated>2025-12-11T03:08:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Advancing Biosecurity in the Age of AI: Integrating Novel Detection, Suppression, and Evaluation Approaches
Justen, Lennart J.
Civilization confronts a growing challenge: advancing transformative biological science while safeguarding against catastrophic misuse, a tension amplified by the rapid convergence between biology and artificial intelligence. The COVID-19 pandemic starkly revealed our vulnerabilities to self-replicating, exponential biological phenomena, yet current defenses remain dangerously inadequate—often blind to novel pathogens until too late and lacking barriers against rapid airborne transmission. This thesis argues that robust biosecurity enables, rather than hinders, progress, and advances three key defensive capabilities. First, it evaluates blood metagenomics for pathogen-agnostic surveillance, reanalyzing public datasets to quantify viral signatures and guide the implementation of much-needed early-warning systems sensitive to novel pathogens. Second, it advances far-UVC, a band of ultraviolet light between 200 and 235 nm, for continuous indoor air disinfection, critically assessing its safety profile through an international expert review and establishing research priorities essential for deploying this vital physical defense against airborne threats. Third, it develops rigorous methodologies for evaluating AI's rapidly evolving biological capabilities, benchmarking frontier models across diverse tasks to track progress, reveal limitations in current assessments, and guide responsible innovation in this powerful dual-use technology. Collectively, these contributions help accelerate technologies to mitigate biological risks, thereby helping secure the conditions for continued, beneficial advancement of biology in the age of AI.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Dialogue to Decision: An LLM-Powered Framework for Analyzing Collective Idea Evolution and Voting Dynamics in Deliberative Assemblies</title>
<link href="https://hdl.handle.net/1721.1/164265" rel="alternate"/>
<author>
<name>Poole-Dayan, Elinor</name>
</author>
<id>https://hdl.handle.net/1721.1/164265</id>
<updated>2025-12-11T03:08:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Dialogue to Decision: An LLM-Powered Framework for Analyzing Collective Idea Evolution and Voting Dynamics in Deliberative Assemblies
Poole-Dayan, Elinor
Deliberative assemblies—representative samples of citizens engaged in collective decision-making through facilitated learning and deliberation—are increasingly recognized as powerful tools for revitalizing democratic governance. Yet, core aspects of how deliberation shapes which ideas advance, how perspectives evolve, and why certain recommendations succeed remain opaque and underexamined. This thesis addresses these gaps by investigating: (1) How might we trace the evolution and distillation of ideas into concrete recommendations within deliberative assemblies? and (2) How does the deliberative process shape delegate perspectives and influence voting dynamics over the course of the assembly?&#13;
&#13;
To answer these questions, I develop LLM-based methodologies for empirically analyzing transcripts from a tech-enhanced student deliberative assembly. The first framework identifies and visualizes the space of expressed suggestions, revealing that seemingly large gaps between ideas and final recommendations often reflect productive deliberative filtering—while also surfacing overlooked viable ideas.&#13;
A second analysis integrates post-assembly survey data with transcript-grounded voting patterns to uncover the primary drivers of vote change: edits to recommendations, evolving opinions, and strategic shifts in response to updated priorities. Building on this, I introduce a framework for reconstructing each delegate’s evolving stance across the assembly, linking shifts in perspective to specific deliberative moments and justifications.&#13;
&#13;
Together, these methods contribute novel empirical insight into deliberative processes and demonstrate how LLMs can surface high-resolution dynamics otherwise invisible in traditional assembly outputs. The findings lay groundwork for new tools that support facilitators and delegates during live assemblies, improve transparency for decision-makers, and elevate ideas that may otherwise be missed.&#13;
&#13;
Looking ahead, this work opens pathways for comparative research across assemblies and highlights the potential for human-centered AI to meaningfully enhance deliberative democratic practice. As societies seek new modes of participatory governance amid growing polarization and institutional mistrust, tools that strengthen deliberation without compromising its core human character are urgently needed.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Delibrary: From Discussion to Outcomes and Back(casted) Again, a Visualization Tool for Deliberative Assemblies</title>
<link href="https://hdl.handle.net/1721.1/164262" rel="alternate"/>
<author>
<name>Wong, Wing Cheung Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/164262</id>
<updated>2025-12-11T03:08:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Delibrary: From Discussion to Outcomes and Back(casted) Again, a Visualization Tool for Deliberative Assemblies
Wong, Wing Cheung Michael
With trust in traditional democratic institutions waning, it is increasingly important to examine how potential new institutions could be created and bolstered, with particular emphasis on restoring trust and empowering the public. One potential solution, the citizens' or deliberative assembly, can serve to bridge the governance and legitimacy gap between real-world policy decision-making processes and citizen-driven impact by leveraging random sortition and a well-designed deliberation process. In this thesis, I explore how AI-driven sensemaking via GPT4o-mini--a Large Language Model (LLM)--synthesized with custom-built visualization tools, can potentially reveal the dynamics within citizen deliberative assemblies where representative, randomly selected citizens navigate public interest issues through facilitated deliberation--and how such tools can serve to amplify transparency within both the assembly process itself and the issues they explore. Through building three different prototype visualization frameworks and the development of an AI-powered topic identification process called backcasting, I analyze novel datasets from two tech-enhanced assemblies: fully recorded discussions from both an on-the-ground citizens' assembly in Deschutes County, Oregon, and an MIT student assembly on sustainability. In backcasting, assembly outcomes are linked to transcriptions of assembly discussions via LLM tagging, uncovering what, when, who, and where participants deliberate about topics that eventually become proposals, recommendations, or outcomes. Furthermore, I analyze the sentiment with which an assembly delegate presented their view on a certain recommendation (agreement, disagreement, etc.) in addition to the supporting reasoning patterns this delegate used to express their view, if any (e.g. whether they draw from personal experience, reference outside expertise, etc.). To evaluate the final prototype tool, I interview subject matter and assembly experts, assembly organizers/facilitators, as well as assembly delegate members to assess the potential and drawbacks of this visualization tool and AI sensemaking backbone. Positive feedback from these user studies includes the clear potential for research, narrative building, and facilitation improvement, in addition to greater perceived transparency into the workings of an assembly process. Further work is still needed, however, to address significant lingering issues, such as adjusting presentation to better serve specific use cases and to reduce complexity and confusion, the most referenced drawback of Delibrary. Overall, my thesis aims to build transparent insights into the human-led structures of assemblies, enabling relevant stakeholders--from delegates and policy makers to the general public--to achieve a better understanding of the assembly process and engender a perception of legitimacy by illustrating that delegates drawn from all walks of life do have a meaningful voice in an impactful process. By helping to promote this understanding and perception of legitimacy of an effective and respectful deliberation process, I strive to ultimately help scaffold healthier democratic decision-making.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interactive Storybooks for Early AI Literacy</title>
<link href="https://hdl.handle.net/1721.1/164170" rel="alternate"/>
<author>
<name>Pu, Isabella</name>
</author>
<id>https://hdl.handle.net/1721.1/164170</id>
<updated>2025-12-04T03:09:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Interactive Storybooks for Early AI Literacy
Pu, Isabella
As artificial intelligence (AI) becomes increasingly present in children's everyday environments, there is an urgent need for developmentally appropriate tools that help young learners understand and shape these technologies. To be effective, these tools must not only successfully convey complex concepts but also engage children in ways that are meaningful, accessible, and fun.&#13;
&#13;
This thesis introduces the Interactive Storybooks for Early AI Literacy, a series of ten interactive storybooks for children ages 6–9 that combine narrative, mini-games, and scaffolded creative AI interactions to teach core AI and robotics concepts. The storybooks follow an overarching narrative featuring a friendly robot, Doodlebot, who must learn creative tasks with the child's help, framing the child as an AI designer and introducing them to the concept of training AI models through the narrative. The storybooks additionally contain interactive games and activities which help keep kids excited and engaged, while providing structured opportunities to experiment with and explore AI creation tools.&#13;
&#13;
First, a pilot study was conducted at a community summer camp with four Interactive Storybooks. Children expressed joy and pride in their AI creations, used the characters as emotional anchors for learning, and began to successfully articulate key AI concepts. Four engagement archetypes emerged: the Reader, the Gamer, the Showcaser, and the Social Connector, each representing a distinct way children interacted with the storybooks. However, despite behavioral signs of engagement, many children described the narrative portions as boring and claimed to prefer games.&#13;
&#13;
To explore this tension, a home deployment study compared two versions of the system: a "Books" condition with the full narrative and a "Games" condition with only instructional text. Both conditions included the same mini-games and AI interactions. While children in both groups reported similar levels of enjoyment, those in the Books condition showed significantly higher learning gains, greater increases in perceived knowledge and confidence, and stronger connections to the characters. Children in the Books condition also more frequently referenced the narrative when describing AI concepts and demonstrated more creative and iterative behavior during and after gameplay.&#13;
&#13;
Overall, these findings suggest that combining storytelling, gameplay, and creative AI interactions is an effective and engaging approach to teaching AI and robotics to young children. Narrative context appears to support concept recall, deepen emotional investment, and promote thoughtful experimentation, even with complex concepts for this age group, like AI and robotics. Based on insights from both studies, this thesis concludes with six design recommendations for creating developmentally appropriate, emotionally resonant AI education tools for early learners using narrative and play.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Volume Mount Devices</title>
<link href="https://hdl.handle.net/1721.1/164144" rel="alternate"/>
<author>
<name>Han, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/164144</id>
<updated>2025-12-04T03:09:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Volume Mount Devices
Han, Alan
As Moore's Law ends and AI demands increasingly tax our climate and resources, the limitations of two-dimensional electronics integration have become critical bottlenecks. Surface-mount devices (SMDs) remain entrenched in industry practice despite being insufficient for today's computing challenges and sustainability needs. This thesis introduces the volume mount device (VMD), a three-dimensional electronics packaging standard that bypasses the traditional die-to-server stack while offering a scalable, reversible framework inspired by natural ecosystems' circularity.&#13;
The VMD approach embeds both electrical function and mechanical structure into modular elements that assemble freely in 3D space. Rather than building circuits on planar PCBs, this system constructs functional circuits by linking components into a self-constraining lattice architecture. My current implementation leverages existing supply chains by incorporating SMD components on small tile PCBs, while establishing a pathway toward eventually replacing SMDs at the IC packaging level.&#13;
I developed a hybrid assembly system combining 3D printing and pick-and-place automation to build multi-layered electronic assemblies efficiently. Where prior work achieved only tens of parts at hundreds of components per hour (CPH), my system demonstrates automated assembly of hundreds of integrated elements at approximately 1000 CPH. I evaluate various geometric configurations, assess performance overhead compared to conventional approaches, and develop cost-effective, self-aligning connector interfaces for reliable joints—creating a foundation for electronics systems that can be assembled, disassembled, and reassembled as needed while improving resilience against supply chain disruptions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Controlling for the Ionospheric and Baseline-Offset Uncertainties in the CHIME/FRB Outriggers VLBI Network for Milliarcsecond Precision</title>
<link href="https://hdl.handle.net/1721.1/164137" rel="alternate"/>
<author>
<name>Willis, Jacob</name>
</author>
<id>https://hdl.handle.net/1721.1/164137</id>
<updated>2025-12-04T03:09:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Controlling for the Ionospheric and Baseline-Offset Uncertainties in the CHIME/FRB Outriggers VLBI Network for Milliarcsecond Precision
Willis, Jacob
Fast radio bursts (FRBs) are a novel form of radio transients discovered in 2007. These bright, extragalactic radio signals have an inferred all-sky rate of hundreds of detections per day. The properties of FRBs hold valuable clues about the extreme physical processes driving them while also holding information about the astrophysical plasmas they traverse on their journey to Earth. The Canadian Hydrogen Intensity Mapping Experiment (CHIME)/FRB project has led the field with the hundreds of FRB detections the collaboration has published to date. However, these detections typically have localization regions so large that we cannot identify a single host galaxy, never mind its local environment. To improve upon this, CHIME/FRB has been transformed into a very long baseline interferometry (VLBI) array, drastically increasing the angular resolution of CHIME/FRB from arcminute to sub-arcsecond precision.&#13;
&#13;
In this work, I present my contributions to commissioning the CHIME/FRB VLBI Outrigger station located at the Green Bank Observatory (GBO) in West Virginia. This includes measuring and validating GBO's exact position to enable the localization of FRBs to sub-arcsecond precision.&#13;
&#13;
For VLBI networks spanning thousands of kilometers, the difference in the local ionospheric environments is significant and leads to errors in the CHIME/FRB Outrigger localizations. I present a thin shell model of the ionosphere to parameterize the local ionospheric environment for each VLBI station. This model may be used to interpolate the error induced by the ionosphere in FRB observations.
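The thin-shell parameterization has a standard geometric core, sketched here from the general ionospheric-VLBI literature rather than from the thesis itself: the slant delay at observing frequency f scales as the vertical total electron content (TEC) times a mapping function set by the elevation angle and an assumed shell height:

import math

def slant_tec(vtec_tecu, elev_deg, shell_height_m=450e3, earth_radius_m=6371e3):
    # Thin-shell mapping: project vertical TEC onto the line of sight through
    # a single shell at an assumed height (450 km is a common convention).
    sin_zp = earth_radius_m / (earth_radius_m + shell_height_m) * math.cos(math.radians(elev_deg))
    return vtec_tecu / math.sqrt(1.0 - sin_zp**2)

def iono_delay_m(stec_tecu, freq_hz):
    # Ionospheric group delay in meters: 40.3 * TEC / f^2, TEC in electrons/m^2.
    return 40.3 * (stec_tecu * 1e16) / freq_hz**2

stec = slant_tec(vtec_tecu=10.0, elev_deg=30.0)   # 10 TECU overhead, illustrative
print(f"slant TEC {stec:.1f} TECU; delay at 600 MHz: {iono_delay_m(stec, 600e6):.2f} m")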
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Z-hadron correlations to probe the medium response in PbPb and pp collisions at √s_NN = 5.02 TeV</title>
<link href="https://hdl.handle.net/1721.1/164136" rel="alternate"/>
<author>
<name>Chou, Pin-Chun</name>
</author>
<id>https://hdl.handle.net/1721.1/164136</id>
<updated>2025-12-04T03:09:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Using Z-hadron correlations to probe the medium response in PbPb and pp collisions at √ˢNN = 5.02 TeV
Chou, Pin-Chun
The first measurement of the Z-hadron two-particle correlation function is reported in PbPb collisions at √s_NN = 5.02 TeV, using the PbPb collision data taken in 2018. The integrated luminosity of the PbPb data is 1.67 ± 0.03 nb⁻¹, which made the analysis possible for the first time. Collision data with at least one Z boson with 40 &lt; pT &lt; 200 GeV/c are analyzed. The azimuthal angle distributions with respect to the Z bosons, which are sensitive to modification of the in-medium parton shower and to medium recoils, are measured in central PbPb collisions. A significant modification of the two-particle correlation in pseudorapidity difference and azimuthal angle difference is observed with respect to the reference measured in pp collisions. These results are compared to phenomenological models that include medium recoil, medium response, and thermalization of the QGP wakes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Materializing Light: Real-time, Handheld Fabrication of Programmable Structural Color</title>
<link href="https://hdl.handle.net/1721.1/164134" rel="alternate"/>
<author>
<name>Myers, Paris G.</name>
</author>
<id>https://hdl.handle.net/1721.1/164134</id>
<updated>2025-12-04T03:09:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Materializing Light: Real-time, Handheld Fabrication of Programmable Structural Color
Myers, Paris G.
Structural color is nature’s programmable color palette. While pigments and dyes absorb light to produce color, structural color uses nanoscale, light-reflecting structures to appear iridescently colored. We present MorphoChrome, an optical device for real-time, handheld, programmable structural color fabrication. Analogous to painting with light, MorphoChrome creates multicolor, structurally colored designs by exposing a commercially available holographic photopolymer film to user-controlled wavelengths. Within the device, red, green, and blue laser diodes pass through an optical prism, combining light and producing mixed color outputs on the film. Additionally, we introduce a resin-based process to adhere and integrate the structurally colored film with flexible and rigid objects and diverse making processes. In this thesis, we focus on the device optical design and fabrication, color mixing, the color output UI controller, device aperture tips, and the holographic photopolymer film adherence process. We evaluate the available color space and color resolution, and demonstrate creative fabrication applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Next Week Tonight: Simulating Counterfactual Narratives of the future using Agentic Knowledge Graphs</title>
<link href="https://hdl.handle.net/1721.1/164132" rel="alternate"/>
<author>
<name>Agarwal, Gauri</name>
</author>
<id>https://hdl.handle.net/1721.1/164132</id>
<updated>2025-12-04T03:09:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Next Week Tonight: Simulating Counterfactual Narratives of the future using Agentic Knowledge Graphs
Agarwal, Gauri
Understanding the ripple effects of events—both real and speculative—is essential for navigating complex futures. Large Language Models (LLMs) have emerged as powerful tools that offer a user-friendly and narrative experience for question answering and reasoning across large corpora of unstructured data [15, 96]. While LLMs can respond to complex ‘what-if’ questions, they typically provide single, unverifiable answers. Even with retrieval-augmented generation (RAG) that grounds LLM responses on external sources, the opacity of reasoning pathways undermines trust in model outputs [97]. Next Week Tonight (NWT) builds further on the narrative and reasoning capability of LLMs by enhancing the exploration of what-if futures and making it more transparent and evidence-based. NWT exposes the underlying knowledge graph, allowing users to inspect inference pathways directly. This also enables the generation of multiple, diverse scenarios from a single condition—each following different but explainable causal chains. In testing 15 counterfactual prompts that span diverse news topics, NWT produced scenario narratives that were rated as significantly more causally coherent, transparent, and easier to audit than standard chat completions. Beyond technical performance, NWT reinvents scenario planning as an interactive narrative experience, encouraging curiosity, critical thinking, and deeper engagement with the complexities of future events. By surfacing not only what could happen but why and how, NWT aims to empower analysts, policymakers, and the public to navigate uncertainty with greater clarity and confidence. GitHub: https://github.com/viral-medialab/next-week-tonight
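The inspectable inference pathways described above map naturally onto a directed graph; here is a minimal sketch of the general idea using networkx, with invented contents (NWT's actual schema and agent loop are not shown):

import networkx as nx

# Toy causal knowledge graph: nodes are events, edges are asserted
# "leads to" links carrying the evidence that supports them.
g = nx.DiGraph()
g.add_edge("port strike", "shipping delays", evidence="news article A")
g.add_edge("shipping delays", "parts shortage", evidence="analyst report B")
g.add_edge("port strike", "dockworker wage deal", evidence="news article C")
g.add_edge("parts shortage", "factory slowdown", evidence="earnings call D")

# Each root-to-leaf path is one auditable causal chain behind a scenario.
for path in nx.all_simple_paths(g, "port strike", "factory slowdown"):
    steps = [f"{a} -> {b} [{g.edges[a, b]['evidence']}]" for a, b in zip(path, path[1:])]
    print("; ".join(steps))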
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inferring Clonal Dynamics in Blood using Single-Cell Measurements</title>
<link href="https://hdl.handle.net/1721.1/164129" rel="alternate"/>
<author>
<name>Perry, Andrea N.</name>
</author>
<id>https://hdl.handle.net/1721.1/164129</id>
<updated>2025-12-04T03:09:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Inferring Clonal Dynamics in Blood using Single-Cell Measurements
Perry, Andrea N.
In this work, we uniquely tag hematopoietic (blood) stem cells with genetic barcodes and follow their progeny over time to ask whether clonally related cells in myeloproliferative neoplasms (MPNs) favor particular blood cell fates. Myeloproliferative neoplasms are clonal disorders driven most frequently by the JAK2-V617F mutation, which arises in a single hematopoietic stem cell (HSC) and ultimately dominates the normal process of blood cell production. Although all patients carry the same driver mutation, they still branch into three distinct disease forms—essential thrombocythemia (ET), polycythemia vera (PV), or primary myelofibrosis (PMF)—and the reason for this variation remains unknown. One compelling hypothesis is that the JAK2-V617F mutation may arise in HSC subsets with intrinsic biases toward platelet-producing cells (as in ET) or red blood cell precursors (as in PV). To investigate this question, we analyzed bone-marrow cKit⁺ cells from mice engineered for inducible MPN disease and CRISPR array repair lineage tracing (CARLIN), using single-cell RNA sequencing. Our gene expression analysis shows that the mutation keeps key signaling and stress-response genes switched on and boosts growth-promoting enzymes, collectively pushing blood production toward the myeloid line. At the resolution of individual CARLIN clones (i.e., cells grouped by a shared progenitor), however, we observe no robust mutation-induced lineage bias—an outcome attributable to limited clone recovery and inter-mouse variability. Crucially, this work establishes a scalable analysis pipeline for future, higher-yield CARLIN experiments. Enhancing lineage-tracing sensitivity, barcode diversity, and biological replication will be essential to test whether these interferon-/stress-response and kinase programs manifest as subtle, clone-level fate biases in JAK2-driven MPN.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SERenaDE: Hardware Acceleration of Cloud Serialization Frameworks</title>
<link href="https://hdl.handle.net/1721.1/164059" rel="alternate"/>
<author>
<name>Zarkos, Christos V.</name>
</author>
<id>https://hdl.handle.net/1721.1/164059</id>
<updated>2025-11-26T03:06:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">SERenaDE: Hardware Acceleration of Cloud Serialization Frameworks
Zarkos, Christos V.
Serialization frameworks are a fundamental operation of datacenters, as they enable language- and platform-neutral communication and storage. However, software serialization faces major performance bottlenecks, resulting in a significant fraction of cloud cycles dedicated to this process. Prior work has proposed specialized hardware accelerators to address these overheads. While these proposals achieve considerable speedups, they are expensive in terms of verification, fabrication, and deployment, and often hardcode too many details of the (de)serialization framework in hardware. We propose SERenaDE, a serialization framework designed to leverage general-purpose accelerators already deployed in datacenters to accelerate and offload serialization to hardware. Specifically, we repurpose the Intel In-Memory Analytics Accelerator (IAA), an accelerator engine offering fast compression, to enable fast, user-transparent serialization and deserialization, completely removing software serialization from the execution pipeline. We evaluate our system on latest-generation production machines, both with synthetic microbenchmarks and with representative open-source fleet-wide benchmarks. Our results show comparable per-request latency across all message types, while significantly improving throughput (especially at the tail), maintaining thread scalability, and achieving high compression ratios alongside substantial speedups for larger messages. Under 95th-percentile latency constraints, SERenaDE improves serialization and deserialization throughput by 13% and 30%, respectively, while achieving 0.2x to 6.94x smaller serialized message sizes for messages whose total memory layout is larger than 4KB.
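A conceptual sketch of compression-as-serialization follows, with zlib standing in for the IAA engine; the Point struct and helper names are illustrative assumptions, and the real system offloads the compression call to hardware while handling framework semantics:

    import ctypes
    import zlib

    class Point(ctypes.Structure):
        _fields_ = [("x", ctypes.c_double), ("y", ctypes.c_double), ("tag", ctypes.c_uint32)]

    def serialize(obj: ctypes.Structure) -> bytes:
        # capture the object's raw in-memory layout and compress it for the
        # wire; a hardware compression engine would replace this call
        raw = ctypes.string_at(ctypes.addressof(obj), ctypes.sizeof(obj))
        return zlib.compress(raw)

    def deserialize(buf: bytes, ty):
        obj = ty()
        ctypes.memmove(ctypes.addressof(obj), zlib.decompress(buf), ctypes.sizeof(obj))
        return obj

    # note: raw-layout capture is not portable across ABIs; this only
    # conveys the idea of skipping field-by-field software encoding
    p = deserialize(serialize(Point(1.5, -2.0, 7)), Point)
    assert (p.x, p.y, p.tag) == (1.5, -2.0, 7)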
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physics-Optimized Design of 3D Shapes with Part-Based Control</title>
<link href="https://hdl.handle.net/1721.1/164056" rel="alternate"/>
<author>
<name>Zhan, Sean</name>
</author>
<id>https://hdl.handle.net/1721.1/164056</id>
<updated>2025-11-26T03:06:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Physics-Optimized Design of 3D Shapes with Part-Based Control
Zhan, Sean
We introduce PhysiOPart, a computational approach for rapid generative design of 3D objects optimized for physical integrity. PhysiOPart enables users to edit and combine object parts to explore a vast design space. To model continuous surfaces of arbitrary resolution without topology restrictions, we parametrize parts with neural implicit representations. However, when parts are assembled to form an object, the resulting geometry is not guaranteed to be functional. Existing generative modeling approaches use task-specific neural predictors to approximate physical behaviors with limited accuracy. We propose an end-to-end differentiable physics simulation pipeline that performs linear static analysis to optimize for user-specified objectives, leveraging learned geometry priors. Our part-based formulation with the finite element method is highly customizable, allowing for user-defined per-part materials, loads, and boundary conditions. The optimized designs exhibit improved physical behavior and can be fabricated.
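To make the linear static analysis step concrete, here is a minimal sketch on a 1D chain of springs, a stand-in for the thesis' FEM on 3D part geometry; the stiffnesses and loads below are made up for illustration:

    import numpy as np

    # assemble the global stiffness matrix K for 3 springs / 4 nodes,
    # apply boundary conditions and loads, then solve K u = f
    k = [100.0, 100.0, 50.0]                  # per-element stiffnesses (N/m)
    K = np.zeros((4, 4))
    for e, ke in enumerate(k):                # standard 2-node element assembly
        K[e:e+2, e:e+2] += ke * np.array([[1, -1], [-1, 1]])

    f = np.zeros(4)
    f[3] = 10.0                               # 10 N pulling on the free end
    free = [1, 2, 3]                          # node 0 is clamped
    u = np.zeros(4)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
    print(u)                                  # displacements: [0, 0.1, 0.2, 0.4]

In the differentiable pipeline described above, such a solve would be one step of an optimization loop whose gradients flow back to the part parameters.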
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Assembly of Curved Structures from Flat Configuration</title>
<link href="https://hdl.handle.net/1721.1/164055" rel="alternate"/>
<author>
<name>Zaman, Akib</name>
</author>
<id>https://hdl.handle.net/1721.1/164055</id>
<updated>2025-11-26T03:06:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fast Assembly of Curved Structures from Flat Configuration
Zaman, Akib
Imagine deploying an emergency shelter that transitions seamlessly from a flat configuration to a lifted structure, or a folded robot that is sent through a tunnel and subsequently activated to expand into a larger form at the endpoint, with a single, collective pull of strings. This scenario raises two critical questions: (i) how to decompose the structure into a flat state that encodes the 3D geometry, and (ii) where to place strings through the unit modules to achieve complete actuation. Although these questions have been explored individually, comprehensive solutions remain scarce. To address this challenge, this thesis presents a computational approach for designing freeform structures that can be rapidly assembled from initially flat configurations by a single string pull. Target structures are decomposed into rigid, spatially varied quad tiles optimized to approximate a user-provided surface, forming a flat mechanical linkage. A two-step algorithm is then applied to determine a physically realizable string path that controls only a subset of tiles, enabling smooth actuation from flat to assembled configuration. First, the minimal subset of tiles required for string control is computed by considering both the structure’s geometry and inter-tile interactions. Second, a valid string path is identified through these tiles that minimizes friction, thereby transforming the flat linkage into the target 3D form upon tightening a single string. The resulting designs can be manufactured in flat form using computational fabrication techniques such as 3D printing, CNC milling, or molding, thereby simplifying both production and transportation. Validation is provided through a series of physical prototypes and application case studies, ranging from medical devices and space shelters to large-scale architectural installations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graph-based Vector Search Algorithms for Retrieval-Augmented AI Systems</title>
<link href="https://hdl.handle.net/1721.1/164033" rel="alternate"/>
<author>
<name>Zhang, Ziyu</name>
</author>
<id>https://hdl.handle.net/1721.1/164033</id>
<updated>2025-11-26T03:06:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Graph-based Vector Search Algorithms for Retrieval-Augmented AI Systems
Zhang, Ziyu
The recent advancement of large language models (LLMs) and large multimodal models (LMMs) greatly enhances the capabilities of AI systems such as recommendation systems and coding assistants, making them more practical for real-world deployment. However, these models cannot directly interact with large volumes of data in a knowledge corpus at inference/task time due to inherent architectural limits and cost concerns. Encoding data into vector embeddings and leveraging approximate nearest neighbor search (ANNS) has thus become an important data processing primitive in AI systems following the introduction of retrieval-augmented generation (RAG). However, the complexity of the tasks these AI systems aim to solve introduces challenges for existing ANNS algorithms. I developed methods that extend existing ANNS algorithms to address two such challenges: freshness and heterogeneity in the data.

Graph-based ANNS algorithms offer a superb cost-versus-approximation-quality trade-off while following the simple intuition of best-first search. I focus on adapting graph-based ANNS algorithms to two settings with emerging challenges. (1) Data is updated constantly. Existing algorithms are inefficient under deletions and not robust to different orderings of the workload. I propose methods addressing these problems and develop an algorithm that supports updates effectively and efficiently, based on Vamana, a state-of-the-art graph-based ANNS algorithm. (2) Data is heterogeneous in format, modality, and how it relates to a query, making similarity difficult to capture with the canonical ANNS definition. I explore ways to model the similarity between heterogeneous sources and use graph-based ANNS approaches to perform semantic search in this setting. I test this approach within an end-to-end multimodal question-answering system developed in-house.
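For readers unfamiliar with the best-first intuition mentioned above, here is a minimal sketch of the standard greedy search layer used by graph-based ANNS indexes such as Vamana; the graph/vectors inputs and the ef beam width are generic assumptions, not the thesis' implementation:

    import heapq
    import numpy as np

    def greedy_search(graph, vectors, query, entry, ef=16):
        """graph: dict node -> list of neighbor ids; vectors: (n, d) array."""
        dist = lambda i: float(np.linalg.norm(vectors[i] - query))
        visited = {entry}
        candidates = [(dist(entry), entry)]   # min-heap: closest unexpanded first
        results = [(-dist(entry), entry)]     # max-heap via negation: worst kept on top
        while candidates:
            d, node = heapq.heappop(candidates)
            if d > -results[0][0] and len(results) >= ef:
                break                         # best candidate is worse than worst result
            for nb in graph[node]:
                if nb not in visited:
                    visited.add(nb)
                    d_nb = dist(nb)
                    if len(results) < ef or d_nb < -results[0][0]:
                        heapq.heappush(candidates, (d_nb, nb))
                        heapq.heappush(results, (-d_nb, nb))
                        if len(results) > ef:
                            heapq.heappop(results)
        return sorted((-d, i) for d, i in results)

    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(100, 8))
    graph = {i: [int(j) for j in rng.choice(100, 5, replace=False)] for i in range(100)}
    print(greedy_search(graph, vecs, rng.normal(size=8), entry=0)[:3])

The freshness and heterogeneity challenges above concern how this simple loop behaves when the graph is edited in place or when dist must span modalities.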
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization of pGaN-gate power HEMTs</title>
<link href="https://hdl.handle.net/1721.1/164028" rel="alternate"/>
<author>
<name>Yu, Yue</name>
</author>
<id>https://hdl.handle.net/1721.1/164028</id>
<updated>2025-11-26T03:06:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Characterization of pGaN-gate power HEMTs
Yu, Yue
This thesis presents a comprehensive study of p-GaN gate GaN High Electron Mobility Transistors (HEMTs) with a focus on understanding how fabrication process variations and gate structural designs impact key electrical performance metrics. Five industry-fabricated wafers, each processed with distinct etch depths, contact strategies, and p-GaN surface configurations, were characterized using a combination of DC and pulsed I–V measurements. Full-transistor modules were evaluated alongside specialized test structures to enable both system-level and localized analysis. DC measurements using the Keysight B1505A system revealed that more aggressive gate contact schemes improved ON-resistance and transconductance, but often at the cost of increased gate leakage and reduced threshold control. Pulsed I–V characterization with the Auriga AU4750 system uncovered dynamic ON-resistance degradation and charge trapping effects, especially under high drain bias conditions. Extracted time constants demonstrated process-dependent trends, with wafers retaining more of the p-GaN surface exhibiting slower charge detrapping and more severe transient effects. Specialized test structures provided additional insights into gate lateral conduction, sheet resistance, and contact asymmetry, reinforcing the connection between device layout, processing, and observed variability. These findings highlight critical trade-offs in the design and fabrication of p-GaN gate GaN HEMTs and offer design-aware strategies for optimizing performance and reliability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Crystallization of Glauber's salt</title>
<link href="https://hdl.handle.net/1721.1/164009" rel="alternate"/>
<author>
<name>Coberly, C. Wheeler.</name>
</author>
<id>https://hdl.handle.net/1721.1/164009</id>
<updated>2025-11-25T06:32:25Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Crystallization of Glauber's salt
Coberly, C. Wheeler.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 39).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of angular scintillation of radar echoes</title>
<link href="https://hdl.handle.net/1721.1/164006" rel="alternate"/>
<author>
<name>Graham, James William.</name>
</author>
<id>https://hdl.handle.net/1721.1/164006</id>
<updated>2025-11-25T06:32:44Z</updated>
<published>1952-01-01T00:00:00Z</published>
<summary type="text">Analysis of angular scintillation of radar echoes
Graham, James William.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1952
</summary>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The design and construction of an ultra-high vacuum field-ion microscope.</title>
<link href="https://hdl.handle.net/1721.1/164002" rel="alternate"/>
<author>
<name>Olson, Gregory Bruce.</name>
</author>
<id>https://hdl.handle.net/1721.1/164002</id>
<updated>2025-11-25T06:32:36Z</updated>
<published>1970-01-01T00:00:00Z</published>
<summary type="text">The design and construction of an ultra-high vacuum field-ion microscope.
Olson, Gregory Bruce.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1970; Bibliography: leaf 35.
</summary>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shipleasing as a prospective method of l/t financing for international shipowners.</title>
<link href="https://hdl.handle.net/1721.1/163999" rel="alternate"/>
<author>
<name>Angelicoussis, John Anthony.</name>
</author>
<id>https://hdl.handle.net/1721.1/163999</id>
<updated>2025-11-25T06:32:28Z</updated>
<published>1974-01-01T00:00:00Z</published>
<summary type="text">Shipleasing as a prospective method of l/t financing for international shipowners.
Angelicoussis, John Anthony.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1974; Includes bibliographical references.
</summary>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effects of rapid thermal annealing on gallium arsenide grown by MOCVD on silicon substrates</title>
<link href="https://hdl.handle.net/1721.1/163997" rel="alternate"/>
<author>
<name>Lehman, LeNore Louise.</name>
</author>
<id>https://hdl.handle.net/1721.1/163997</id>
<updated>2025-11-25T06:32:40Z</updated>
<published>1988-01-01T00:00:00Z</published>
<summary type="text">The effects of rapid thermal annealing on gallium arsenide grown by MOCVD on silicon substrates
Lehman, LeNore Louise.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1988; Includes bibliographical references.
</summary>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Acoustic-phonetic and lexical constraints in word recognition: lexical access using partial information</title>
<link href="https://hdl.handle.net/1721.1/163996" rel="alternate"/>
<author>
<name>Huttenlocher, Daniel P.</name>
</author>
<id>https://hdl.handle.net/1721.1/163996</id>
<updated>2025-11-25T06:32:20Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Acoustic-phonetic and lexical constraints in word recognition: lexical access using partial information
Huttenlocher, Daniel P.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1984; Bibliography: leaves 73-77.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of the merits of a four-element coplanar vacuum tube when used as a modulator at carrier telephone frequencies</title>
<link href="https://hdl.handle.net/1721.1/163994" rel="alternate"/>
<author>
<name>Perkins, Edwin H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163994</id>
<updated>2025-11-25T06:32:33Z</updated>
<published>1930-01-01T00:00:00Z</published>
<summary type="text">An investigation of the merits of a four-element coplanar vacuum tube when used as a modulator at carrier telephone frequencies
Perkins, Edwin H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1930; Includes bibliographical references (leaf 115).
</summary>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A precision method for the determination of dew points of complex gaseous systems</title>
<link href="https://hdl.handle.net/1721.1/163991" rel="alternate"/>
<author>
<name>Cox, John Tatum.</name>
</author>
<id>https://hdl.handle.net/1721.1/163991</id>
<updated>2025-11-25T06:32:38Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">A precision method for the determination of dew points of complex gaseous systems
Cox, John Tatum.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 43).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AbsInt-AI: Language Models for Abstract Interpretation</title>
<link href="https://hdl.handle.net/1721.1/163731" rel="alternate"/>
<author>
<name>Wang, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/163731</id>
<updated>2025-11-18T06:27:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">AbsInt-AI: Language Models for Abstract Interpretation
Wang, Michael
Static program analysis is a foundational technique in software engineering for reasoning about program behavior. Traditional static analysis algorithms model programs as logical systems with well-defined semantics, enabling strong guarantees such as never missing a bug. However, traditional analyses almost always rely on uniform, hard-coded heap abstractions. While more adaptive abstractions are possible in theory, they are rarely implemented in practice due to their complexity and fragility. This limits their precision and flexibility, especially in dynamic languages like JavaScript, where heap structures are heterogeneous and difficult to analyze statically. In this work, we introduce AbsInt-AI, a language-model-guided static analysis framework based on abstract interpretation with adaptive, per-object heap abstractions for JavaScript. This enables the analysis to leverage high-level cues, such as naming conventions and access patterns, without requiring brittle, hand-engineered heuristics. Importantly, the LM agent operates within a bounded interface and never directly manipulates program state, preserving the soundness guarantees of abstract interpretation. AbsInt-AI reduces false positives by up to 34% for bug detection compared to traditional static analysis while maintaining soundness. Our ablations show that the LM’s interactions with the analysis environment are crucial, outperforming non-agentic direct LM predictions by 25%.
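For context, here is a minimal sketch of the abstract-interpretation idea using a toy interval domain; this is a hypothetical simplification for illustration, since the thesis' analysis targets JavaScript heap abstractions rather than numeric intervals:

    from typing import NamedTuple

    class Interval(NamedTuple):
        lo: float
        hi: float

        def __add__(self, other):
            # sound transfer function: the result covers every concrete sum
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def join(self, other):
            # least upper bound: merges the two branches of a conditional
            return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    x = Interval(0, 10)
    y = Interval(-3, 3)
    print(x + y)       # Interval(lo=-3, hi=13): over-approximates all runs
    print(x.join(y))   # Interval(lo=-3, hi=10): state after either branch

The soundness property mentioned above means the abstract result always contains the concrete behavior, which is why the LM agent is confined to choosing abstractions rather than computing them.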
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Video Streaming at Scale Across Devices, Networks, and Temporal Drift</title>
<link href="https://hdl.handle.net/1721.1/163730" rel="alternate"/>
<author>
<name>Sharma, Harsha</name>
</author>
<id>https://hdl.handle.net/1721.1/163730</id>
<updated>2025-11-18T06:27:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Video Streaming at Scale Across Devices, Networks, and Temporal Drift
Sharma, Harsha
Video-streaming platforms tune dozens of playback parameters across thousands of client devices. Our measurements from Prime Video show that device-specific tuning can enhance stream quality. Yet traditional black-box optimization methods like Bayesian optimization become prohibitively expensive due to the large configuration space and the constant emergence of new device types. We introduce AZEEM, a scalable recommendation system leveraging few-shot prediction to rapidly identify promising configurations for new devices. The key insight behind AZEEM is that devices exhibit performance similarities that enable predictions from limited observations. Trained on offline data of device-playback configuration interactions, AZEEM efficiently narrows down the search space to a small set of configurations likely to contain optimal or near-optimal candidates. Additionally, AZEEM addresses temporal distribution shift—where the best-performing configurations change over time—by recommending a small, robust set of candidates rather than a single configuration. Evaluations using large-scale real-world datasets show that AZEEM reduces exploration cost by 5.8–13.6× and improves stream quality compared to state-of-the-art Bayesian optimization and multi-armed bandit approaches, enabling effective device-specific optimization at scale. The material in this thesis is primarily sourced from the paper "Predict, Prune, Play: Efficient Video Playback Optimization Under Device Diversity and Drift" authored by Harsha Sharma, Pouya Hamadanian, Arash Nasr-Esfahany, Zahaib Akhtar, Mohammad Alizadeh, which is currently under submission.
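A hedged sketch of similarity-based few-shot scoring in the spirit described above; the score matrix, recommend helper, and nearest-device rule are illustrative assumptions, not AZEEM itself:

    import numpy as np

    # rows: devices with fully observed per-configuration quality scores;
    # a new device's full row is predicted from its few measured configs
    obs = np.array([[0.90, 0.75, 0.60],
                    [0.85, 0.80, 0.55],
                    [0.40, 0.65, 0.95]])

    def recommend(few_shot_scores, mask, k=2, top=2):
        """few_shot_scores: partial scores for the new device; mask marks tried configs."""
        diffs = ((obs[:, mask] - few_shot_scores[mask]) ** 2).mean(axis=1)
        nearest = np.argsort(diffs)[:k]          # most similar known devices
        pred = obs[nearest].mean(axis=0)         # few-shot prediction for every config
        return np.argsort(pred)[::-1][:top]      # small, robust candidate set

    new = np.array([0.88, np.nan, np.nan])
    mask = np.array([True, False, False])
    print(recommend(new, mask))                  # e.g. configurations [0, 1]

Returning a short ranked set rather than a single winner is what makes the recommendation robust to the temporal drift discussed above.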
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oreo: Protecting ASLR Against Microarchitectural Attacks</title>
<link href="https://hdl.handle.net/1721.1/163729" rel="alternate"/>
<author>
<name>Song, Shixin</name>
</author>
<id>https://hdl.handle.net/1721.1/163729</id>
<updated>2025-11-18T06:27:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Oreo: Protecting ASLR Against Microarchitectural&#13;
Attacks
Song, Shixin
Address Space Layout Randomization (ASLR) is one of the most prominently deployed mitigations against memory corruption attacks. ASLR randomly shuffles program virtual addresses to prevent attackers from knowing the location of program contents in memory. Microarchitectural side channels have been shown to defeat ASLR through various hardware mechanisms. We systematically analyze existing microarchitectural attacks and identify multiple leakage paths. Given the vast attack surface exposed by ASLR, it is challenging to effectively prevent leaking the ASLR secret against microarchitectural attacks. Motivated by this, we present Oreo, a software-hardware co-design mitigation that strengthens ASLR against these attacks. Oreo uses a new memory mapping interface to remove secret randomized bits in virtual addresses before translating them to their corresponding physical addresses. This extra step hides randomized virtual addresses from microarchitecture structures, preventing side channels from leaking ASLR secrets. Oreo is transparent to user programs and incurs low overhead. We prototyped and evaluated our design on Linux using the hardware simulator gem5.
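A toy software illustration of the masking idea follows; the bit range, constants, and helpers are hypothetical, and Oreo itself is a hardware-software co-design rather than a Python function:

    # assumed randomized bit range of the ASLR slide (hypothetical)
    RANDOM_MASK = 0x0000_7FFF_C000_0000

    def masked(vaddr: int) -> int:
        # strip the secret randomized bits before translation, so caches,
        # TLBs, and predictors never observe them
        return vaddr & ~RANDOM_MASK

    def with_slide(base: int, slide: int) -> int:
        return base | (slide & RANDOM_MASK)  # OR keeps the slide inside its bit field

    code_base = 0x0000_0000_0040_1000        # fixed, non-random part of the layout
    a = with_slide(code_base, 0x0000_1A2B_C000_0000)
    b = with_slide(code_base, 0x0000_7F00_4000_0000)
    # two processes with different slides present identical masked addresses,
    # so a microarchitectural side channel on them reveals nothing of the slide
    assert masked(a) == masked(b) == code_base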
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Counting Substructures with Graph Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/163728" rel="alternate"/>
<author>
<name>Tahmasebi, Behrooz</name>
</author>
<id>https://hdl.handle.net/1721.1/163728</id>
<updated>2025-11-18T06:27:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">On Counting Substructures with Graph Neural Networks
Tahmasebi, Behrooz
To compute a graph representation, most Graph Neural Networks (GNNs) follow two steps: first, each graph is decomposed into a number of subgraphs (which we call the recursion step), and then the collection of subgraphs is encoded by several iterative pooling steps. While recently proposed higher-order networks show a remarkable increase in expressive power through a single recursion on larger neighborhoods followed by iterative pooling, the power of deeper recursion in GNNs without any iterative pooling is still not fully understood. To make this concrete, we consider a pure recursion-based GNN, which we call the Recursive Neighborhood Pooling GNN (RNP-GNN). The expressive power of an RNP-GNN and its computational cost quantify the power of (pure) recursion for a graph representation network. We measure this power by counting substructures, a key limitation of Message Passing Neural Networks (MPNNs), and show how RNP-GNNs can exploit the sparsity of the underlying graph to achieve low-cost, powerful representations. We also compare against recent lower bounds on time complexity and show that recursion-based networks are near-optimal.
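A toy sketch of recursion without iterative pooling in the spirit of RNP-GNN; this is a hypothetical simplification, and the radii schedule and tuple-based pooling below are our assumptions rather than the paper's operator:

    # a node's representation is a pooled, recursively computed summary of
    # its r-hop neighborhood, with smaller radii at deeper recursion levels
    def k_hop(graph, node, r):
        frontier, seen = {node}, {node}
        for _ in range(r):
            frontier = {v for u in frontier for v in graph[u]} - seen
            seen |= frontier
        return seen

    def rnp_embed(graph, node, radii, feat):
        """graph: dict node -> set of neighbors; radii: shrinking hop radii."""
        if not radii:
            return (feat[node],)
        neighborhood = k_hop(graph, node, radii[0])
        sub = {u: graph[u] & neighborhood for u in neighborhood}  # induced subgraph
        pooled = tuple(sorted(rnp_embed(sub, u, radii[1:], feat) for u in neighborhood))
        return (feat[node], pooled)  # sorting makes pooling permutation-invariant

    g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
    print(rnp_embed(g, 0, radii=[2, 1], feat={0: "a", 1: "b", 2: "b", 3: "c"}))

Because the recursion descends into induced sub-neighborhoods, sparse graphs keep the neighborhood sets, and hence the cost, small.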
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Control of a Multi-Fingered Soft-Rigid Hybrid Robotic Hand</title>
<link href="https://hdl.handle.net/1721.1/163727" rel="alternate"/>
<author>
<name>Norton, Wil J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163727</id>
<updated>2025-11-18T06:27:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Control of a Multi-Fingered Soft-Rigid Hybrid&#13;
Robotic Hand
Norton, Wil J.
In robot hands, compliance improves the quality of grasps and allows for robustness in contact with the environment, which is why soft robot hands, being inherently compliant, generate such interest despite being complex to control and model. In prior work, our lab developed a soft-rigid hybrid architecture for a robot finger, with the intention of making a compliant finger that is as easy to control as a rigid robot. This thesis details the work done to develop this architecture into a five-fingered dexterous gripper capable of highly compliant grasping. Over several iterations, we create an integrated tendon-driven hand that is robust, maintainable, and inexpensive. We develop a precise controller for the soft-rigid hybrid finger and extend it for both position and task-space control of the hand; additionally, we implement variable stiffness control within the controller, without the need for additional hardware, by adjusting gain values in the control loop. We test the ability of the hand to complete the full set of human grasping postures and demonstrate that the soft-rigid architecture enables a high degree of generalization, completing 28 of the 33 identified human grasp postures. Additionally, tests illustrate the hand’s advantages in traditionally difficult manipulation tasks, such as picking up thin deformable objects (a dollar bill, folded cloth) and interfacing with soft or delicate target objects. We adapt a teleoperation system to map the movements of a glove worn by a human operator to the robot gripper, and evaluate the usability of the hand as a teleoperation target across several tasks; the results are promising, showing that the compliance of the hand compensates for operator error and allows for fast completion of tasks requiring environmental or object contact, which are traditionally difficult for rigid robots. Finally, we discuss the use of the teleoperation system to record demonstrations, which we then use to train an imitation learning model, built on denoising diffusion probabilistic models, to complete grasping tasks. We show that our soft-rigid fingers allow a dexterous hand to be trained to perform autonomous grasping with a relatively small set of expert demonstrations, and that the compliance of the physical structure absorbs variance in the environment and object position.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Multi-Modality Imaging Cart for Barrett’s Esophagus</title>
<link href="https://hdl.handle.net/1721.1/163726" rel="alternate"/>
<author>
<name>Qu, Ashley</name>
</author>
<id>https://hdl.handle.net/1721.1/163726</id>
<updated>2025-11-18T06:27:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of Multi-Modality Imaging Cart for&#13;
Barrett’s Esophagus
Qu, Ashley
Barrett’s Esophagus (BE) is a key precursor to esophageal adenocarcinoma (EAC), but current screening and risk assessment methods are ineffective and costly. Many BE cases remain undiagnosed due to asymptomatic patients, and existing risk algorithms rely on patient data rather than biomarkers. This work aims to start building a risk progression model by using a multi-modal imaging system combining autofluorescence spectroscopy, optical coherence tomography, and diffuse reflectance spectroscopy to perform label-free optical biopsies on ex-vivo tissue. These images will be co-registered and validated with histological biomarkers for BE. The ultimate goal is to develop a non-invasive endoscopic capsule and algorithm to better assess BE progression and enhance early detection of EAC.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Complexity of Basis-Restricted Local Hamiltonians</title>
<link href="https://hdl.handle.net/1721.1/163725" rel="alternate"/>
<author>
<name>Ma, Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/163725</id>
<updated>2025-11-18T06:27:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Complexity of Basis-Restricted Local Hamiltonians
Ma, Henry
A major goal of quantum complexity theory is to understand which computational problems can be solved with access to certain quantum resources. The subfield of Hamiltonian complexity specifically considers computational problems that ask about properties of local Hamiltonians, which are of critical importance in quantum complexity because they can be viewed as quantum generalizations of classical constraint satisfaction problems. In this work, we study the complexity of certain restricted variants of the Quantum-k-Sat problem, a quantum analog of the NP-complete k-Sat problem. We introduce new variants of Quantum-k-Sat which place a basis restriction on the input Hamiltonian H = Σᵢ hᵢ. Each variant is defined by a fixed collection of bases B₁, …, Bᵣ of n-qubit space. We require that each Hamiltonian term hᵢ be diagonal in one of these bases. Our results resolve the complexity of certain basis-restricted variants of Quantum-k-Sat. First, we show that the Quantum-6-Sat problem with Hamiltonian terms restricted to be diagonal in an X/Z mixed basis is QMA₁-complete. Second, we combine the basis restriction with a commutativity restriction and show the following easiness result, which applies generally to higher-level quantum systems (qudits) and bases Q and R (which are real-valued and satisfy an overlap condition): the commuting Quantum-Sat problem on qudits, where Hamiltonian terms are diagonal in either the Q basis, the R basis, or a single mixed Q/R basis, is in NP.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Future of Personalized, Aligned Language Models</title>
<link href="https://hdl.handle.net/1721.1/163724" rel="alternate"/>
<author>
<name>Han, Seungwook</name>
</author>
<id>https://hdl.handle.net/1721.1/163724</id>
<updated>2025-11-18T06:27:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Future of Personalized, Aligned Language Models
Han, Seungwook
Aligning Large Language Models (LLMs) to cater to different human preferences, learn new skills, and unlearn harmful behavior is an important problem. Search-based methods, such as Best-of-N or Monte-Carlo Tree Search, are effective but impractical for LLM adaptation due to their high inference cost. On the other hand, using Reinforcement Learning (RL) for adaptation is computationally efficient but performs worse due to the optimization challenges of co-training the value function and the policy. We present a new framework for reward optimization, Value Augmented Sampling (VAS), that can maximize different reward functions using data sampled from only the initial, frozen LLM. VAS solves for the optimal reward-maximizing policy without co-training the policy and the value function, making the optimization stable; it outperforms established baselines such as PPO and DPO on standard benchmarks and achieves results comparable to Best-of-128 at lower inference cost. Unlike existing RL methods that require changing the weights of the LLM, VAS does not require access to the weights of the pre-trained LLM. Thus, it can even adapt LLMs (e.g., ChatGPT) that are available only as APIs. In addition, our algorithm unlocks the new capability of composing several rewards and controlling the extent of each one during deployment time. By bringing together stability, flexibility, and efficiency, we explore the future of aligned, personalized language models that can be adapted seamlessly to meet a wide spectrum of human preferences.
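A minimal decode-time sketch of value-guided sampling as we read the abstract; the value_fn interface, top-k restriction, and beta weighting are our assumptions, not the published VAS algorithm verbatim:

    import torch

    def vas_sample(base_logits, value_fn, beta=1.0, k=20):
        """base_logits: (vocab,) next-token logits from the frozen base LLM.
        value_fn(ids) -> (k,) estimated future reward for appending each candidate."""
        topk = torch.topk(base_logits, k).indices           # candidates from the base model
        scores = base_logits[topk] + beta * value_fn(topk)  # reweight by estimated reward
        probs = torch.softmax(scores, dim=-1)
        return topk[torch.multinomial(probs, 1)].item()

    # with a zero value function this reduces to plain top-k sampling
    logits = torch.randn(50_000)
    token = vas_sample(logits, value_fn=lambda ids: torch.zeros(len(ids)))

Because only the logits are adjusted, the base model's weights never change, which is what allows API-only models to be adapted and several reward value functions to be composed at deployment time.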
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Post-Carbon Seoul: Low-Carbon Interventions for a High-Carbon Housing Stock</title>
<link href="https://hdl.handle.net/1721.1/163723" rel="alternate"/>
<author>
<name>Ji, Yewon</name>
</author>
<id>https://hdl.handle.net/1721.1/163723</id>
<updated>2025-11-18T06:27:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Post-Carbon Seoul: Low-Carbon Interventions for a High-Carbon Housing Stock
Ji, Yewon
Seoul, South Korea, exhibits an exceptionally rapid residential demolition-reconstruction cycle of approximately 30–40 years, resulting in one of the world’s shortest apartment building lifespans. This entrenched status quo, fueled by post-war policies, real estate speculation, and finance models treating housing primarily as a short-term asset, contrasts sharply with other developed nations. This research critiques South Korea’s model of rapid demolition for its significant, often overlooked, environmental impacts and social costs. To evaluate alternatives, the methodology comprises three key stages: A) a comparative analysis of the financial frameworks and sustainability outcomes characterizing Western residential longevity versus the unique Korean housing model; B) the formulation of a novel alternative practice focused on adaptive reuse and retrofitting, specifically tailored to integrate within South Korea’s economic system and cultural context; and C) the practical demonstration and assessment of this practice through a design case study, incorporating strategies like phased interventions and low-carbon materials such as mass timber. The analysis reveals that this alternative extends building lifespan and achieves substantial carbon reductions by preserving the embodied carbon within existing structures. It offers long-term financial benefits, presenting a viable economic pathway aligning key stakeholder interests through enduring value over speculative gains.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling Automatic Question Generation to Large Documents: A Concept-Driven Approach</title>
<link href="https://hdl.handle.net/1721.1/163722" rel="alternate"/>
<author>
<name>Noorbakhsh, Kimia</name>
</author>
<id>https://hdl.handle.net/1721.1/163722</id>
<updated>2025-11-18T06:27:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Scaling Automatic Question Generation to Large Documents: A Concept-Driven Approach
Noorbakhsh, Kimia
Assessing and enhancing human learning through question-answering is vital, especially when dealing with large documents, yet automating this process remains challenging. While large language models (LLMs) excel at summarization and answering queries, their ability to generate meaningful questions from lengthy texts remains underexplored. We propose Savaal, a scalable question-generation system with three objectives: (i) scalability, enabling question generation from hundreds of pages of text; (ii) depth of understanding, producing questions beyond factual recall to test conceptual reasoning; and (iii) domain-independence, automatically generating questions across diverse knowledge areas. Instead of providing an LLM with large documents as context, Savaal improves results with a three-stage processing pipeline. Our evaluation with 76 human experts on 71 papers and PhD dissertations shows that Savaal generates questions that better test depth of understanding by 6.5× for dissertations and 1.5× for papers compared to a direct-prompting LLM baseline. Notably, as document length increases, Savaal’s advantages in higher question quality and lower cost become more pronounced.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ab initio modeling of superconducting nanowire single-photon detectors</title>
<link href="https://hdl.handle.net/1721.1/163720" rel="alternate"/>
<author>
<name>Simon, Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/163720</id>
<updated>2025-11-18T06:27:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ab initio modeling of superconducting nanowire single-photon detectors
Simon, Alejandro
Single-photon detectors are widely used in modern communication, sensing, and computing technology. Among these detectors, superconducting nanowire single-photon detectors (SNSPDs) possess the highest detection efficiencies, the shortest timing jitter, and the lowest dark count rates. However, for several applications, including those in the biological, astronomical, and quantum computation fields, there remains a desire to push the capabilities of modern detectors even further. To realize these improvements, it is necessary to develop an understanding of the physical mechanisms underpinning single-photon detection in these devices. However, current models are phenomenological, requiring experimental data for input, or can only recover qualitative agreement, severely limiting their predictive ability. In this thesis, we begin by describing the existing theoretical frameworks used to model superconducting materials and devices, both in equilibrium and nonequilibrium. We then illustrate an example of a phenomenological approach to modeling superconducting devices by developing an electrothermal model for the superconducting nanowire cryotron and demonstrating its efficacy in predicting the DC behavior and power dissipation of the device. Finally, we expand upon the current state-of-the-art SNSPD theory by utilizing recent advances in density functional theory to develop an ab initio model for the photon detection mechanism of SNSPDs. We then validate the predictions of our model with experimental data from the literature. The resulting model requires no experimental input, provides quantitative predictions of SNSPD performance, and can be extended to describe other superconducting devices, thus enabling the possibility of conducting a systematic search of materials for enhanced device performance.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trapping and Laser Cooling an Ensemble of Ytterbium-171 Atoms for use in an Atomic Clock</title>
<link href="https://hdl.handle.net/1721.1/163718" rel="alternate"/>
<author>
<name>Velez, Gustavo A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163718</id>
<updated>2025-11-18T06:27:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Trapping and Laser Cooling an Ensemble of Ytterbium-171 Atoms for use in an Atomic Clock
Velez, Gustavo A.
Optical lattice clocks require careful preparation of atomic ensembles in order to ensure homogeneous interactions with the clock laser. We demonstrate loading and laser cooling of an ensemble of ytterbium-171 atoms in a 2D optical dipole trap created by an optical cavity. Our loading method ensures that all atoms are located at the intersection of two perpendicular dipole traps, as verified through absorption imaging. Raman sideband cooling was used to cool the atomic ensemble from 15.7 µK to 6.3 µK, as measured through optical sideband spectroscopy on the 578 nm clock transition. Together, these steps improved the transfer of atoms during a Rabi oscillation from the ground to the clock state from approximately 45 percent excitation fraction to 80 percent. The final atomic ensemble preparation is now sufficient for running an atomic clock.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving and Analyzing Model Merging Methods for Adaptation</title>
<link href="https://hdl.handle.net/1721.1/163717" rel="alternate"/>
<author>
<name>Pari, Jyothish</name>
</author>
<id>https://hdl.handle.net/1721.1/163717</id>
<updated>2025-11-18T06:27:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Improving and Analyzing Model Merging Methods for Adaptation
Pari, Jyothish
In this work, we explore the limitations of combining models by averaging intermediate features, referred to as model merging, and propose a new direction for achieving collective model intelligence through what we call compatible specialization. Current methods for model merging, such as parameter and feature averaging, struggle to effectively combine specialized models due to representational divergence during fine-tuning. As models specialize to their individual domains, their internal feature representations become increasingly incompatible, leading to poor performance when attempting to merge them for new tasks. We analyze this phenomenon using centered kernel alignment (CKA) and show that as models specialize, the similarity in their feature space structure diminishes, hindering their capacity for collective use. To address these challenges, we investigate routing-based merging strategies, which offer more flexible methods for combining specialized models by dynamically routing across different layers. This allows us to improve on existing methods by combining features from multiple layers rather than relying on fixed, layer-wise combinations. However, we find that these approaches still face limitations when layers within models are representationally incompatible. Our findings highlight the importance of designing new approaches for model merging that operate on well-defined input and output spaces, similar to how humans communicate through language rather than intermediate neural activations.
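For reference, the linear variant of CKA used in such representational-similarity analyses can be computed as follows; this is the standard formulation (e.g., Kornblith et al.), and the activation matrices here are random placeholders rather than the thesis' data:

    import numpy as np

    def linear_cka(X, Y):
        """Linear CKA between activation matrices X (n, d1) and Y (n, d2),
        rows paired by input example; 1.0 means identical up to rotation/scale."""
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        num = np.linalg.norm(Y.T @ X, "fro") ** 2
        den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
        return num / den

    rng = np.random.default_rng(0)
    acts_a = rng.normal(size=(256, 64))       # e.g. layer k of specialist model A
    Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))
    acts_b = acts_a @ Q                       # orthogonal transform: CKA stays ~1.0
    print(linear_cka(acts_a, acts_b))

Declining CKA between corresponding layers of two fine-tuned specialists is the kind of signal the analysis above uses to diagnose representational divergence.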
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Differences in GPT-4 Treatment by Gender in Healthcare Applications</title>
<link href="https://hdl.handle.net/1721.1/163716" rel="alternate"/>
<author>
<name>Pan, Eileen</name>
</author>
<id>https://hdl.handle.net/1721.1/163716</id>
<updated>2025-11-18T06:27:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating Differences in GPT-4 Treatment by Gender in Healthcare Applications
Pan, Eileen
LLMs already permeate medical settings, supporting patient messaging, medical scribing, and chatbots. While prior work has examined bias in medical LLMs, few studies focus on realistic use cases or analyze the source of the bias. To assess whether medical LLMs exhibit differential performance by gender, we audit their responses and investigate whether the disparities stem from implicit or explicit gender cues. We conduct a large-scale human evaluation of GPT-4 responses to medical questions, including counterfactual gender pairs for each question. Our findings reveal differential treatment based on the original patient gender. Specifically, responses for women more often recommend supportive resources, while those for men advise emergency care. Additionally, LLMs tend to downplay medical urgency for female patients and escalate it for male patients. Given rising interest in “LLM-as-a-judge” approaches, we also evaluate whether LLMs can serve as a proxy for human annotators in identifying disparities. We find that LLM-generated annotations diverge from human assessments in heterogeneous ways, particularly regarding error detection and relative urgency.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Highly Integrated Graphene-Based Chemical Sensing Platform for Structural Monitoring Applications</title>
<link href="https://hdl.handle.net/1721.1/163715" rel="alternate"/>
<author>
<name>López Ángeles, Christian Emmanuel</name>
</author>
<id>https://hdl.handle.net/1721.1/163715</id>
<updated>2025-11-18T06:27:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Highly Integrated Graphene-Based Chemical Sensing Platform for Structural Monitoring Applications
López Ángeles, Christian Emmanuel
Two-dimensional materials, such as graphene, hold promise for sensing applications. When graphene is employed as a transducer, its remarkable surface-to-volume ratio enables the sensor channel to be readily modulated in response to chemical changes in proximity to its surface, effectively converting chemical signals into the electrical domain. However, their utilization has been constrained by variations in device-to-device performance arising from synthesis and fabrication processes. To address this challenge, we employ Graphene Field Effect Transistors (GFETs) in developing a robust and multiplexed chemical sensing platform. This platform comprises a silicon chip with multiple arrays of sensing units distributed on its surface, coupled with custom-designed high-speed readout electronics for structural monitoring applications. For example, in harsh environmental conditions, structures constructed from reinforced concrete may experience degradation due to corrosion, a chemical process initiated by carbonation from atmospheric CO₂ and significant fluctuations in temperature and humidity. Under normal conditions, concrete maintains a pH level within the alkaline range of 13 to 14. However, when subjected to carbonation, its pH decreases to values between 8 and 9. Our platform excels in real-time pH monitoring. By conducting I-V sweep measurements in the sensor channel, we have established a correlation between [H⁺] concentration and the device transfer characteristics, i.e., the gate-source voltage (V_GS) at graphene’s Dirac point, with an accuracy of roughly 97%. Additionally, we evaluate changes in graphene channel resistance induced by pH variations. Together, this system and correlation allow for the prompt detection of corrosion-induced deviations within a concrete environment.
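As a worked illustration of such a calibration, one could fit a linear map from the Dirac-point V_GS to pH; all voltages and pH values below are invented for illustration and are not the thesis' measurements:

    import numpy as np

    # hypothetical calibration points: Dirac-point gate voltage vs. buffer pH
    V_GS = np.array([0.42, 0.38, 0.33, 0.29, 0.25])   # volts, from I-V sweeps
    pH   = np.array([6.0, 7.0, 8.0, 9.0, 10.0])

    slope, intercept = np.polyfit(V_GS, pH, 1)        # least-squares linear fit
    predict_pH = lambda v: slope * v + intercept
    print(predict_pH(0.31))  # estimated pH for a new sweep's Dirac voltage

A drop in predicted pH from the alkaline 13-14 range toward 8-9 would then flag carbonation-driven corrosion risk.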
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards More Interpretable AI With Sparse Autoencoders</title>
<link href="https://hdl.handle.net/1721.1/163714" rel="alternate"/>
<author>
<name>Engels, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/163714</id>
<updated>2025-11-18T06:26:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards More Interpretable AI With Sparse Autoencoders
Engels, Joshua
While large language models demonstrate remarkable capabilities across diverse domains, the specific representations and algorithms they learn remain largely unknown. The quest to understand these mechanisms holds dual significance: scientifically, it represents a fundamental inquiry into the principles underlying intelligence, while practically, and with growing urgency, it is vital for mitigating risks from these very same increasingly powerful systems. The initial section of this thesis tackles this challenge of interpreting internal language model representations (features) by employing sparse autoencoders (SAEs). An SAE decomposes neural network hidden states into a potentially more interpretable basis. In Chapter 2, we introduce an unsupervised, SAE-based methodology that successfully identifies inherently multi-dimensional features. Notably, we establish that language models causally represent concepts such as days of the week and months of the year using circular structures. This work provided the first definitive evidence of causal, multi-dimensional features, thereby refuting the one-dimensional linear representation hypothesis. Chapter 3 further assesses whether SAEs identify “true” atomic language model features. We compare the generalization performance and data efficiency of linear probes trained on SAE latents against those trained on the original hidden state basis. The negative outcomes of these experiments suggest limitations in SAEs for capturing the true ontology of language models. Motivated by the aforementioned limitations, the second part of this thesis investigates sparse autoencoders themselves, exploring potential improvements and characterizing their failure modes. Chapter 4 examines the portion of activations not reconstructed by SAEs, which we term “Dark Matter.” We find that a significant fraction of this dark matter is linearly predictable, and furthermore, that specific tokens poorly reconstructed by SAEs remain largely consistent across SAE sizes and sparsities. This suggests that SAEs may systematically fail to capture certain input subspaces, which we hypothesize to contain inherently dense features. Subsequently, Chapter 5 investigates a method to enhance SAE utility: freezing the learned SAE parameters and finetuning the surrounding language model components to minimize KL divergence with the original model’s output distribution. This technique results in a 30% to 55% decrease in the cross-entropy loss gap incurred by inserting the SAE into the model.
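For readers new to SAEs, here is a minimal sketch of the standard ReLU sparse autoencoder and its training loss; the dimensions and the L1 coefficient are illustrative assumptions, not the thesis' configuration:

    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        """Decomposes model hidden states into a wider, sparsely active basis."""
        def __init__(self, d_model, d_dict):
            super().__init__()
            self.enc = nn.Linear(d_model, d_dict)
            self.dec = nn.Linear(d_dict, d_model)

        def forward(self, h):
            z = torch.relu(self.enc(h))   # sparse latent coefficients ("features")
            h_hat = self.dec(z)           # reconstruction from the learned dictionary
            return h_hat, z

    sae = SparseAutoencoder(d_model=768, d_dict=768 * 16)
    h = torch.randn(32, 768)              # a batch of hidden states
    h_hat, z = sae(h)
    # typical objective: reconstruction error plus an L1 sparsity penalty
    loss = ((h - h_hat) ** 2).mean() + 1e-3 * z.abs().mean()

The "Dark Matter" of Chapter 4 is precisely the residual h - h_hat that this objective fails to drive to zero.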
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transmission Line Dynamics Modeling For Power Electronics-Enabled Control in the Electric Power Systems</title>
<link href="https://hdl.handle.net/1721.1/163713" rel="alternate"/>
<author>
<name>Lawson, Riley E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163713</id>
<updated>2025-11-18T06:27:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transmission Line Dynamics Modeling For Power Electronics-Enabled Control in the Electric Power Systems
Lawson, Riley E.
In the analysis and operation of electric power systems, understanding the rates at which dynamic phenomena evolve is critical. Classically, power systems operate on multiple time scales, with slower mechanical dynamics from synchronous machines, faster electromechanical controls and protection, and very fast electrical dynamics from transmission networks. This time scale separation enables system modeling techniques that neglect certain component dynamics. However, in systems with significant penetration of power electronic devices and under fast time scale phenomena, the rates at which dynamics evolve become less separated, necessitating the modeling of all system dynamics. In large-scale systems, this becomes computationally challenging due to the high dimensionality of the interconnected system model. This work investigates the role transmission line dynamics play at very fast time scales in power systems. Theoretical results are presented to analyze which transmission line dynamics contribute significantly to power system dynamics, allowing for the intelligent incorporation of transmission line dynamics into computationally tractable models. For the first time, control co-design techniques are demonstrated algorithmically to design fast power electronics-enabled control that stabilizes unstable dynamics in electric power systems. This technique allows the iterative design of controls that yield stable interconnected systems. Finally, the impact of transmission line modeling on the design of protection at fast time scales is analyzed. This work presents techniques to protect against short circuits in response to load disconnections, and introduces DC circuit breaker configurations to force current commutation. Modern power system operators possess the technology to implement fast control of these dynamics; however, lacking adequate guidance on how to model and prepare for them, they instead rely on conventional, overly conservative control schemes. This work aims to bridge that gap by presenting methodologies for incorporating these dynamics into next-generation system models and for designing control and protection that mitigate the risks these fast dynamics pose.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Foundation Models for Protein Phenotype Prediction</title>
<link href="https://hdl.handle.net/1721.1/163712" rel="alternate"/>
<author>
<name>Calef, Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/163712</id>
<updated>2025-11-18T06:27:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Foundation Models for Protein Phenotype Prediction
Calef, Robert
Understanding the roles of human proteins remains a major challenge, with approximately 20% of human proteins lacking known functions and more than 40% missing context-specific functional insights. Even well-annotated proteins are often poorly characterized in diverse biological contexts, disease states, and perturbations. We present ProCyon, a foundation model for modeling, generating, and predicting protein phenotypes across five interrelated knowledge domains: molecular functions, therapeutic mechanisms, disease associations, functional protein domains, and molecular interactions. To support this, we created ProCyon-Instruct, a dataset of 33 million protein phenotype instructions, representing a comprehensive resource for multiscale protein phenotypes. By co-training a large language model with multimodal molecular encoders, ProCyon integrates phenotypic and protein data. A novel architecture and instruction tuning strategy allow ProCyon to process arbitrarily interleaved protein-and-phenotype inputs, achieve zero-shot task transfer, and generate free-form text phenotypes interleaved with retrieved protein sequence, structure, and drug modalities in a single unified model.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Functionalization of CNFET arrays for chemical sensing</title>
<link href="https://hdl.handle.net/1721.1/163711" rel="alternate"/>
<author>
<name>Song, Jaekang</name>
</author>
<id>https://hdl.handle.net/1721.1/163711</id>
<updated>2025-11-18T06:26:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Functionalization of CNFET arrays for chemical sensing
Song, Jaekang
Practical deployment of gas sensors for general-purpose applications requires integrated chips that operate at room temperature. However, real-world implementation has been limited by challenges such as the integration of highly sensitive and selective sensors, as well as insufficient statistical validation. In this work, we present an integrated gas sensor array comprising 2048 carbon nanotube field-effect transistors (CNFETs), functionalized with conductive metal-organic frameworks (cMOFs) and metal nanoparticles. Our functionalization approach enhances sensor responses by up to two orders of magnitude and enables on-chip pattern generation. Furthermore, the large number of redundant sensors allows for statistically significant measurements. The improved sensitivity is attributed to increased Schottky barrier modulation. We also demonstrate the chip’s capability to classify bacteria and yeast based on the gas mixtures emitted from cultures grown on agar plates. This work highlights the potential of integrated gas sensors as a practical, rapid, and cost-effective approach for general gas sensing applications, including biomedical applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-efficiency, low-loss Floquet Josephson Traveling Wave Parametric Amplifier</title>
<link href="https://hdl.handle.net/1721.1/163709" rel="alternate"/>
<author>
<name>Wang, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/163709</id>
<updated>2025-11-18T06:26:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">High-efficiency, low-loss Floquet Josephson Traveling&#13;
Wave Parametric Amplifier
Wang, Jennifer
Advancing error-corrected quantum computing and fundamental science necessitates quantum-limited amplifiers with near-ideal quantum efficiency and multiplexing capability. However, existing solutions achieve one at the expense of the other; for example, Josephson traveling wave parametric amplifiers (JTWPAs) are high-gain, broadband, and chip-based quantum amplifiers that conventionally incur a bandwidth-noise tradeoff. When operated at 20-dB gain and instantaneous bandwidths of a few GHz, JTWPAs typically reach near-quantum-limited intrinsic efficiencies of 70%–85% relative to that of an ideal phase-preserving quantum amplifier. This is due to information leakage to the sidebands of the JTWPA, which can be recovered by adiabatically transforming the input modes to Floquet modes of the system within the device. In this thesis, we experimentally demonstrate the first Floquet-mode traveling-wave parametric amplifier (Floquet TWPA). Fabricated in a superconducting qubit process, this Floquet TWPA achieves minimal dissipation, quantum-limited noise performance, and broadband operation. Our device exhibits &gt;20-dB amplification over a 3-GHz instantaneous bandwidth, &lt;0.5-dB average in-band insertion loss, and the highest-reported intrinsic quantum efficiency for a TWPA of 92.1±7.6%, relative to an ideal phase-preserving amplifier. When measuring a superconducting qubit, our Floquet TWPA enables a system measurement efficiency of 65.1 ± 5.8%, the highest reported, to the best of our knowledge, in a superconducting qubit readout experiment utilizing phase-preserving amplifiers. Finally, we discuss the noise limitations of our current experimental setup, as well as impedance matching strategies that will enable us to push towards ideal JTWPA performance. These general-purpose Floquet TWPAs are suitable for fast, high-fidelity multiplexed readout in large-scale quantum systems and future monolithic integration with quantum processors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Scalable Robot Learning without Physical Robots</title>
<link href="https://hdl.handle.net/1721.1/163708" rel="alternate"/>
<author>
<name>Park, Younghyo</name>
</author>
<id>https://hdl.handle.net/1721.1/163708</id>
<updated>2025-11-18T06:26:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Scalable Robot Learning without Physical Robots
Park, Younghyo
The development of generalist robots—capable of performing a wide range of tasks in diverse environments—requires large-scale datasets of robot interactions. Unlike language or vision domains, where data can be passively collected at scale, robotic data collection remains costly, labor-intensive, and constrained by physical hardware. This thesis explores two complementary directions to overcome this challenge. First, we examine the limitations of training robots from scratch using reinforcement learning (RL). While RL has achieved promising results in simulation, its scalability is hindered by a largely overlooked bottleneck: environment shaping. Designing suitable rewards, action and observation spaces, and task dynamics typically requires extensive human intervention. We formalize environment shaping as a critical optimization problem and introduce tools and benchmarks to study and eventually automate this process, a necessary step toward general-purpose RL. Second, we introduce an alternative paradigm for robot data collection that does not rely on real-world robots. Using the Apple Vision Pro, we develop DART, an augmented reality (AR) teleoperation platform that streams human hand motions to cloud-hosted robot simulations. This setup enables scalable, low-latency collection of high-quality robot demonstrations without the overhead of physical setup or maintenance. Our user studies show that DART more than doubles data collection throughput while reducing operator fatigue, and policies trained in simulation using this data successfully transfer to the real world. Together, these contributions address two key bottlenecks in robot learning: the human effort required for RL environment design, and the dependence on physical robots for data. They lay the groundwork for scalable, accessible approaches to training generalist robot models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Reconfigurable, Distributed-Memory Accelerator for Sparse Applications</title>
<link href="https://hdl.handle.net/1721.1/163707" rel="alternate"/>
<author>
<name>Golden, Courtney K.</name>
</author>
<id>https://hdl.handle.net/1721.1/163707</id>
<updated>2025-11-18T06:27:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Reconfigurable, Distributed-Memory Accelerator for Sparse Applications
Golden, Courtney K.
Iterative sparse matrix computations lie at the heart of many scientific computing and graph analytics algorithms. On conventional systems, their irregular memory accesses and low arithmetic intensity create challenging memory bandwidth bottlenecks. To overcome such bottlenecks, distributed-SRAM architectures use tiled arrays of high-bandwidth local storage to achieve very high aggregate memory bandwidth. However, current distributed-SRAM architectures suffer from either poor programmability due to over-specialization or poor compute performance due to inefficient general-purpose hardware. This thesis proposes Quartz, a new architecture that uses short dataflow tasks and reconfigurable compute in a distributed-SRAM system to deliver both high performance and high programmability. Unlike traditional sparse CGRAs or on-die reconfigurable engines, Quartz allows reconfigurable compute to be highly utilized and scaled by (1) providing high memory bandwidth to each processing element and (2) introducing a task-level dataflow execution model that fits this new setting. Our execution model dynamically reconfigures tile hardware based on inter-tile messages to execute tasks on local data with fine-grained data partitioning across tiles. To make execution efficient, we explore novel data partitioning techniques that use graph and hypergraph partitioning to minimize network traffic and balance load. This is especially challenging for computations where one operand’s sparsity pattern (i.e., distribution of nonzeros) exhibits dynamic behavior across iterations, and we are the first to provide techniques to address this case. To ensure programmability, we show how a wide range of computations (expressed in an extended version of tensor algebra’s Einsum notation) and flexible data distributions can be systematically captured in small tasks for execution on Quartz. We evaluate Quartz in simulation, using an 8-chiplet design with 2,048 tiles and 824 MB of SRAM per chiplet, running six different iterative sparse applications from scientific computing and graph analytics. Quartz’s architecture, data partitioning techniques, and programming model together achieve gmean 26.2× speedup over the prior state-of-the-art programmable distributed-SRAM architecture.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dipole Contact Engineering for Field-Effect Transistors Based on Two-Dimensional Materials</title>
<link href="https://hdl.handle.net/1721.1/163706" rel="alternate"/>
<author>
<name>Gupta, Ayush Sagar</name>
</author>
<id>https://hdl.handle.net/1721.1/163706</id>
<updated>2025-11-18T06:27:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Dipole Contact Engineering for Field-Effect Transistors&#13;
Based on Two-Dimensional Materials
Gupta, Ayush Sagar
In the next several years and decades, the expanded use of artificial intelligence and edge computing will demand more powerful and energy-efficient electronics. Two-dimensional (2D) semiconductors, and in particular transition metal dichalcogenides (TMDs) such as molybdenum disulfide (MoS₂), are promising candidates for future field-effect transistors. TMDs can enable aggressive lateral and vertical device scaling, and they can add computing power density and new memory and sensing capabilities via 3D integration. However, several key challenges remain before 2D-channel transistors become commercially viable, including large contact resistances at the source and drain due to the van der Waals surface of 2D materials and the Fermi level pinning effect. A variety of methods have been explored to make ohmic contacts to MoS₂, the most promising of which so far is to use semimetals such as Bi and Sb; however, these materials suffer from thermal instability. This thesis addresses these challenges by (1) exploring the ultimate limit of contact metal workfunction scaling to better understand the metal-MoS₂ interface, and (2) introducing a new method of reducing contact resistance to 2D materials by inserting dipole layers at the contact interface. Initial work on ultralow-workfunction (ULWF) metal deposition on MoS₂ and subsequent device fabrication is presented, though further study is required to mitigate effects from deposition equipment and the reactive nature of these metals. In parallel, the Janus TMD MoSSe is explored as an example system for dipole contacts: extensive material characterization of MoSSe is performed, and the effect of a dipole layer on the contact properties of FETs is established. Together, these results are a significant step towards solving one of the major hurdles for the commercial introduction of 2D-channel transistors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Specialization of Vision Representations with Personalized Synthetic Data</title>
<link href="https://hdl.handle.net/1721.1/163705" rel="alternate"/>
<author>
<name>Chae, Nayoung (Julia)</name>
</author>
<id>https://hdl.handle.net/1721.1/163705</id>
<updated>2025-11-18T06:26:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Specialization of Vision Representations with Personalized&#13;
Synthetic Data
Chae, Nayoung (Julia)
Modern vision models excel at general-purpose downstream tasks. It is unclear, however, how they may be used for personalized vision tasks, which are both fine-grained and data-scarce. Recent works have successfully applied synthetic data to general-purpose representation learning, while advances in Text-to-Image (T2I) diffusion models have enabled the generation of personalized images from just a few real examples. Here, we explore a potential connection between these ideas, and formalize the challenge of using personalized synthetic data to learn personalized representations, which encode knowledge about an object of interest and may be flexibly applied to any downstream task relating to the target object. We introduce an evaluation suite for this challenge, including reformulations of two existing datasets and a novel dataset explicitly constructed for this purpose, and propose a contrastive learning approach that makes creative use of image generators. We show that our method improves personalized representation learning for diverse downstream tasks, from recognition to segmentation, and analyze characteristics of image generation approaches that are key to this gain.
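
As a minimal illustration of the contrastive recipe (an illustrative Python sketch, not the thesis code; the embeddings, temperature, and the pairing of generated positives with a real anchor are assumptions):

    import numpy as np

    def multi_positive_info_nce(anchor, positives, negatives, temp=0.1):
        # anchor:    embedding of a real photo of the target object (unit norm)
        # positives: embeddings of generated images of the same object
        # negatives: embeddings of unrelated images
        pos_sims = positives @ anchor / temp
        neg_sims = negatives @ anchor / temp
        logits = np.concatenate([pos_sims, neg_sims])
        log_denom = np.log(np.exp(logits).sum())
        # Average the -log softmax over every generated positive, pulling
        # synthetic views of the object toward the real anchor embedding.
        return float(np.mean(log_denom - pos_sims))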
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Microservice Design Parameters</title>
<link href="https://hdl.handle.net/1721.1/163704" rel="alternate"/>
<author>
<name>Chen, Qihang</name>
</author>
<id>https://hdl.handle.net/1721.1/163704</id>
<updated>2025-11-18T06:27:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Microservice Design Parameters
Chen, Qihang
Production-level cloud services are increasingly deployed as microservices. An important question is how, given the application logic, to design an effective microservice architecture. Existing studies have underscored the importance of microservice cohesiveness and coupling, using these metrics to drive automatic design optimizations. However, they have not accounted for the potential impact that such design changes may have on overall system performance, an impact confirmed by our case study. In this work, we present a system that can automatically identify microservice designs that are well-balanced across performance, coupling, and cohesiveness to meet cloud providers’ requirements. The system uses a multi-round dynamic programming approach: it selectively identifies promising design candidates, generates the corresponding microservice code, and measures and compares the results to determine the optimal design. The designs produced by our system typically achieve over 20% throughput improvement under the same QoS with less than a 10% increase in average LCOM, and often outperform the original benchmark architectures across all evaluated metrics.
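
For readers unfamiliar with the cohesion metric reported above, the simple LCOM1 variant can be sketched as follows (the system may use a different LCOM formulation):

    from itertools import combinations

    def lcom1(method_fields):
        # method_fields: dict mapping each service method to the set of
        # data entities it touches. LCOM1 counts method pairs sharing no
        # data minus pairs sharing some; higher means less cohesive.
        disjoint = shared = 0
        for a, b in combinations(method_fields.values(), 2):
            if a & b:
                shared += 1
            else:
                disjoint += 1
        return max(disjoint - shared, 0)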
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>City in the River: Regeneration of the Santa Catarina River as an Intermittent Urban River</title>
<link href="https://hdl.handle.net/1721.1/163703" rel="alternate"/>
<author>
<name>Martínez Chapa, Daniela</name>
</author>
<id>https://hdl.handle.net/1721.1/163703</id>
<updated>2025-11-18T06:27:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">City in the River: Regeneration of the Santa Catarina River as an Intermittent Urban River
Martínez Chapa, Daniela
Full of dichotomies, the Santa Catarina River is both dry and wet, present but forgotten, central yet disconnected, valued yet feared. How should an intermittent river in a dense urban context be regenerated? This thesis reimagines its ecological, hydrological, and public potential. Set in Monterrey, Mexico, this research addresses the urgent need to rethink water management in the face of the intensifying climate crisis through different urban systems and regeneration strategies within the river basin. Focusing on the Santa Catarina River, long dismissed as a plot, void, or threat, this work proposes that an intermittent river might be re-understood not as an absence of activity or function but as a space of seasonal abundance, ecological possibility, and urban interaction. Historically engineered for control, the river has been used as a flood channel and as a site for markets, sports complexes, transportation corridors, and more. However, rarely has it been seen, treated, or protected as a river. Through the development of a pilot zone, this research suggests a replicable framework of regenerative strategies to slow down, retain, and absorb water flows, supporting both dry and wet season dynamics. These include restoring riparian ecologies, reintroducing soft edges, enabling groundwater recharge, and designing permeable, public, and accessible urban interventions that reconnect the city with the riverbed. This thesis is not a fixed proposal but a living toolkit, an adaptable model to be tested, expanded, and reimagined in the pilot as time and nature take over. At stake is not only the river’s future but also the city’s capacity to shift from resistance to relation, becoming one with it, becoming a city in the river.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Banjiha Stories (2025)</title>
<link href="https://hdl.handle.net/1721.1/163702" rel="alternate"/>
<author>
<name>Park, Habin</name>
</author>
<id>https://hdl.handle.net/1721.1/163702</id>
<updated>2025-11-18T06:27:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Banjiha Stories (2025)
Park, Habin
Banjiha are everywhere in Seoul. You don’t always see them—tucked below eye level, half-hidden underground—but they’re there. First built as military bunkers after the Korean War, later turned into last-resort housing, banjiha have become symbols of urban failure—spaces of neglect, flooding disasters, a problem to be erased. Both media portrayals and policy responses have advocated for their disappearance. But does removal truly protect the people who call these spaces home? This thesis moves beyond the idea that banjiha are simply failures of the city. Through three homes, three lives, it traces how these spaces are shaped, not only by policies and architecture but by the people who inhabit them. A home vulnerable to flooding, where protections exist—but not with the greatest risk. A place worn by time, held together by quiet repairs. A financial foothold in a city where affordable housing is disappearing. A space of temporary sacrifice. A shelter to return to, again and again. This is not just a story of risk or resilience, neglect or demolition. It is a story of how people live; how they adapt, negotiate, and make do in spaces that were never designed with them in mind. Rather than asking how to erase banjiha, this thesis asks: What can we learn by noticing them? What would it mean to shift the conversation—from removal to recognition, from assumption to understanding? To see these homes is to recognize not just their constraints, but the small interventions that could reshape them: a door that opens both ways so no one is trapped, policies that hold upstairs owners accountable for leaks, materials layered to prevent mold rather than mask it. Not grand reinventions, but deliberate shifts—openings for a different way forward. But before deciding what must change, we must first learn to see.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Inference for Inference Time Scaling of Language Models</title>
<link href="https://hdl.handle.net/1721.1/163701" rel="alternate"/>
<author>
<name>Puri, Isha</name>
</author>
<id>https://hdl.handle.net/1721.1/163701</id>
<updated>2025-11-18T06:26:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Probabilistic Inference for Inference Time Scaling of Language Models
Puri, Isha
Large language models (LLMs) have achieved significant performance gains via scaling up model sizes and/or data. However, recent evidence suggests diminishing returns from such approaches, motivating a pivot to scaling test-time compute. Existing deterministic inference-time scaling methods, usually with reward models, cast the task as a search problem, but suffer from a key limitation: early pruning. Due to inherently imperfect reward models, promising trajectories may be discarded prematurely, leading to suboptimal performance. We propose a novel inference-time scaling approach by adapting particle-based Monte Carlo methods. Our method maintains a diverse set of candidates and robustly balances exploration and exploitation. Our empirical evaluation demonstrates that our particle filtering methods achieve a 4–16x better scaling rate than deterministic search counterparts on various challenging mathematical and more general reasoning tasks. Using our approach, we show that Qwen2.5-Math-1.5B-Instruct surpasses GPT-4o accuracy in only 4 rollouts, while Qwen2.5-Math-7B-Instruct scales to o1 level accuracy in only 32 rollouts. Our work not only presents an effective method for inference-time scaling, but also connects the rich literature in probabilistic inference with inference-time scaling of LLMs to develop more robust algorithms in future work. Code, videos, and further information available at probabilistic-inference-scaling.github.io/
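
To make the particle-based idea concrete, here is a minimal sketch (not the thesis implementation; generate_step and reward are hypothetical stand-ins for one decoding step and an imperfect reward model):

    import random

    def particle_filter(generate_step, reward, n_particles=8, n_steps=4):
        particles = [[] for _ in range(n_particles)]
        for _ in range(n_steps):
            # Propagate: extend every candidate instead of pruning early.
            particles = [generate_step(t) for t in particles]
            # Weight trajectories by the (imperfect) reward model.
            weights = [reward(t) for t in particles]
            total = sum(weights) or 1.0
            # Resample in proportion to weight: low-reward trajectories
            # may still survive, which softens premature pruning.
            particles = random.choices(particles,
                                       weights=[w / total for w in weights],
                                       k=n_particles)
        return max(particles, key=reward)

Resampling replaces the hard top-k pruning of deterministic search, which is exactly the failure mode attributed above to imperfect reward models.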
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward Systematic Integration of Inverter-Based Resources in Electricity Markets</title>
<link href="https://hdl.handle.net/1721.1/163700" rel="alternate"/>
<author>
<name>Pierre, Jordina</name>
</author>
<id>https://hdl.handle.net/1721.1/163700</id>
<updated>2025-11-18T06:26:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Toward Systematic Integration of Inverter-Based Resources in Electricity Markets
Pierre, Jordina
This thesis introduces a multi-layer control architecture for inverter-based resources (IBRs), separating fast local feedback control from slower self-dispatch and system-level market coordination. Existing integration methods for IBRs limit their control flexibility and completely restrict their market participation potential. Two common practices include treatment of IBRs as negative loads and setting a fixed power factor during grid commissioning. Modeling IBRs as negative loads excludes them from dispatch coordination in electricity markets, significantly limiting the incentive to contribute to grid reliability and flexibility. Likewise, a fixed power factor prevents the IBR from providing voltage support through reactive power absorption/injection. With a fixed power factor, constant real and reactive power limits are imposed on the inverter, even during voltage transients, ignoring the fact that an inverter’s available capacity can vary significantly due to internal current constraints and the power provided by the renewable energy source. To address the need for reactive power adjustment in IBRs and pave the way for their active participation in electricity markets, this work presents a coordinated control approach that enables IBRs to transition into active, self-dispatching participants. In the first layer, this thesis proposes a hybrid PLL plus Q-V droop-based controller, which governs millisecond-scale autonomous behavior, including low-voltage ride-through and real-time power adjustment based on voltage deviations at the point of common coupling and irradiance fluctuations from the renewable energy source, in this case solar. Given the implementation of the first layer and predicted irradiance, Layer 2, to be implemented in future work, uses a model predictive controller to provide bid functions for both real and reactive power while keeping voltage at the point of common coupling within its limits. Finally, the third layer performs centralized market clearing through a security-constrained optimization by the system operator. By advocating for self-dispatched, constraint-aware control, this thesis challenges the prevailing passive modeling paradigm and offers a structured, physics-informed alternative. It demonstrates how IBRs can evolve into reliable, market-integrated assets, enabling smarter renewable integration and a more resilient, cost-effective, and decarbonized grid.
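
As a concrete illustration of the first-layer behavior (a hedged per-unit sketch, not the thesis controller; the gain and limits are placeholders): a Q-V droop law commands reactive power in proportion to the voltage deviation at the point of common coupling, clipped to the capacity the inverter actually has left after delivering real power.

    def qv_droop(v_pcc, p_out, s_rating, v_ref=1.0, k_droop=5.0):
        # Droop law: inject Q when the voltage sags, absorb when it swells.
        q_cmd = k_droop * (v_ref - v_pcc)
        # Headroom varies with delivered real power, unlike a fixed
        # power factor, which freezes the real/reactive split.
        q_max = max(s_rating**2 - p_out**2, 0.0) ** 0.5
        return max(-q_max, min(q_cmd, q_max))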
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Approximations to worst-case data dropping: unmasking failure modes</title>
<link href="https://hdl.handle.net/1721.1/163699" rel="alternate"/>
<author>
<name>Huang, Jenny Yijian</name>
</author>
<id>https://hdl.handle.net/1721.1/163699</id>
<updated>2025-11-18T06:28:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Approximations to worst-case data dropping: unmasking failure modes
Huang, Jenny Yijian
A data analyst might worry about generalization if dropping a very small fraction of data points from a study could change its substantive conclusions. Checking this non-robustness directly poses a combinatorial optimization problem and is intractable even for simple models and moderate data sizes. Recently, various authors have proposed a diverse set of approximations to detect this non-robustness. In the present work, we show that, even in a setting as simple as ordinary least squares (OLS) linear regression, many of these approximations can fail to detect (true) non-robustness in realistic data arrangements. We focus on OLS in the present work due to its widespread use and because some approximations work only for OLS. Of the approximations that do not fail our tests, we find not only that a simple recursive greedy algorithm is the most conceptually straightforward but also that it can be orders of magnitude faster to run than the others.
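
A sketch of the recursive greedy idea (a naive illustrative version; a fast implementation would update the fit incrementally rather than refitting from scratch): each round drops the single observation whose removal moves the coefficient of interest the most.

    import numpy as np

    def greedy_drop(X, y, coef_idx=0, max_drops=5):
        keep = np.arange(len(y))
        dropped = []
        for _ in range(max_drops):
            base = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0][coef_idx]
            best_j, best_coef = None, None
            for j in range(len(keep)):
                sub = np.delete(keep, j)
                c = np.linalg.lstsq(X[sub], y[sub], rcond=None)[0][coef_idx]
                # Keep the single deletion that moves the coefficient most.
                if best_coef is None or abs(c - base) > abs(best_coef - base):
                    best_j, best_coef = j, c
            dropped.append(int(keep[best_j]))
            keep = np.delete(keep, best_j)
        return dropped

If the sign or significance of the coefficient flips within a handful of rounds, the analysis is flagged as non-robust.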
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Highly Scaled p-GaN-gate HEMTs for Low Voltage Power Electronics</title>
<link href="https://hdl.handle.net/1721.1/163698" rel="alternate"/>
<author>
<name>Darmawi-Iskandar, Patrick</name>
</author>
<id>https://hdl.handle.net/1721.1/163698</id>
<updated>2025-11-18T06:28:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Highly Scaled p-GaN-gate HEMTs for Low Voltage Power Electronics
Darmawi-Iskandar, Patrick
Rising global energy demands, driven by the advent of artificial intelligence (AI), cloud computing, and Internet of Things (IoT) devices, underscore the need for more efficient power electronics. In particular, power switches based on wide bandgap semiconductors such as gallium nitride (GaN) have emerged as promising alternatives to traditional silicon devices for low-voltage (10-100 V) applications. This work investigates the design, fabrication, and scaling of p-GaN-gate high-electron-mobility transistors (HEMTs). A p-GaN-gate epitaxial structure was developed with considerations for short channel effects. A self-aligned, gate-first process employing tungsten metallization was implemented to enable gate lengths as small as 100 nm. Device scaling was studied systematically, revealing the importance of gate aspect ratio and gate-to-drain spacing in managing short channel effects and maintaining breakdown voltage. Electrical characterization showed strong device performance, although contact resistance accounted for a substantial portion of total on-resistance. To address this, a modified fabrication approach incorporating regrown contacts was introduced, resulting in reduced contact resistance and improved overall device characteristics. The combined results highlight practical strategies for enhancing the performance and scalability of p-GaN-gate HEMTs for next-generation low-voltage power electronics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modelling Diarists: Diary-writing and Moral Anxieties in China, 1918–62</title>
<link href="https://hdl.handle.net/1721.1/163697" rel="alternate"/>
<author>
<name>Li, Tien Yi</name>
</author>
<id>https://hdl.handle.net/1721.1/163697</id>
<updated>2025-11-18T06:28:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modelling Diarists: Diary-writing and Moral Anxieties in China, 1918–62
Li, Tien Yi
This thesis is a history of diary-writing in China from 1918 through 1961. Diaries are an increasingly popular but still inadequately understood primary source for historians of modern China. Previous scholars have suggested that, in the twentieth century, diary-writing became increasingly popular due to Japanese and Soviet influences, the increasing availability of manufactured blank diaries, and ruling governments that used diary-writing as a way of enforcing ideological conformity. This thesis traces an alternative history, starting from the popularization of published diaries in Shanghai in the long 1920s; to diaries’ emergence as a recognizable genre that could be discussed and theorized; to the moment the genre gained its reputation as a kind of self-expression par excellence; to its widespread inclusion in school curricula; to loosely connected attempts on the part of educators to delimit a normative way of diary-writing that, ironically, increasingly regimented self-expression. In doing so, this thesis contributes to the existing historiography by offering three correctives: I argue that 1) the initial proliferation of diaries was economically––not ideologically––motivated, 2) the popularization of diary-writing was not a concerted effort orchestrated by China’s political leaders but at best a loosely connected effort led by a middling class of educators, textbook writers, and intellectuals, and 3) diary-writing was regimented not only by communist ideology in the Maoist era but also by shifting moral principles and anxieties throughout the twentieth century. All in all, this thesis demonstrates the value of diaries for studying moral knowledge, epistemologies, and anxieties at the grassroots in midcentury China.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parallel Batch-Dynamic Graph Algorithms: Coreness Decomposition and Spanners</title>
<link href="https://hdl.handle.net/1721.1/163695" rel="alternate"/>
<author>
<name>Koo, Jaehyun</name>
</author>
<id>https://hdl.handle.net/1721.1/163695</id>
<updated>2025-11-18T06:27:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Parallel Batch-Dynamic Graph Algorithms: Coreness Decomposition and Spanners
Koo, Jaehyun
This thesis contributes to the burgeoning field of batch-dynamic parallel algorithms by presenting parallel batch-dynamic graph algorithms for coreness decomposition and spanners, as well as a number of other related problems. The first class of problems we consider involves approximating coreness decomposition and several closely related concepts, such as (subgraph) density estimation, arboricity estimation, and low out-degree orientations. These are extremely useful structures for organizing graphs based on their density. Our algorithms process any batch of edge insertions and deletions in polylogarithmic depth while using work that is linear in the batch size (up to logarithmic factors), in the worst case. The second class of problems we consider concerns graph spanners. Over the past two to three decades, graph sparsifications that approximately preserve key graph properties have become essential tools in algorithm design. In particular, spanners—reducing the number of edges while approximately preserving pairwise distances—have been widely studied. We present the first such algorithms for computing and maintaining spanners. These algorithms achieve near-optimal amortized runtime—processing each batch in polylogarithmic depth with work nearly linear in the batch size for any number of processors.
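
For context, the static quantity these batch-dynamic algorithms approximate can be computed by classic sequential peeling (a textbook sketch, not the parallel algorithm of the thesis):

    def coreness(adj):
        # adj: dict mapping each vertex to its set of neighbors.
        deg = {v: len(ns) for v, ns in adj.items()}
        removed, core, k = set(), {}, 0
        while len(removed) != len(adj):
            # Peel a vertex of minimum remaining degree.
            v = min((u for u in adj if u not in removed), key=deg.get)
            k = max(k, deg[v])  # coreness never decreases while peeling
            core[v] = k
            removed.add(v)
            for u in adj[v]:
                if u not in removed:
                    deg[u] -= 1
        return core

The inherently sequential dependence between peeling steps is what makes parallel, batch-dynamic maintenance of these structures nontrivial.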
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Sim-to-Real Gap for Athletic Loco-Manipulation</title>
<link href="https://hdl.handle.net/1721.1/163694" rel="alternate"/>
<author>
<name>Fey, Nolan</name>
</author>
<id>https://hdl.handle.net/1721.1/163694</id>
<updated>2025-11-18T06:27:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Bridging the Sim-to-Real Gap for Athletic Loco-Manipulation
Fey, Nolan
Achieving athletic loco-manipulation on robots requires moving beyond traditional tracking rewards—which simply guide the robot along a reference trajectory—to task rewards that drive truly dynamic, goal-oriented behaviors. Commands such as “throw the ball as far as you can” or “lift the weight as quickly as possible” compel the robot to exhibit the agility and power inherent in athletic performance. However, training solely with task rewards introduces two major challenges: these rewards are prone to exploitation (reward hacking), and the exploration process can lack sufficient direction. To address these issues, we propose a two-stage training pipeline. First, we introduce the Unsupervised Actuator Net (UAN), which leverages real-world data to bridge the sim-to-real gap for complex actuation mechanisms without requiring access to torque sensing. UAN mitigates reward hacking by ensuring that the learned behaviors remain robust and transferable. Second, we use a pre-training and fine-tuning strategy that leverages reference trajectories as initial hints to guide exploration. With these innovations, our robot athlete learns to lift, throw, and drag with remarkable fidelity from simulation to reality.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Wound Designates a Subject</title>
<link href="https://hdl.handle.net/1721.1/163693" rel="alternate"/>
<author>
<name>Lum, Luca E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163693</id>
<updated>2025-11-18T06:28:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Wound Designates a Subject
Lum, Luca E.
What haunts when haunting itself has been foreclosed? This thesis develops “ghostlessness” as a conceptual and aesthetic framework across my work in moving image, drawing, and writing. Ghostlessness refers to conditions that suppress haunting where it would otherwise emerge or be felt. Drawing from theoretical elaborations of hauntology, where the present is understood as structured by both suppressed pasts and unrealized futures, ghostlessness names the absence—or foreclosure—of that temporal disruption. It marks a contemporary condition in which systems oriented toward predictive governance and managed futurity preemptively neutralize rupture, sealing wounds before they can fester, reroute, or become sites of transformation. Through the works gathered here, I explore how ghostlessness functions not simply as absence but as affective and infrastructural suppression—rendering the spectral illegible, unaddressable, or unreal. Against this, my practice seeks to recapture the value of haunting in death-ridden, crisis-laden times where its presence is more prevalent than ever – hence its management, erasure, and suppression: ghostlessness.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stylizing 3D Models With Generative AI for Fabrication</title>
<link href="https://hdl.handle.net/1721.1/163692" rel="alternate"/>
<author>
<name>Tejedor, Leandra</name>
</author>
<id>https://hdl.handle.net/1721.1/163692</id>
<updated>2025-11-18T06:28:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Stylizing 3D Models With Generative AI for Fabrication
Tejedor, Leandra
This thesis presents two novel approaches for modifying 3D models using generative AI for stylization while ensuring the resulting models preserve the properties required for fabrication. The first method, Style2Fab, separates functional and stylistic sections of 3D models to enable targeted modifications that preserve the model's intended functionality. By distinguishing between these sections, Style2Fab allows for alterations that maintain the model's functional purpose while providing flexibility in its aesthetic design. This approach ensures that the modified models retain their original functionality after stylistic changes.

The second method, MechStyle, incorporates finite element analysis (FEA) into the generative modeling pipeline to maintain the structural integrity of the modified models. By analyzing changes in stress values during a simulated drop test at various stages of the stylization process, MechStyle restricts changes to those that preserve the model's structural viability. This ensures that the resulting models are both stylistically accurate to the user's desired results and structurally sound for 3D printing.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Limits of Recovering Planted Subgraphs</title>
<link href="https://hdl.handle.net/1721.1/163691" rel="alternate"/>
<author>
<name>Rajaraman, Amit</name>
</author>
<id>https://hdl.handle.net/1721.1/163691</id>
<updated>2025-11-18T06:28:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Limits of Recovering Planted Subgraphs
Rajaraman, Amit
Given an arbitrary subgraph H = Hₙ and p = pₙ ∈ (0, 1), the planted subgraph model is defined as follows. A statistician observes the union of the “signal,” which is a random “planted” copy H* of H, together with random noise in the form of an instance of an Erdős–Rényi graph G(n, p). Their goal is to then recover the planted H* from the observed graph. Our focus in this work is to understand the minimum mean squared error (MMSE), defined in terms of recovering the edges of H*, as a function of p and H, for large n. A recent paper [MNS⁺23] characterizes the graphs for which the limiting (as n grows) MMSE curve undergoes a sharp phase transition from 0 to 1 as p increases, a behavior known as the all-or-nothing phenomenon, up to a mild density assumption on H. However, their techniques fail to describe the MMSE curves for graphs that do not display such a sharp phase transition. In this paper, we provide a formula for the limiting MMSE curve for any graph H = Hₙ, up to the same mild density assumption. This curve is expressed in terms of a variational formula over pairs of subgraphs of H, and is inspired by the celebrated subgraph expectation thresholds from probabilistic combinatorics [KK07]. Furthermore, we give a polynomial-time description of the optimizers of this variational problem. This allows one to efficiently approximately compute the MMSE curve for any dense graph H when n is large. The proof relies on a novel graph decomposition of H as well as a new minimax theorem which may be of independent interest. Our results generalize to the setting of minimax rates of recovering arbitrary monotone boolean properties planted in random noise, where the statistician observes the union of a planted minimal element A ⊆ [N] of a monotone property and a random Ber(p)^⊗N vector. In this setting, we provide a variational formula inspired by the so-called “fractional” expectation threshold [Tal10], again describing the MMSE curve (in this case up to a multiplicative constant) for large n.
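
Under one standard normalization (a paraphrase; the thesis may fix constants differently), the curve in question is

    \mathrm{MMSE}_n(p) = \frac{\mathbb{E}\big[\lVert \mathbf{1}_{E(H^*)} - \mathbb{E}[\mathbf{1}_{E(H^*)} \mid G] \rVert_2^2\big]}{\mathbb{E}\big[\lVert \mathbf{1}_{E(H^*)} \rVert_2^2\big]},

which runs from 0 (the edges of H* are recovered essentially perfectly) to 1 (the observation is no better than the prior mean); the all-or-nothing phenomenon is a jump between these extremes at a critical p.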
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Routing in the CityMesh Decentralized Fallback Wireless Network</title>
<link href="https://hdl.handle.net/1721.1/163690" rel="alternate"/>
<author>
<name>Liu, Ziqian</name>
</author>
<id>https://hdl.handle.net/1721.1/163690</id>
<updated>2025-11-18T06:28:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Routing in the CityMesh Decentralized Fallback&#13;
Wireless Network
Liu, Ziqian
As modern communication systems increasingly rely on centralized network infrastructure, they become more vulnerable to disruptions caused by disasters, failures, or cyberattacks. To address this risk, CityMesh proposes a decentralized fallback wireless network that leverages existing Wi-Fi devices, such as access points (APs), in buildings to maintain essential connectivity during outages. However, achieving scalable and reliable message delivery in such a network, without introducing excessive overhead, poses significant challenges. This thesis presents a new routing protocol for CityMesh, designed to operate efficiently at city scale. We first identify the limitations of traditional shortest-path source routing in CityMesh’s context, including the use of unreliable links and overhead from redundant transmissions. To address these issues, we introduce a safer path selection metric that prioritizes link reliability, a waypoint-based routing compression scheme, and a conduit mechanism to increase robustness to local failures. Our protocol further supports compact routing tables through a grid-based addressing scheme, enabling constant-size packet headers and scalable routing decisions. Additionally, we propose a suppression strategy to reduce unnecessary transmissions both between and within buildings. Finally, to reconnect fragmented network segments, we develop a practical relay placement algorithm that leverages convex hull optimization and reuses global map knowledge, ensuring fast computation of relay points in feasible locations such as roads and bridges. Simulations across 20 global cities show that our routing protocol achieves up to 2× higher packet delivery rates and reduces transmission overhead by up to 28× compared to GPSR under high packet loss and realistic localization error. Routing table footprints sampled across 4 randomly selected cities average under 2 KB of memory per device. Our fast relay placement algorithm also demonstrates that only a small number of relays is needed to achieve full network connectivity for most cities, which validates CityMesh’s core premise that existing urban Wi-Fi infrastructure is sufficient to support a robust, scalable decentralized fallback network with minimal augmentation.
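
One way to realize a safer-path metric of the kind described (an illustrative sketch; the thesis's actual metric may differ): maximizing a path's end-to-end delivery probability becomes a shortest-path problem after taking negative logs of per-link reliabilities.

    import heapq, math

    def safest_path(links, src, dst):
        # links: dict node -> list of (neighbor, delivery_probability).
        # Maximizing a product of probabilities == minimizing the sum of
        # -log(p), which is nonnegative, so Dijkstra applies.
        dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d != dist.get(u):
                continue  # stale queue entry
            if u == dst:
                break
            for v, p in links.get(u, []):
                nd = d - math.log(p)
                if nd < dist.get(v, math.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        if dst not in dist:
            return None, 0.0  # no route survives
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return path[::-1], math.exp(-dist[dst])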
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GPU-accelerated Inference for Discrete Probabilistic Programs</title>
<link href="https://hdl.handle.net/1721.1/163689" rel="alternate"/>
<author>
<name>Ghavami, Matin</name>
</author>
<id>https://hdl.handle.net/1721.1/163689</id>
<updated>2025-11-18T06:28:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">GPU-accelerated Inference for Discrete Probabilistic Programs
Ghavami, Matin
This thesis presents a comprehensive approach to GPU-accelerated inference for discrete probabilistic programs. We make two key contributions: (1) a factor graph IR implemented in JAX that supports variable elimination and Gibbs sampling, and (2) a modeling DSL with a compiler that lowers programs to the factor graph IR. Our system enables significant performance optimizations through static analysis of the factor graph structure. Variable elimination is optimized by reduction to tensor contraction with optimized contraction paths, while Gibbs sampling is automatically parallelized through graph coloring techniques. Empirical evaluations on standard benchmarks demonstrate orders of magnitude performance improvements over existing systems, with the parallelized Gibbs sampler showing speed-ups of up to 144x on Bayesian networks and even greater improvements for models with regular graph topologies such as Ising models and hidden Markov models.
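
A minimal sketch of the key reduction behind the first contribution (illustrative, with numpy standing in for JAX, whose jax.numpy.einsum takes the same call): marginalizing a discrete factor graph is a sum-product tensor contraction, so elimination can be handed to an einsum engine with an optimized contraction path.

    import numpy as np

    # Three binary variables a, b, c with pairwise factors f(a,b), g(b,c).
    f = np.array([[0.9, 0.1], [0.2, 0.8]])   # factor over (a, b)
    g = np.array([[0.6, 0.4], [0.3, 0.7]])   # factor over (b, c)

    # Eliminating b and c is exactly a sum-product contraction.
    marginal_a = np.einsum("ab,bc->a", f, g)
    marginal_a /= marginal_a.sum()           # normalize to a distribution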
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Vernaculars of Our Networks: From The Cloud to a Plurality of Grassroots Digital Infrastructures</title>
<link href="https://hdl.handle.net/1721.1/163688" rel="alternate"/>
<author>
<name>Hernandez-Cornejo, Mark A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163688</id>
<updated>2025-11-18T06:27:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Vernaculars of Our Networks: From The Cloud to a Plurality of Grassroots Digital Infrastructures
Hernandez-Cornejo, Mark A.
This thesis is concerned with DIY "off-the-cloud" networks as socio-technical models that can reinscribe a community's organizational processes, identity, and culture. It questions how these networks can break away from corporate and extractive services of "the cloud" in order to achieve digital sovereignty as well as resist the hegemonic understanding of Western universal technology. Rather than grafting an outside network onto a community, how might the nodes of a network emerge from the cultural ontologies and local knowledge systems, creating a "vernacular cloud," with political, epistemic, and ontological implications? The social practice of what I call 'net/work' involves the facilitation of local digital territories that create a grassroots politics of "organic internets." In Chapter One, recent attempts to break from monopolized services like Google and Facebook are examined, providing insight into why these networks are formed and how they “de-link” from “the cloud.” Drawing from Walter Mignolo's understanding of "de-linking," the thesis argues that this process is a political project that is also epistemologically and economically non-western. Chapter Two examines the notion of 'community' in community networks through the lens of grassroots organizing such as mutual aid, delving into the care and maintenance required for system administration. Chapter Two builds on Geri Augusto's understanding of "re/trans" as a project that has developed new assemblages of knowledge and integrated them into different landscapes. It examines community networks from the Global South, where network nodes have the potential to be cosmo-ontological. Chapter Three provides examples of the principles outlined in Chapters One and Two from my work in pursuit of technical autonomy within an organization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensing Buildings: Environmental Impact of Sensor Technologies and Data Infrastructure in Buildings</title>
<link href="https://hdl.handle.net/1721.1/163687" rel="alternate"/>
<author>
<name>Lesina-Debiasi, Simon</name>
</author>
<id>https://hdl.handle.net/1721.1/163687</id>
<updated>2025-11-18T06:27:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Sensing Buildings: Environmental Impact of Sensor Technologies&#13;
and Data Infrastructure in Buildings
Lesina-Debiasi, Simon
Building operations and the construction sector are among the largest contributors to global carbon emissions and energy consumption. While novel construction materials and insulation offer lower embodied-carbon solutions, improved heating and cooling devices offer cost- and energy-effective building services. Above all, “smart” devices promise remote control, oversight, and optimization of building operations. With the rising implementation of AI solutions in every sector, it is important to see digital devices as an interface to the material machinery they are connected to. The way these systems are introduced as solutions to environmental problems leaves out their operational and infrastructural costs. From the mining operations that source rare earth minerals, to the pumping of oil for polymer coatings, to the chemical baths that separate metal from ore, all the way to the hard drives in server rigs that are cooled with water and driven by electricity, the cloud is nothing but materiality and resources. When evaluating building operations and construction techniques for sustainability and environmental impact, connected services such as data networks and optimizations that rely on large server infrastructures and cloud computing are not part of the scope. This thesis reveals the missing components of energy evaluations of “smart” devices within the walls, floors, windows, doors, and roofs of our buildings, to create a framework through which building efficiency and sustainability can be reconsidered. Through historical research, literature reviews, and experiments, this work sheds light on the environmental impact of the data infrastructure to which our buildings are connected. The work presented in this thesis does not claim to be comprehensive nor to solve the problem of optimizing buildings for energy efficiency. Instead, the goal is to build upon existing and established research on data infrastructure, smart technology, and climate research, showing that, while current efforts might improve the efficiency of a building on-site, the energy consumed off-site must also be taken into account.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonization strategies for North American urban landscapes: Evaluating pavements and vegetation across design typologies</title>
<link href="https://hdl.handle.net/1721.1/163686" rel="alternate"/>
<author>
<name>Ramirez Cuebas, Adriana</name>
</author>
<id>https://hdl.handle.net/1721.1/163686</id>
<updated>2025-11-18T06:27:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decarbonization strategies for North American urban landscapes:                                   &#13;
Evaluating pavements and vegetation across design typologies
Ramirez Cuebas, Adriana
Urban landscapes are increasingly recognized as critical to climate mitigation, yet remain underrepresented in carbon accounting frameworks relative to buildings and infrastructure. This thesis advances landscape carbon assessment by introducing a typology-based Life Cycle Assessment (LCA) framework for landscape architecture.
The framework integrates anthropogenic emissions and natural carbon dynamics while addressing uncertainty. It proceeds through three layers of analysis: 1) developing landscape system and project categories for carbon footprint benchmarking; 2) benchmarking the performance of the proposed landscape systems and urban typologies; and 3) assessing the mitigation potential of decarbonization strategies across systems and project types.
Concrete pavers on reinforced concrete slabs and asphalt pavements (78 to 104 kgCO₂e/m²) are the most carbon-intensive in the production-to-construction stage. Turfgrass and shrubs (-21 to 42 kgCO₂e/m² and -35 to 258 kgCO₂e/m², respectively) show wide variability, functioning as sources or sinks depending on species mix, maintenance, and flux magnitudes, underscoring the need for species-specific, ecologically dynamic modeling. Canopy systems act as consistent carbon sinks (-611 to -388 kgCO₂e/m² over 50 years) despite significant emissions from transportation and structural soil.
Landscape systems were used to benchmark four urban typologies—streetscapes, plazas, courtyards, and urban parks. Their 50-year carbon footprints range from –80 to 21 kgCO₂e/m² in urban parks, –13 to 63 in courtyards, 22 to 79 in plazas, and 3 to 80 in streetscapes. Applying decarbonization strategies allows every typology to achieve net carbon sink status at the high bound. Urban parks achieve neutrality immediately post-construction, courtyards in 13 years, plazas in 26 years, and streetscapes by year 33. At higher emission estimates, urban parks and courtyards deepen their carbon sink performance, plazas cross into net sink territory, and streetscapes approach neutrality. The detailed findings highlight the influence of planting density, maintenance regimes, and land cover composition.
By structuring assessment around land covers and urban typologies, this thesis delivers a transferable carbon accounting framework aligned with design practice, offering actionable insights for embedding climate accountability into landscape architecture and public policy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulation and Design of Quantum Processors for Low‑Overhead Quantum Error Correction</title>
<link href="https://hdl.handle.net/1721.1/163685" rel="alternate"/>
<author>
<name>Pahl, David</name>
</author>
<id>https://hdl.handle.net/1721.1/163685</id>
<updated>2025-11-18T06:27:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Simulation and Design of Quantum Processors for Low‑Overhead Quantum Error Correction
Pahl, David
This thesis investigates the simulation and design of the hardware architecture required for large‑scale quantum error correction (QEC). Specifically, we design microwave circuits for fast and high‑fidelity readout and devise a long‑range coupler (LRC) that spans five qubit lattice sites, suitable for low‑overhead quantum low‑density parity‑check (qLDPC) codes [1]. We present a prototypical nine‑qubit qLDPC code incorporating two long‑range couplers and optimized readout circuits, achieving state‑of‑the‑art readout fidelities of up to 99.63% in 56 ns and demonstrating strong, well‑targeted couplings mediated by the LRC. Our simulations employ an efficient microwave abstraction based on ABCD transfer matrices, modeling complete qubit devices as networks of circuit elements. We use this formalism to develop a closed‑loop optimization algorithm that determines optimal readout parameters in seconds. The ABCD framework also accurately captures the multi‑mode behavior of the LRC, offering a valuable tool for developing large‑scale, low‑overhead QEC devices.
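
The abstraction rests on the standard two-port cascade rule (textbook microwave theory, not a result of the thesis): a lossless transmission-line segment of characteristic impedance Z_0 and electrical length \beta\ell has

    \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} \cos\beta\ell & j Z_0 \sin\beta\ell \\ j Z_0^{-1} \sin\beta\ell & \cos\beta\ell \end{pmatrix},

and a complete device is modeled by multiplying the ABCD matrices of its constituent elements in cascade order, which is what makes the seconds-scale readout optimization described above feasible.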
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cost-Based Optimization for Semantic Operator Systems</title>
<link href="https://hdl.handle.net/1721.1/163684" rel="alternate"/>
<author>
<name>Russo, Matthew D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163684</id>
<updated>2025-11-18T06:27:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Cost-Based Optimization for Semantic Operator Systems
Russo, Matthew D.
Recently, AI developers have turned to modular AI systems in order to achieve state-of-the-art performance on challenging benchmarks and industry problems. New programming frameworks have enabled developers to build these systems by composing them out of semantic operators—i.e., LLM-powered maps, filters, joins, aggregations, etc.—inspired by relational operators from data management systems. While these systems of semantic operators can achieve strong performance on benchmarks, they can be difficult to optimize. For example, an optimizer may need to determine which model, prompting strategy, and retrieval mechanism to use for each operator. Existing optimizers are limited in the number of optimizations they can apply, and most (if not all) cannot optimize system quality, cost, or latency subject to constraint(s) on the other dimensions. In this thesis, we build an extensible, cost-based optimizer called Abacus, which searches for the best implementation of a semantic operator system given a (possibly constrained) optimization objective. The optimizer estimates operator performance by leveraging a minimal set of training examples and, if available, prior beliefs about operator performance. We evaluate the optimizer on a range of workloads including biomedical multi-label classification (BioDEX), information extraction from legal contracts (CUAD), and multi-modal question answering (MMQA). We demonstrate that systems optimized by our work achieve 18.7%-39.2% better quality and up to 23.6x lower cost and 4.2x lower latency than the next best system.
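
To make the constrained objective concrete, a toy version of cost-based plan selection might look like the following (an illustrative sketch; Abacus's estimators and search are more sophisticated):

    from itertools import product

    def best_plan(operator_choices, cost_budget):
        # operator_choices: one list per semantic operator; each entry is a
        # dict with 'name', 'est_quality' in [0, 1], and 'est_cost',
        # estimated from a small set of training examples.
        best = None
        for plan in product(*operator_choices):
            cost = sum(op["est_cost"] for op in plan)
            if cost > cost_budget:
                continue  # violates the constrained dimension
            quality = 1.0
            for op in plan:
                quality *= op["est_quality"]  # crude independence assumption
            if best is None or quality > best[0]:
                best = (quality, cost, [op["name"] for op in plan])
        return best

Exhaustive enumeration is shown only for clarity; the plan space grows exponentially with the number of operators, which is why a practical optimizer must prune.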
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games</title>
<link href="https://hdl.handle.net/1721.1/163683" rel="alternate"/>
<author>
<name>Pipis, Charilaos</name>
</author>
<id>https://hdl.handle.net/1721.1/163683</id>
<updated>2025-11-18T06:27:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Learning and Computation of Linear Correlated Equilibrium in General Convex Games
Pipis, Charilaos
We propose efficient no-regret learning dynamics and ellipsoid-based methods for computing linear correlated equilibria—a relaxation of correlated equilibria and a strengthening of coarse correlated equilibria—in general convex games. These are games where the number of pure strategies is potentially exponential in the natural representation of the game, such as extensive-form games. Our work identifies linear correlated equilibria as the tightest known notion of equilibrium that is computable in polynomial time and is efficiently learnable for general convex games. Our results are enabled by a generalization of the seminal framework of Gordon et al. [2008] for Φ-regret minimization, providing extensions to this framework that can be used even when the set of deviations Φ is intractable to separate/optimize over. Our polynomial-time algorithms are similarly enabled by extending the Ellipsoid-Against-Hope approach of Papadimitriou and Roughgarden [2008] and its generalization to games of non-polynomial type proposed by Farina and Pipis [2024a]. We provide an extension to these approaches when we do not have access to the separation oracles required by these works for the dual player. This work will appear in STOC 2025, [Daskalakis et al., 2025].
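
For orientation, the quantity being minimized is Φ-regret (standard definition, in generic notation): over T rounds with strategies x_t, opponents' play y_t, and utility u,

    \mathrm{Reg}_T(\Phi) = \max_{\phi \in \Phi} \sum_{t=1}^{T} \Big( u\big(\phi(x_t), y_t\big) - u\big(x_t, y_t\big) \Big),

where constant transformations recover coarse correlated equilibria, all swap transformations recover correlated equilibria, and taking Φ to be the linear maps of the strategy set yields the linear correlated equilibria studied here.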
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explaining Black-Box Classifiers by Implicitly Learning Decision Trees</title>
<link href="https://hdl.handle.net/1721.1/163682" rel="alternate"/>
<author>
<name>Lange, Jane</name>
</author>
<id>https://hdl.handle.net/1721.1/163682</id>
<updated>2025-11-18T06:27:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Explaining Black-Box Classifiers by Implicitly Learning&#13;
Decision Trees
Lange, Jane
We present algorithms for finding two types of objects that explain the classification of a black-box model f : {±1}^d → {±1} on an instance x ∈ {±1}^d. The first is a certificate: a small set of x’s features that in conjunction essentially determines f(x). The second is a counterfactual: a nearest instance x′ for which f(x′) ≠ f(x). We obtain both algorithms via a connection to the problem of implicitly learning decision trees. The implicit nature of this learning task allows for efficient algorithms even when the complexity of f necessitates an intractably large surrogate decision tree. We solve the implicit learning task by bringing together techniques from learning theory, local computation algorithms, and complexity theory. Our approach of “explaining by implicit learning” shares elements of two previously disparate methods for post-hoc explanations, global and local explanations, and we make the case that it enjoys advantages of both. Our certification algorithm runs in time poly(d, C(f)) and outputs a certificate of size poly(C(f)), where C(f) is the “average certificate complexity” of f. Our counterfactual algorithm runs in time S(f)^{O(∆f(x))} · log d, where S(f) is the sensitivity of f (a discrete analogue of the Lipschitz constant) and ∆f(x) is the distance from x to its nearest counterfactual. We further prove a lower bound of S(f)^{Ω(∆f(x))} + Ω(log d) for finding counterfactuals, thereby showing that the guarantees of our algorithm are essentially optimal.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analog On-chip Training and Inference with Non-volatile Memory Devices</title>
<link href="https://hdl.handle.net/1721.1/163681" rel="alternate"/>
<author>
<name>Lee, Jungsoo</name>
</author>
<id>https://hdl.handle.net/1721.1/163681</id>
<updated>2025-11-18T06:27:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analog On-chip Training and Inference with Non-volatile&#13;
Memory Devices
Lee, Jungsoo
As the demand for computation in neural networks continues to rise, conventional computing resources are increasingly constrained by their limited energy efficiency. One promising solution to this challenge is analog in-memory computing (AIMC), which enables efficient matrix-vector multiplications by encoding synaptic weights into the conductance of nonvolatile memory devices. These devices are structured into crossbar arrays. To explore the potential of non-volatile memory devices in AIMC, I simulate crossbar array operations using IBM’s AIHWKIT. With this tool, I investigate the implementation of various analog computing algorithms, including Tiki-Taka. AIMC is evaluated on simple MNIST classification tasks and on more complex deep learning models such as Long Short-Term Memory (LSTM) networks. I demonstrate that devices can be categorized based on their asymmetry and non-linear weight modulation behavior. Performance improvements through the Tiki-Taka algorithm are observed only when the device provides a sufficient converge-dragging force; otherwise, the algorithm may even degrade performance. I also investigate how pulse-to-pulse noise and device-to-device variability affect system performance, as well as how different peripheral circuit configurations influence the overall behavior. Finally, I propose an Analog Low-Rank Adapter (Analog LoRA) by applying analog computing to the fine-tuning of large language models. I explore the necessary conditions for Analog LoRA to achieve performance comparable to its digital counterpart. Based on these findings, I present design guidelines for effectively applying analog computing to various machine learning tasks on edge devices.
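The crossbar primitive at the heart of AIMC can be sketched in a few lines: weights become conductances, Ohm's law performs the multiply, and summing line currents performs the accumulate, with read noise as one of the error sources studied here. A schematic numpy sketch (parameter values illustrative, not taken from AIHWKIT):

    import numpy as np

    rng = np.random.default_rng(0)

    def analog_matvec(weights, x, g_max=25e-6, noise_frac=0.02):
        # Encode signed weights as a differential pair of conductances, each
        # clipped to the device range [0, g_max].
        scale = g_max / np.abs(weights).max()
        g_plus = np.clip(weights * scale, 0.0, g_max)
        g_minus = np.clip(-weights * scale, 0.0, g_max)
        # Ohm's law does the multiply; summing currents along each line does
        # the accumulate. Add pulse-to-pulse read noise, one error source
        # studied in the thesis.
        i_out = (g_plus - g_minus) @ x
        i_out = i_out + rng.normal(0.0, noise_frac * g_max * np.linalg.norm(x),
                                   i_out.shape)
        return i_out / scale

    W, x = rng.normal(size=(4, 8)), rng.normal(size=8)
    print(analog_matvec(W, x))   # noisy approximation of W @ x
    print(W @ x)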
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CMOS-Compatible Wafer-Scale Synthesis and Rapid Characterization of Two-Dimensional Transition Metal Dichalcogenides</title>
<link href="https://hdl.handle.net/1721.1/163680" rel="alternate"/>
<author>
<name>Jiao, Yixuan</name>
</author>
<id>https://hdl.handle.net/1721.1/163680</id>
<updated>2025-11-18T06:26:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">CMOS-Compatible Wafer-Scale Synthesis and Rapid Characterization of Two-Dimensional Transition Metal Dichalcogenides
Jiao, Yixuan
Two-dimensional transition metal dichalcogenides (TMDs) such as monolayer MoS₂ offer great promise for next-generation nanoelectronics due to their atomic thickness, tunable bandgaps, and excellent electrostatic control. However, industrial semiconductor manufacturing demands CMOS-compatible, wafer-scale growth; conventional CVD methods often exceed thermal budgets and introduce contaminants, and achieving uniform, defect-free monolayers remains difficult. This thesis presents an in-depth discussion of low-temperature MOCVD system design and an optimization methodology for uniform monolayer TMD synthesis. We investigate the effect of alkali halide promoters (e.g., NaCl) and novel alkali-free promoters (e.g., NH₄Cl and crystal violet) on the synthesis of monolayer MoS₂. By optimizing the NaCl-promoted route, we achieve coalesced monolayer MoS₂ films with enlarged grain domains and demonstrate field-effect transistors with improved mobility. In parallel, we develop a CMOS-compatible crystal violet seeding method that avoids alkali metal contaminants and yields uniform monolayer coverage. To support process development, a rapid characterization pipeline was introduced: optical/SEM imaging combined with machine learning to quickly map thickness and grain size and infer electronic quality across the wafer. These contributions collectively advance the integration of 2D TMD materials into CMOS fabrication, enabling monolithic 3D integration in future electronics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automating the Search for Artificial Life with Foundation Models</title>
<link href="https://hdl.handle.net/1721.1/163679" rel="alternate"/>
<author>
<name>Kumar, Akarsh</name>
</author>
<id>https://hdl.handle.net/1721.1/163679</id>
<updated>2025-11-18T06:27:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automating the Search for Artificial Life with Foundation&#13;
Models
Kumar, Akarsh
With the recent Nobel Prize awarded for radical advances in protein discovery, foundation models (FMs) for exploring large combinatorial spaces promise to revolutionize many scientific fields. Artificial Life (ALife) has not yet integrated FMs, thus presenting a major opportunity for the field to alleviate the historical burden of relying chiefly on manual design and trial-and-error to discover the configurations of lifelike simulations. This paper presents, for the first time, a successful realization of this opportunity using vision-language FMs. The proposed approach, called Automated Search for Artificial Life (ASAL), (1) finds simulations that produce target phenomena, (2) discovers simulations that generate temporally open-ended novelty, and (3) illuminates an entire space of interestingly diverse simulations. Because of the generality of FMs, ASAL works effectively across a diverse range of ALife substrates including Boids, Particle Life, Game of Life, Lenia, and Neural Cellular Automata. A major result highlighting the potential of this technique is the discovery of previously unseen Lenia and Boids lifeforms, as well as cellular automata that are open-ended like Conway’s Game of Life. Additionally, the use of FMs allows for the quantification of previously qualitative phenomena in a human-aligned way. This new paradigm promises to accelerate ALife research beyond what is possible through human ingenuity alone.
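The supervised-target mode of such a search can be pictured with a simplified scoring loop; this is a hedged sketch in which run_sim, embed_image, and embed_text are hypothetical stand-ins for an ALife substrate and a vision-language FM, not the released ASAL code:

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def score_simulation(params, run_sim, embed_image, embed_text, prompt):
        # Supervised-target scoring, simplified: render the simulation
        # defined by `params`, embed its final frame with a vision-language
        # FM, and measure alignment with a natural-language target.
        frames = run_sim(params)
        return cosine(embed_image(frames[-1]), embed_text(prompt))

    def random_search(sample_params, run_sim, embed_image, embed_text,
                      prompt, n=256):
        # Simplest possible outer loop; ASAL's open-ended novelty and
        # illumination objectives are omitted here.
        return max((sample_params() for _ in range(n)),
                   key=lambda p: score_simulation(p, run_sim, embed_image,
                                                  embed_text, prompt))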
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty-aware Joint Physical Tracking and Prediction</title>
<link href="https://hdl.handle.net/1721.1/163678" rel="alternate"/>
<author>
<name>Dasgupta, Arijit</name>
</author>
<id>https://hdl.handle.net/1721.1/163678</id>
<updated>2025-11-18T06:26:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Uncertainty-aware Joint Physical Tracking and Prediction
Dasgupta, Arijit
Humans possess a remarkable capacity to track and predict the motion of objects even when visual information is temporarily absent. This thesis investigates how missing sensory evidence—such as during occlusion—alters current and future beliefs about object motion, and introduces an uncertainty-aware framework to model this process. A behavioral experiment was conducted in which participants continuously predicted the future destination of a ball moving in 2.5D environments with occlusion. Results demonstrate that participants dynamically updated their predictions throughout occlusion, exhibiting adaptive belief revision and physically grounded reasoning. To model this behavior, a structured Bayesian modeling and inference approach for joint tracking and prediction was developed that integrates perception, state estimation, and future prediction in a unified process. The approach, implemented via a Sequential Monte Carlo algorithm embedded within a GPU-accelerated and parallel probabilistic programming system, maintains time-varying beliefs over both present and future object states, conditioned on observed images. These belief states are explicitly represented in symbolic form, enabling interpretable, frame-by-frame introspection of uncertainty and prediction over time. When compared against human responses, the model closely matched the temporal evolution of time-aligned decisions and outperformed plausible alternative hypotheses that failed to reason during occlusion. These findings affirm that the absence of changing visual evidence does not engender a void in physical reasoning, but is evidence in itself—processed and revised through structured, probabilistic inference. By integrating probabilistic programming with human behavioral data through structured Bayesian modeling and inference, this thesis advances a computational account of intuitive physical reasoning and provides a foundation for building interpretable, uncertainty-aware AI systems that mirror human-like physical intelligence.
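The "absence of evidence is itself evidence" idea has a compact expression in a bootstrap particle filter: when the object is occluded, the likelihood term is flat, so the belief keeps propagating under the dynamics model instead of collapsing. A minimal 1-D sketch (dynamics and noise scales hypothetical, far simpler than the thesis's image-conditioned model):

    import numpy as np

    rng = np.random.default_rng(1)
    N = 1000
    particles = rng.normal(0.0, 1.0, size=(N, 2))    # columns: position, velocity

    def step(particles, obs, dt=0.05, q=0.1, r=0.2):
        # Predict: constant-velocity dynamics plus process noise.
        particles[:, 0] += particles[:, 1] * dt
        particles = particles + rng.normal(0.0, q * dt, particles.shape)
        # Update: when occluded (obs is None) the likelihood is flat, so the
        # belief keeps evolving under the dynamics rather than collapsing.
        if obs is None:
            return particles
        w = np.exp(-0.5 * ((particles[:, 0] - obs) / r) ** 2)
        w = w / w.sum()
        return particles[rng.choice(N, size=N, p=w)]   # resample

    for t, obs in enumerate([0.10, 0.15, None, None, 0.30]):
        particles = step(particles, obs)
        print(t, round(particles[:, 0].mean(), 3), round(particles[:, 0].std(), 3))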
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward an Age-Ready Suburbia</title>
<link href="https://hdl.handle.net/1721.1/163677" rel="alternate"/>
<author>
<name>Du, Minghao</name>
</author>
<author>
<name>Zhuang, Kaicheng</name>
</author>
<id>https://hdl.handle.net/1721.1/163677</id>
<updated>2025-11-18T06:27:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Toward an Age-Ready Suburbia
Du, Minghao; Zhuang, Kaicheng
As America’s population ages, suburban neighborhoods face urgent challenges. Originally designed for young, car-dependent families, the suburban landscape today often presents barriers to aging in place, including poor walkability, inaccessible housing, and limited access to essential services and care. This thesis investigates these challenges and proposes a strategy for reimagining suburban environments through demographic analysis, spatial mapping, persona-driven research, architectural prototyping, and community planning. It traces the historical evolution of suburbia, critically evaluates existing senior housing typologies, and advances new frameworks for retrofitting residential neighborhoods to better support aging populations. Focusing on Sacramento, California, the research identifies high-priority areas where aging, affordability challenges, and mobility barriers intersect. Grounded by a pilot care home project, the study demonstrates how modest interventions, such as retrofitting single-family homes into small-scale residential care environments, can enhance both livability and care access. The first phase of the pilot project has been constructed, offering a demonstration of the proposed model’s feasibility. A phased development and financial strategy are also outlined to ensure broader applicability. While rooted in Sacramento, the thesis offers a framework relevant to many suburban contexts across the United States, particularly naturally occurring retirement communities (NORCs) where older adults are aging in place. Rather than creating isolated senior enclaves, the work promotes a distributed, community-integrated model that strengthens neighborhood resilience and supports intergenerational living. By combining design innovation with policy awareness and development feasibility, the thesis presents a scalable and adaptable approach to reshaping suburbs for an aging society.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Calibration and Control of Superconducting Qubits for Low‑Overhead Quantum Error Correction</title>
<link href="https://hdl.handle.net/1721.1/163676" rel="alternate"/>
<author>
<name>Pahl, Lukas</name>
</author>
<id>https://hdl.handle.net/1721.1/163676</id>
<updated>2025-11-18T06:27:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Calibration and Control of Superconducting Qubits for Low‑Overhead Quantum Error Correction
Pahl, Lukas
The ability to coherently and reliably manipulate quantum information marks a fundamental technological leap—realizable through a universal, fault‑tolerant quantum computer. Achieving this goal requires progress across all layers of the quantum computing stack, from physical qubits to theoretical algorithms. In this work, we address multiple layers of this stack. We develop a software architecture for scalable device calibration using modular calibration graphs. We introduce real‑time frequency stabilization techniques, demonstrating improved single‑qubit gate fidelities and progress toward multiqubit feedback. Finally, we explore how quantum error correction overhead can be reduced using low‑density parity‑check codes. We present logical protocols for a non‑local nine‑qubit code, which significantly outperforms comparable surface code implementations in both qubit efficiency and computational capability. These results represent practical steps toward overcoming key challenges in fault‑tolerant quantum computing.
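The modular calibration graph can be pictured as a DAG whose nodes are calibration routines and whose edges are prerequisites: when one result goes stale, it and everything downstream are re-run in dependency order. A schematic sketch of that bookkeeping (node names hypothetical, not the thesis's software):

    from graphlib import TopologicalSorter

    # node -> set of prerequisite calibrations (names hypothetical)
    graph = {
        "resonator_spectroscopy": set(),
        "qubit_spectroscopy": {"resonator_spectroscopy"},
        "rabi_amplitude": {"qubit_spectroscopy"},
        "ramsey_frequency": {"rabi_amplitude"},
        "single_qubit_rb": {"rabi_amplitude", "ramsey_frequency"},
    }

    def recalibrate(stale, graph):
        # Mark every stale node plus all of its descendants, then re-run
        # just those nodes in dependency order.
        affected = set(stale)
        changed = True
        while changed:
            changed = False
            for node, deps in graph.items():
                if node not in affected and deps.intersection(affected):
                    affected.add(node)
                    changed = True
        for node in TopologicalSorter(graph).static_order():
            if node in affected:
                print("running", node)

    recalibrate({"qubit_spectroscopy"}, graph)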
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ModelDiff: A Framework for Comparing Learning Algorithms</title>
<link href="https://hdl.handle.net/1721.1/163675" rel="alternate"/>
<author>
<name>Shah, Harshay</name>
</author>
<id>https://hdl.handle.net/1721.1/163675</id>
<updated>2025-11-18T06:27:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">ModelDiff: A Framework for Comparing Learning Algorithms
Shah, Harshay
We study the problem of (learning) algorithm comparison, where the goal is to find differences between models trained with two different learning algorithms. We begin by formalizing this goal as one of finding distinguishing feature transformations, i.e., input transformations that change the predictions of models trained with one learning algorithm but not the other. We then present ModelDiff, a method that leverages the datamodels framework (Ilyas et al., 2022) to compare learning algorithms based on how they use their training data. We demonstrate ModelDiff through three case studies, comparing models trained with/without data augmentation, with/without pre-training, and with different SGD hyperparameters. Our code is available at https://github.com/MadryLab/modeldiff.
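One simplified way to read the method: each learning algorithm yields a datamodel weight matrix over the training set, and distinguishing directions are patterns of training-data usage present in one matrix but not explained by the other. A hedged numpy sketch of that idea (a projection-plus-PCA proxy, not the authors' exact procedure):

    import numpy as np

    def distinguishing_directions(theta_a, theta_b, k=3):
        # theta_a, theta_b: (n_test, n_train) datamodel weights for the two
        # algorithms. Project B's weights off A's row space and take the top
        # principal directions of the residual: training-data usage patterns
        # specific to algorithm B.
        theta_a = theta_a / np.linalg.norm(theta_a, axis=1, keepdims=True)
        theta_b = theta_b / np.linalg.norm(theta_b, axis=1, keepdims=True)
        coef, *_ = np.linalg.lstsq(theta_a.T, theta_b.T, rcond=None)
        residual = theta_b - (theta_a.T @ coef).T
        _, _, vt = np.linalg.svd(residual, full_matrices=False)
        return vt[:k]   # each row: a direction over training examples

    rng = np.random.default_rng(0)
    dirs = distinguishing_directions(rng.normal(size=(50, 200)),
                                     rng.normal(size=(50, 200)))
    print(dirs.shape)   # (3, 200)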
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sanctuary for Who?</title>
<link href="https://hdl.handle.net/1721.1/163581" rel="alternate"/>
<author>
<name>Salazar, Juan</name>
</author>
<id>https://hdl.handle.net/1721.1/163581</id>
<updated>2025-11-06T03:07:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Sanctuary for Who?
Salazar, Juan
Philadelphia, often recognized as the poorest major city in the United States, became a Sanctuary City in 2014. The designation committed the region to policies limiting cooperation with federal law enforcement in the persecution of undocumented communities. Policies have ranged from refusing to detain individuals without judicial warrants to prohibiting Immigration and Customs Enforcement (ICE) from accessing municipal databases or facilities for detention purposes. At the community level, the notion of the Sanctuary City sought to promote organizing against unlawful persecution of residents. Over the past eleven years, however, the framework of protection it promised has faltered under mounting federal pressure. The Sanctuary City's symbolic authority and limited scope have failed to shield residents from persecution or restrict ICE's intensifying operations within the area. In 2019, Juntos, the city's foremost immigrant advocacy organization, criticized Philadelphia's Sanctuary status as inadequate. Citing the ongoing persecution of its communities and the declining quality of life for all residents, the organization urged the city to abandon the term "Sanctuary" and petitioned it to focus instead on meaningfully protecting all residents of Philadelphia, stating, "Let us instead work together to build the kind of city we all want to live in." Juntos's critique forms the basis of this thesis, which takes it as an invitation to reimagine the Sanctuary City as a shift from a policy framework toward a general ethic and design sensibility. This thesis proposes that Philadelphia's crux, like that of all cities, lies in its ability to sustain communities' pursuit of a dignified life. As a primary agent in the formation of cities, the architect must then make this struggle their own and deploy the tools of their discipline to protect life and inspire dignity. By framing Philadelphia as a city shaped by deindustrialization, disinvestment, and policing, the thesis explores how architecture can respond to these forces by reviving the city's industrial character and establishing new boundaries able to safeguard community rights. Integrating legal, spatial, and semantic insights from federal authorities' rules of engagement will provide novel typologies and programs for the city that address its systemic inequities while fostering environments where life and dignity can flourish. By inscribing meaningful boundaries and re-equipping the city to make for itself, the thesis suggests architecture becomes a tool for collective protection and urban regeneration.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural analysis at scale: Computational modeling of embodied carbon in complex floor layouts</title>
<link href="https://hdl.handle.net/1721.1/163580" rel="alternate"/>
<author>
<name>Hirt, Natasha K.</name>
</author>
<id>https://hdl.handle.net/1721.1/163580</id>
<updated>2025-11-06T03:07:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Structural analysis at scale: Computational modeling of embodied carbon in complex floor layouts
Hirt, Natasha K.
To meet the needs of growing populations, rates of new construction are increasing at a record pace worldwide. The built environment, already one of the single largest contributors to global CO₂e emissions, will become a significant environmental challenge in the coming decades. To mitigate the anticipated environmental impact of future construction, we need to rethink how we build.

One strategy, which is the subject of this work, is improving the material efficiency of flexural systems like floors. Floors are among the most materially wasteful structural components in buildings, and while decades of research have explored optimal floor system design, the complexity of proposed solutions has limited their practical implementation. Furthermore, the industrial tools available to structural designers do not lend themselves to flexible experimentation or large-scale analysis. As a result, most flexural systems today rely on approximations and rules of thumb rather than mathematically optimal designs, data-driven decision making, or iterative design processes.

This thesis bridges the gap between practical engineering, material efficiency, and design freedom. It presents novel, code-compliant tools for the computational analysis and optimization of flat slabs supported by a network, or grillage, of beams, using a model system of reinforced concrete supported by steel W-sections. The method is used to perform a large-scale analysis of 24,192 unique combinations of beam topologies and assembly design decisions. The results of this analysis find improvements in structural embodied carbon of up to 53.4% over the business-as-usual design case, and also yield generalizable takeaways about the key factors influencing material efficiency in floor slabs.

One of the advantages of the method is its flexibility in taking on a range of complex design challenges. These are presented as extensions to the method, and include designing with a constrained inventory for a series of real-world case studies, and automatically deriving novel structural geometries from dense ground structures.

The method and results shown in this thesis expand the range of analysis tools that engineers have access to, enabling a wide range of creative designs and explicitly linking design decisions to environmental impact.
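The shape of the large-scale sweep can be suggested in miniature: enumerate combinations of design decisions, size each resulting floor system, and rank by embodied carbon. The sketch below is purely schematic (toy sizing rule and carbon factors of our own invention, not the thesis's code-compliant engine):

    from itertools import product

    CARBON = {"concrete": 0.15, "steel": 1.5}   # kgCO2e per kg, illustrative only

    def embodied_carbon(n_beams, slab_t):
        # Toy sizing rule for one 6 m x 6 m bay: more beams permit a thinner
        # slab; each beam adds steel mass. Stands in for the thesis's
        # code-compliant structural engine.
        concrete_kg = 2400.0 * 36.0 * slab_t
        steel_kg = 90.0 * n_beams
        return CARBON["concrete"] * concrete_kg + CARBON["steel"] * steel_kg

    designs = list(product(range(2, 9), [0.10, 0.14, 0.18, 0.22]))
    best = min(designs, key=lambda d: embodied_carbon(*d))
    print(best, embodied_carbon(*best))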
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inscrutability: An Epistemological Experiment</title>
<link href="https://hdl.handle.net/1721.1/163579" rel="alternate"/>
<author>
<name>Huang, Brian Hudson</name>
</author>
<id>https://hdl.handle.net/1721.1/163579</id>
<updated>2025-11-06T03:07:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Inscrutability: An Epistemological Experiment
Huang, Brian Hudson
Through four different projects, this thesis explores the idea of dimensions of representation, a concept introduced by the 20th-century French philosopher Michel Foucault in his book The Order of Things. Foucault argues that the Classical episteme, which he defines as the discourse surrounding knowledge-making that lasted from the 17th century to the 19th century, was determined by the idea of dimensions of representation. This concept holds that during the Classical episteme, knowledge was formulated through representations of the external world, such as systems of classification, ordering, and relations, rather than through resemblance. The first project, Holes in the Sieve (2023), addresses the problematics of classification through an infamous case in the history of paleoanthropology: the Piltdown Man. The second project, Contrapposto in Space (2024), addresses how representation has been instrumentalized in technoscience through space research. Finally, the last two projects, the Poem Box (2024) and Micropoetry (2025), posit a way forward at the limits of representation by engaging with semiotic theory. By engaging with language games, poetry opens up the possibility of denying the position of being knowable, allowing one to disappear into inscrutability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Koalisi Lahan–Gambut: Assembling Peat–Land Futures in Kalimantan</title>
<link href="https://hdl.handle.net/1721.1/163578" rel="alternate"/>
<author>
<name>El Haq, Haidar</name>
</author>
<id>https://hdl.handle.net/1721.1/163578</id>
<updated>2025-11-06T03:07:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Koalisi Lahan–Gambut: Assembling Peat–Land Futures in Kalimantan
El Haq, Haidar
Throughout Indonesia’s colonial and postcolonial histories, the peatlands of Kalimantan have been not only politically contested spaces but also sites of ontological struggle. From transmigrasi programs to Suharto’s Mega-Rice Project and most notably today’s carbon offset regimes, peat has been transformed into a paradoxical ecology: degraded yet investible, conserved yet profitable. These transformations enclose land, force communities to choose between extraction and restoration, criminalize fire, and abandon regenerative forms of cultivation. These are histories of ontological occupation institutionalized: the marginalization of both peat’s inhabitants and the soil itself as world-making agents, shaped by speculative regimes of governance, rooted in planetary imaginaries of climate salvation and fantasies of productivity. This thesis proposes Koalisi Lahan–Gambut (Peat–Land Coalition), a speculative parainstitution that explores how coalitional spatial practices might reclaim inhabitation in peat ecologies. Situated in a Ngaju village within the buffer zone of one of the world’s largest carbon offset territories—between deep peat and riverine edges, between restoration enclosures and plantation areas—the coalition works through the murkiness of peat, the heterogeneity of its inhabitants, and the crowded terrain of overlapping institutional claims. It foregrounds the frictions between gambut (peat) and lahan (land). Structured across three inquiries, the document presents a Living Glossary that assembles field terms and relational epistemologies drawn from Kalimantan’s peatlands; a genealogy of Governance, Carbon Fix, and Buffer Zone that traces the historical and institutional processes that rendered peatlands governable; and Landing in the Buffer Zone, which turns to the coalition’s situated experiments in becoming-with, inhabiting, and reclaiming the space between peat and land.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Experimental Study on the Effects of Three-Piece Oil Control Ring Design and Liner Finish on Lubricating Oil Consumption in a Hydrogen-Fueled Single-Cylinder Reciprocating Engine</title>
<link href="https://hdl.handle.net/1721.1/163576" rel="alternate"/>
<author>
<name>Tamburro, Alexandra</name>
</author>
<id>https://hdl.handle.net/1721.1/163576</id>
<updated>2025-11-06T03:07:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Experimental Study on the Effects of Three-Piece Oil Control Ring Design and Liner Finish on Lubricating Oil Consumption in a Hydrogen-Fueled Single-Cylinder Reciprocating Engine
Tamburro, Alexandra
Reducing lubricating oil consumption (LOC) in reciprocating engines is an increasingly important objective in the pursuit of lower greenhouse gas emissions, longer maintenance intervals, and compliance with tightening environmental regulations. In 2022, the U.S. transportation sector alone was responsible for 29% of national greenhouse gas emissions, 87% of which originated from systems powered by reciprocating engines [1]. While significant progress has been made in fuel efficiency, oil consumption remains a key contributor to carbon emissions. This research investigates the impact of design parameters in three-piece oil control rings (TPOCRs) and liner surface finish on oil consumption behavior.

Utilizing a hydrogen-fueled engine—where the only source of CO₂ emissions is consumed lubricating oil—this study develops a high-fidelity, FTIR-based method for direct LOC measurement. A derivation of oil consumption based on air and fuel mass flow rates and measured CO₂ emissions is presented, alongside a sensitivity analysis that identified FTIR measurement uncertainty and ambient CO₂ variation as dominant error sources. All experiments were conducted at 2000 RPM under medium load (4 bar IMEP). The experimental results showed that under the tested condition, 1) increasing liner roughness increases LOC and 2) changing the orientation of any rail with an asymmetrical profile to favor up-scraping elevates LOC. Analyses applying liner vaporization and TPOCR models showed that the changes in liner oil film thickness brought about by the TPOCR changes have a negligible effect on LOC from oil evaporation. Increases in the upper rail's up-scraping ability and oil accumulation inside the TPOCR groove can both elevate LOC, although further investigation is needed to understand the oil transport paths leading to the LOC.

This work provides a foundation for future optimization of TPOCR design by highlighting key ring-liner interactions and oil transport mechanisms. Further study of asymmetric geometries and surface characteristics will provide further insights for reducing oil consumption in engine platforms.
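The LOC derivation referenced above rests on a carbon balance: with hydrogen fuel, all exhaust carbon originates in burned oil, so the measured CO₂ closes the balance. In schematic form (notation ours; w_C is the oil's carbon mass fraction, y are CO₂ mass fractions, and the ambient term is subtracted because intake air already carries some CO₂):

    \dot{m}_{\mathrm{oil}} \approx \frac{M_{\mathrm{C}}}{w_{\mathrm{C}}\, M_{\mathrm{CO_2}}}\,\big(\dot{m}_{\mathrm{air}} + \dot{m}_{\mathrm{fuel}}\big)\,\big(y_{\mathrm{CO_2,exh}} - y_{\mathrm{CO_2,amb}}\big)

Here the ratio M_C / M_CO₂ ≈ 12/44 converts measured CO₂ mass to carbon mass, and dividing by w_C converts carbon mass back to consumed oil mass; the sensitivity of the result to y_CO₂,amb is consistent with the abstract's identification of ambient CO₂ variation as a dominant error source.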
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of Future Energy Infrastructure: Understanding trade-offs between Renewable Capacity, Storage and Transmission Networks for Low-Carbon Landscape</title>
<link href="https://hdl.handle.net/1721.1/163575" rel="alternate"/>
<author>
<name>Bhupathi, Hari Raghavendran</name>
</author>
<id>https://hdl.handle.net/1721.1/163575</id>
<updated>2025-11-06T03:07:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design of Future Energy Infrastructure: Understanding trade-offs between Renewable Capacity, Storage and Transmission Networks for Low-Carbon Landscape
Bhupathi, Hari Raghavendran
In 2021, the United States committed to achieving net-zero greenhouse gas emissions by 2050, requiring a fundamental transformation of its energy infrastructure. This thesis develops a nationwide optimization model to minimize capital expenditures and understand the trade-off between renewable capacity, storage, and transmission networks. The results show that the least-cost configuration, achieved when nuclear and battery capital costs fall by 50%, requires approximately $3.25 trillion in new investment - a 37% reduction relative to the baseline scenario. Comparative scenario analysis reveals a marked shift toward centralized storage when nuclear costs decline, which improves reliability and reduces contingency requirements - mirroring inventory pooling dynamics in supply chains. Concurrently, wind capacity additions fall sharply, with each 10% reduction in nuclear cost halving the predicted wind capacity addition. Transmission infrastructure evolves accordingly: 765 kV lines decline as nuclear becomes more decentralized, while 230 kV lines expand modestly to manage increased intermittency. By quantifying trade-offs across technologies and identifying system tipping points, this work offers a framework for policymakers and long-horizon investors.
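At its core such a model is a (much larger) cousin of a linear program: choose capacities that meet demand at minimum capital cost. A toy sketch with scipy (three technologies, one firm-capacity constraint; all numbers illustrative, not the thesis's):

    from scipy.optimize import linprog

    # Toy capacity-planning LP: cover a 100 GW firm-demand requirement at
    # minimum capex. Technologies: nuclear, wind, battery; costs in $/W and
    # firm-capacity credits (0.35 wind, 0.25 battery) are placeholders.
    capex = [5.0, 1.5, 0.4]
    a_ub = [[-1.0, -0.35, -0.25]]    # negated so that firm capacity reaches 100
    b_ub = [-100.0]
    res = linprog(capex, A_ub=a_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print(res.x, res.fun)

The real model layers network constraints (line voltages, contingencies) and scenario sweeps on top of this skeleton, which is where the reported tipping points emerge.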
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Barometer-Based Tactile Sensing: Characterization, Processing, and Applications for Dynamic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/163574" rel="alternate"/>
<author>
<name>Shah, Sharmi</name>
</author>
<id>https://hdl.handle.net/1721.1/163574</id>
<updated>2025-11-06T03:06:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Barometer-Based Tactile Sensing: Characterization,&#13;
Processing, and Applications for Dynamic Manipulation
Shah, Sharmi
Reliable tactile feedback is essential for robotic systems to interact effectively with their environments, especially in dynamic manipulation tasks where detecting contact onset, direction, and force is critical for control and planning. This thesis advances the development of barometer-based tactile sensors for low-force interactions, building upon prior work from the Biomimetic Robotics Lab. Previous work demonstrated that neural networks could infer contact location and three-axis contact force from barometers embedded within an elastomer. However, these models did not account for the viscoelastic behavior of the elastomer, which degrades sensor repeatability and bandwidth. To address these limitations, this thesis introduces a recurrent neural network (RNN) architecture that captures viscoelastic transients in the sensor response. The proposed methods are evaluated on two sensor geometries: a spherical sensor and a slimmer ellipsoid variant. An automated data collection pipeline is developed to generate temporally-continuous, uniformly sampled datasets across the sensor surface. RNN models trained on this data show that temporal modeling improves force prediction accuracy across both designs. To improve angle prediction accuracy, a binning strategy is used to enforce a uniform prior over contact orientations. The resulting "Binned RNN" neural networks are small-scale and demonstrate high sensitivity, enabling responsive tactile feedback. The utility of these tactile sensors is demonstrated by integrating the sensors onto a dexterous two-finger gripper and performing light grasping and estimation of object reorientation using solely tactile measurements. This work shows that accounting for viscoelastic effects through informed sampling and temporal modeling enhances the practical performance of elastomer-based tactile sensors in robotic systems.
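The temporal-modeling ingredient can be sketched directly: a recurrent core absorbs the viscoelastic transients in the pressure signals, and a regression head reads out force. A schematic PyTorch stand-in (all sizes illustrative; the thesis's Binned RNN additionally carries a binned angle head):

    import torch
    import torch.nn as nn

    class BarometerRNN(nn.Module):
        # Schematic stand-in for the recurrent force estimator: a GRU absorbs
        # viscoelastic transients in the barometer pressure signals; a linear
        # head regresses 3-axis force at every time step.
        def __init__(self, n_barometers=8, hidden=64):
            super().__init__()
            self.rnn = nn.GRU(n_barometers, hidden, batch_first=True)
            self.force_head = nn.Linear(hidden, 3)

        def forward(self, pressure_seq):      # (batch, time, n_barometers)
            h, _ = self.rnn(pressure_seq)
            return self.force_head(h)         # force estimate at every step

    model = BarometerRNN()
    print(model(torch.randn(4, 100, 8)).shape)   # torch.Size([4, 100, 3])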
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Can diffusion models capture extreme event statistics?</title>
<link href="https://hdl.handle.net/1721.1/163573" rel="alternate"/>
<author>
<name>Stamatelopoulos, Stamatios</name>
</author>
<id>https://hdl.handle.net/1721.1/163573</id>
<updated>2025-11-06T03:07:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Can diffusion models capture extreme event statistics?
Stamatelopoulos, Stamatios
For many important problems it is essential to be able to accurately quantify the statistics of extremes for specific quantities of interest, such as extreme atmospheric weather events or ocean-related quantities. While there are many classical approaches to perform such modeling tasks, interest has recently been increasing in the use of generative models trained on available data. Despite the sporadic success of such methods, it is not clear for what systems or datasets a system-agnostic generative AI tool is capable of generating previously ‘unseen’ extreme events in a manner that accurately extrapolates the tails for the observable of interest. Here, we propose an a priori criterion which, based on the geometry of the training dataset, predicts whether a generative AI tool will be able to extrapolate the tails, i.e., generate previously unseen extreme events. The idea is to quantify whether existing extreme events lie in the interior of the dataset or on its boundary. In the former case it is shown that generative AI tools can work in an ‘interpolation’ mode and generate new extreme events. On the other hand, if the topology of the dataset is such that extremes live on the boundary of the domain, then the generative AI algorithm needs to operate in an extrapolation mode, which does not lead to accurate results. We illustrate our findings on a specific class of Diffusion Models (DMs) called Denoising Diffusion Probabilistic Models (DDPMs) and test on three datasets: a simple on-hyperball dataset following a Weibull distribution for the radii of the data points, of dimensionality 2 × 10³; a dataset sampled from the so-called Majda-McLaughlin-Tabak (MMT) wave model, of dimensionality 8.1 × 10³; and a dataset consisting of Lagrangian turbulence trajectories, of dimensionality 2 × 10³.
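The geometric criterion admits a simple numerical proxy: an extreme sample whose nearest neighbors surround it lies in the interior (interpolation regime), while one whose neighbors all sit to one side lies on the boundary (extrapolation regime). A hedged sketch of such a score (our construction for illustration, not the paper's exact criterion):

    import numpy as np

    def boundary_score(data, point, k=20):
        # Norm of the mean unit vector from `point` to its k nearest
        # neighbors: near 0 when neighbors surround the point (interior),
        # near 1 when they all lie to one side (boundary).
        diffs = data - point
        dists = np.linalg.norm(diffs, axis=1)
        idx = np.argsort(dists)[:k]
        units = diffs[idx] / dists[idx, None]
        return float(np.linalg.norm(units.mean(axis=0)))

    rng = np.random.default_rng(0)
    cloud = rng.uniform(0.0, 1.0, size=(5000, 3))
    print(boundary_score(cloud, np.array([0.5, 0.5, 0.5])))  # small: interior
    print(boundary_score(cloud, np.zeros(3)))                # large: boundary corner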
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Face Me, I face you: Towards an Indigenous Economy of Glass in Southern Nigerian Dwellings</title>
<link href="https://hdl.handle.net/1721.1/163572" rel="alternate"/>
<author>
<name>Ajienka, Soala Lolia</name>
</author>
<id>https://hdl.handle.net/1721.1/163572</id>
<updated>2025-11-06T03:06:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Face Me, I face you: Towards an Indigenous Economy of Glass in Southern Nigerian Dwellings
Ajienka, Soala Lolia
This thesis proposes the weaving together of two lost traditions - the practice of primary glassmaking in southern Nigeria and the U-shaped bungalow typology of multi-family housing - as a means to address both the qualitative and quantitative housing deficits in Port Harcourt and to support the broader requisites of macroeconomic productivity in Nigeria. The thesis frames the argument that the materiality and application of glass can reconnect the inhabitation and construction of Face Me, I Face You (FMIFY) housing to Nigerian history, culture, and identity. By charting a blueprint for localized material production and engaging questions of affordability, cost structure, and financing, this work positions design as a technical solution and an act of cultural authorship. As an architect, builder, and member of the community, I advocate for a new practice in which the bond between local craftsmanship and housing development is re-established - through material choices, construction systems, economic benchmarking and spatial design strategies. This body of work braids together three interconnected narratives: First, it traces the historical evolution of the U-shaped bungalow typology, revealing its roots as a colonial adaptation of the rural compound house, charting the economic conditions that have led to its physical obsolescence yet sustained market relevance, and examining how its cultural significance was gradually diluted through climate-insensitive design and the introduction of imported materials. Second, it rediscovers Nigeria’s precolonial glassmaking traditions, with a focus on artisanal production methods that offer environmental efficiency, energy intelligence, and deep cultural resonance - qualities in stark contrast to the high-energy, standardized imported glass that dominates today’s housing. Third, it integrates these two recoveries through built interventions: redesigning roof structures to support artisanal glass rondels, optimizing daylighting, ventilation, and thermal comfort, and reorganizing courtyards to revive their role as culturally vibrant, socially essential spaces. By leveraging indigenous glassmaking practices and small-batch production models, this thesis advocates for the creation of a circular economy, generating local employment, reducing embodied energy, and restoring cultural resilience - while delivering environmentally sensitive and economically viable housing solutions that demonstrate comparable return on costs for their owners. Foregrounding opacity as a design value, the project seeks to balance communal life with cultural and spatial notions of privacy, challenging the hegemony of imported transparency. Through the strategic curation of apertures, the careful modulation of light and shadow, and the integration of locally crafted glass rondels, the thesis re-envisions the Face Me, I Face You typology. Ultimately, this work positions artisanal glass not only as a building material, but as a medium for recalibrating housing production in southern Nigeria toward systemic resilience and self-determination.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DexWrist: A Robotic Wrist for Constrained and Dynamic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/163571" rel="alternate"/>
<author>
<name>Ulloa, Gabriella E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163571</id>
<updated>2025-11-06T03:06:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">DexWrist: A Robotic Wrist for Constrained and Dynamic Manipulation
Ulloa, Gabriella E.
DexWrist is a compliant robotic wrist designed to advance robotic manipulation in highly constrained environments, enable dynamic tasks, and speed up data collection. DexWrist is designed to approach the functional capabilities of the human wrist, achieving mechanical compliance and a greater workspace compared to existing robotic wrist designs. The DexWrist can supercharge policy learning by (i) enabling faster teleoperation and therefore making data collection more scalable; (ii) completing tasks in fewer steps, which reduces trajectory lengths and therefore can ease policy learning; (iii) being torque transparent, with easily simulatable kinematics for simulated data collection; and, most importantly, (iv) expanding the workspace of manipulation for approaching highly cluttered scenes and tasks. More details about the wrist can be found at: https://sites.google.com/view/dexwrist/home.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guiding Labor: Sensable Instructions through Digital Jigs</title>
<link href="https://hdl.handle.net/1721.1/163570" rel="alternate"/>
<author>
<name>Griffin, Danny</name>
</author>
<id>https://hdl.handle.net/1721.1/163570</id>
<updated>2025-11-06T03:07:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Guiding Labor: Sensable Instructions through Digital Jigs
Griffin, Danny
Contemporary architects find themselves at a juncture, navigating the transition from traditional modes of instruction to an asymmetrical integration of digital technologies. Drawings remain central to architectural practice, yet a widening gap persists between tools for making drawings and tools for interpreting them. Since Alberti’s division between intellectual and productive labor, architectural instructions have been generated in remote offices and executed on distant construction sites. Digital tools have expanded the information density of drawings, yet the process of interpretation remains predominantly analog. Graphical conventions, though precise, are abstract, and so paper instructions alone lack spatial meaning. Builders ultimately rely on the aid of analog locating techniques to translate these abstractions into actions. Tools as simple as strings and squares have long been present on construction sites, enabling this translation. Over time, the shape and function of such devices have evolved in response to different pressures of location, from the Gothic template which left room for the builder to improvise, to the industrial jig that constrained movement to ensure replicability. The limitations of analog locating became clear when the plumb bob, long trusted to mark which direction was vertical, proved inadequate for navigating trajectories of flying objects. The solution was to embed physical devices with memory, marking a transition from tools which measure where they are to those that know where they are going. This shift from stateless to stateful devices gradually entered construction sites, and though we might distrust the devices that make possible the steering of missiles, this paradigm shift offers a productive challenge to the field of architecture. If simplifying complex construction is worthwhile, then communication pathways which more faithfully transfer information from digital model to physical destination must be explored. Central to this transformation are the tools which anchor instructions on site: interfaces already mediating between architect and builder, which must now evolve to interpret digital signals from afar. Digital jigs will be the conduits of paperless instruction on physical sites, enabling what this thesis terms sensable instructions: instructions receivable by both machines and humans.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Natural Interaction: 3D Modeling in Wearable VR Using a Gesture and Speech Interface</title>
<link href="https://hdl.handle.net/1721.1/163569" rel="alternate"/>
<author>
<name>Bei, Yining</name>
</author>
<id>https://hdl.handle.net/1721.1/163569</id>
<updated>2025-11-06T03:06:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Natural Interaction: 3D Modeling in Wearable VR Using a Gesture and Speech Interface
Bei, Yining
Designers often rely on keyboard and mouse for 3D modeling, a method that can feel unintuitive or restrictive—especially in collaborative or spatially immersive settings. This thesis explores how multimodal interaction, specifically the combination of hand gestures and voice commands, can support more natural, efficient, and accessible 3D modeling in virtual reality (VR). Built on a custom Unity-based system integrating Meta Quest hand tracking and Wit.ai voice recognition, the study investigates how these two input modes—gesture and speech—can be used together to manipulate and modify 3D geometry in real time. The research proceeds in three phases: (1) a formative study analyzing how users intuitively deploy gestures, revealing common preferences, task breakdown strategies, and limitations in gesture inputs; (2) system design and implementation of both gesture-only and gesture + speech interfaces for navigation and object manipulation (e.g., translation, scaling, duplication); and (3) a comparative user study evaluating gesture-only, gesture + speech, and keyboard + mouse workflows in terms of learning curve, task efficiency, and user satisfaction. Results show that gesture + speech enables smoother transitions across modeling subtasks and allows users to offload certain parameters (e.g., numeric values, distances) to voice while using gestures for spatial control. Participants reported higher engagement and lower cognitive load compared to keyboard-based workflows, especially in tasks involving spatial scale and collaboration. This thesis demonstrates the feasibility and design potential of multimodal interaction for immersive modeling workflows and offers insights for future XR design tools that seek to blend precision with embodied interaction.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the Limits of Coupled Condensation and Desorption in Sorption-Based Atmospheric Water Harvesting (SAWH) Devices</title>
<link href="https://hdl.handle.net/1721.1/163566" rel="alternate"/>
<author>
<name>Stamler, Natasha Lia</name>
</author>
<id>https://hdl.handle.net/1721.1/163566</id>
<updated>2025-11-06T03:06:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Understanding the Limits of Coupled Condensation and Desorption in Sorption-Based Atmospheric Water Harvesting (SAWH) Devices
Stamler, Natasha Lia
Access to clean water is a serious challenge around the world, with almost 2/3 of the global population experiencing water scarcity at some point during the year, especially in dry regions. One solution to this problem is sorbent-based atmospheric water harvesting (SAWH) due to its ability to produce drinking water in a range of environments, including at low humidity. SAWH device operation is composed of adsorption and desorption phases. During adsorption, moist air flows into the device and is adsorbed onto the sorbent bed. This is followed by the desorption phase during which the sorbent is heated to desorb the water as vapor, which is then transported to a colder condenser surface on which it is condensed as liquid water. Finally, the condensed water can be collected outside the device. However, current state-of-the-art SAWH devices are inefficient, with less than 70% of their adsorbed water being collected. This means the adsorbed water is either not condensed or condensed but not collected. This work discusses the impact of the coupling between desorption and condensation on the efficiency of SAWH devices. In general, SAWH systems can suffer from three scenarios of inefficient desorption-condensation: flux-limited, when the desorption rate in the device is insufficient to fully utilize the condenser’s condensation capacity; transport-limited, when the time scale of the vapor transport from the sorbent bed to the condenser is slow compared to the desorption operation time; and condenser-limited, when the condenser has a poor thermal design compared to the vapor flux. We developed a system-level model of a SAWH device to inform design strategies to mitigate these three bottlenecks and optimize device performance. Additionally, we quantified hydrocarbons, common airborne contaminants, as a mechanism for slowing water collection. Experimental findings are used to develop a model for the impact of airborne hydrocarbon adsorption on surface wettability and water retention for six metals commonly used as condenser materials. The findings from these models can inform design recommendations for SAWH devices as well as various other industrial applications in which water condenses on metal surfaces such as refrigeration and power generation. Future work will focus on continued experimental validation of the models.
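The three inefficiency regimes can be summarized with dimensionless ratios (a schematic reading of the abstract, notation ours): writing ṁ_des for the desorbed-vapor flux, ṁ_cond for the condenser's maximum condensation capacity, τ_tr for the vapor transport time from sorbent bed to condenser, and τ_des for the desorption operation time,

    \Pi_1 = \frac{\dot{m}_{\mathrm{des}}}{\dot{m}_{\mathrm{cond}}}, \qquad \Pi_2 = \frac{\tau_{\mathrm{tr}}}{\tau_{\mathrm{des}}}

flux-limited operation corresponds to \Pi_1 \ll 1 (the condenser is starved), condenser-limited operation to \Pi_1 \gg 1 (poor condenser thermal design relative to the vapor flux), and transport-limited operation to \Pi_2 \gtrsim 1 (vapor cannot reach the condenser within the desorption window).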
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Vimentin Intermediate Filaments on 3D Multicellular Collective Behavior</title>
<link href="https://hdl.handle.net/1721.1/163565" rel="alternate"/>
<author>
<name>Rodriguez, Camille Dyani</name>
</author>
<id>https://hdl.handle.net/1721.1/163565</id>
<updated>2025-11-06T03:06:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Impact of Vimentin Intermediate Filaments on 3D Multicellular Collective Behavior
Rodriguez, Camille Dyani
Vimentin, a type III intermediate filament, is an understudied component of the cytoskeletal system. However, recent studies show that its structural and mechanical properties aid in a cell's survival and migration. It forms a hyperelastic network and works synergistically with actin and microtubules to protect against large deformations. Despite vimentin intermediate filaments' critical role in many biological processes, there are limited studies on their role in collective migration in 3D in vitro. To elucidate vimentin’s role in a collective cell cluster, single MCF-7 cells are embedded in a Matrigel-Alginate gel, where they grow into multicellular systems. The MCF-7 cells utilized are vimentin null and chemically inducible to form vimentin networks that interact with the other components of the cytoskeleton. These MCF-7 cells allow for controlled expression of mature vimentin intermediate filaments (VIFs), which then form networks. We study these multicellular clusters over the course of 14 days. We demonstrate that there are key differences in morphology and mechanics with the presence of vimentin. Our results suggest VIFs create more irregular cell clusters with more visible dynamic interplay with the environment. Uninduced (no VIFs) clusters were overall less dynamic and exhibited spherical morphology and minimal protrusions. Clusters with mature VIFs tended to be more elongated, with an increased number of projections into the surrounding gel. In these induced (with VIFs) clusters, the projections constantly protrude and retract while the nuclei continually reorganize. Our results show that these projections are accompanied by increased protrusive and contractile gel displacements. These results indicate that vimentin networks generate a dynamic and functional morphology, while mechanically perturbing their environment, in the early stages of cell cluster collective behavior.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning for Dynamic Nonprehensile Object Transport</title>
<link href="https://hdl.handle.net/1721.1/163564" rel="alternate"/>
<author>
<name>Wang, Eric K.</name>
</author>
<id>https://hdl.handle.net/1721.1/163564</id>
<updated>2025-11-06T03:06:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Planning for Dynamic Nonprehensile Object Transport
Wang, Eric K.
Generalized planning methods for dynamic manipulation struggle to efficiently satisfy kinodynamic constraints. Gradient-based methods suffer from initialization sensitivity, local optimum convergence, and a lack of feasibility guarantees, while sampling-based methods can require large computation times when there exist challenging boundary conditions. Iterative Time-Optimal Path Parameterization, or iTOPP, guarantees a feasible local minimum for a dynamic grasping problem by iteratively decreasing transit time for a trajectory initially generated to satisfy kinodynamic contact constraints. We demonstrate solutions that can handle initial or final goal states that are quasistatically infeasible, for which purely quasistatic motions cannot generate a warm-start trajectory. We also design an indirect adaptive controller that can track a desired dynamic grasping trajectory assuming unknown object mass and location parameters.
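The iTOPP loop as described has a simple skeleton: start from a kinodynamically feasible (quasistatic-friendly) trajectory, then shrink its transit time while a feasibility oracle keeps approving, returning the last feasible parameterization. A schematic sketch (oracle and names hypothetical, standing in for the contact-constraint check):

    def itopp(path, t_init, feasible, shrink=0.9, t_min=1e-3):
        # Start from a duration t_init for which `path` satisfies the
        # kinodynamic contact constraints, then iteratively shrink the
        # transit time, keeping the last feasible parameterization.
        t = t_init
        while t * shrink > t_min and feasible(path, t * shrink):
            t *= shrink
        return t

    # toy oracle: feasible while the retimed trajectory stays "slow enough"
    path_length = 4.0
    feasible = lambda path, t: t > path_length / 2.0
    print(itopp(None, 10.0, feasible))   # converges just above 2.0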
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fundamental Behavior of Nanoporous Networks in the Out-of-Autoclave Manufacturing of Carbon Fiber Reinforced Polymer Composites</title>
<link href="https://hdl.handle.net/1721.1/163563" rel="alternate"/>
<author>
<name>Webb, Alisa Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/163563</id>
<updated>2025-11-06T03:08:28Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Fundamental Behavior of Nanoporous Networks in the Out-of-Autoclave Manufacturing of Carbon Fiber Reinforced Polymer Composites
Webb, Alisa Nicole
Throughout the aerospace industry, carbon fiber reinforced polymer (CFRP) laminated composites are used extensively in spacecraft and aircraft vehicles due to their high specific strength and stiffness and other properties. Processing these advanced structural CFRP composites, especially in prepreg form, is often completed via autoclaves where elevated temperatures and pressures of typically 180 °C (350 °F) and 0.7 MPa (7 bar), respectively, are applied to cure the polymer matrix and compress the constituent laminae together. However, autoclaves are energy intensive, expensive, and impose geometrical constraints on components due to thermal gradients within the chamber. Thus, there exists a need to find alternative manufacturing techniques. Throughout this thesis, an alternative method to autoclave processing is presented using vacuum-bag-only (VBO) techniques with nanoporous networks (NPNs) in the interlaminar regions of autoclave-required epoxy prepreg CFRP composites. Nanoporous materials are defined as materials containing pores in the mid-nanometer to low-micrometer range. Once placed in the interlaminar region of the laminate, voids are reduced by the induced capillary pressures of the NPNs, and trapped gas evacuates through the NPN. By utilizing capillary flow porometry, capillary pressure and through-thickness permeability are quantified for various NPNs, along with other porous materials. Capillary pressure and permeability exhibit an inversely proportional relationship for all tested materials, with CNT-based and polymer aerogel NPNs providing capillary pressures higher than an autoclave pressure of 0.7 MPa. Accordingly, an Ashby-type plot is presented as an aid for NPN selection for composites manufacturing. Previous studies of unidirectional glass fiber reinforced polymer (GFRP) composites and unidirectional CFRP composites show success with NPN-enabled VBO manufacturing using aligned carbon nanotubes (A-CNTs) and electrospun polymer nanofiber (EPN) mats. However, success with woven prepreg had not been consistently achieved before this thesis. Autoclave woven epoxy CFRP laminates of IM7/8552 are manufactured using EPN and polymer aerogel NPNs with a VBO procedure. Once manufactured, these laminates were characterized for quality through void content analysis. A void content of 0.11 vol% was achieved, well within the 1 vol% requirement for aerospace-grade composite components. To aid in the understanding of NPNs, in situ experiments utilizing microcomputed tomography are developed to investigate the (presumed Newtonian) flow of resin throughout the NPN as a function of temperature, which varies throughout a typical manufacturer recommended cure cycle (MRCC), along with the void evolution throughout the cure cycle. Based on this new in situ understanding, a manufacturing process modification is devised to produce void-free woven laminates at the 152.4 mm laminate scale. Through manufacturing, material characterization, and designed in situ experiments, this thesis demonstrates the use of NPNs for VBO manufacturing of low-void-content aerospace-grade CFRP composites to replace autoclaves for energy and cost savings.
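The inverse capillary-pressure/permeability trend reported here is what simple pore-scale models predict: shrinking the pore diameter d raises the capillary pressure but chokes the flow. Schematically (Young-Laplace plus a Kozeny-type scaling; symbols generic, with γ the surface tension, θ the contact angle, ε the porosity, and C a geometric constant):

    P_c \sim \frac{4\gamma \cos\theta}{d}, \qquad k \sim \frac{\varepsilon\, d^2}{C}

so P_c \propto 1/d while k \propto d^2, consistent with the measured trade-off across the tested NPN materials and with the Ashby-type selection plot the thesis presents.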
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mediators: Participatory Collective Intelligence for Multi-Stakeholder Urban Decision-Making</title>
<link href="https://hdl.handle.net/1721.1/163562" rel="alternate"/>
<author>
<name>Gao, Jin</name>
</author>
<id>https://hdl.handle.net/1721.1/163562</id>
<updated>2025-11-06T03:08:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mediators: Participatory Collective Intelligence for Multi-Stakeholder Urban Decision-Making
Gao, Jin
Cities are dynamic and evolving organisms shaped through the check-and-balance of interest exchange. As cities gain complexity and more stakeholders become involved in decision-making, reaching consensus becomes the core challenge and the essence of the urbanism process. This thesis introduces a computational framework for AI-augmented collective decision-making in urban settings. Based on real-world case studies, the core decision-making process is abstracted as a multiplayer board game modeling the check-and-balance dynamics among stakeholders with differing values. Players are encouraged to balance short-term interests and long-term resilience, and to evaluate the risks and benefits of collaboration. The system is implemented as a physical interactive play-table with digital interfaces, enabling two use cases: simulating potential outcomes via AI self-play, and human–agent co-play via human-in-the-loop interactions. Technically, the framework integrates multi-agent reinforcement learning (MARL) for agent strategy training, multi-agent large language model (LLM) discussions to enable natural language negotiation, and retrieval-augmented generation (RAG) to ground decisions in contextual knowledge. Together, these components form a full-stack pipeline for simulating collective decision-making enriched by human participation. This research offers a novel participatory tool for planners, policymakers, architects, and the public to examine how differing values shape development trajectories. It also demonstrates an integrated approach to collective intelligence, combining numerical optimization, language-based reasoning, and human participation, to explore how AI–AI and AI–human collaboration can emerge within complex multi-stakeholder environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Scar To Scaffold: The Afterlife of the Oil Pipeline for a Decarbonizing World</title>
<link href="https://hdl.handle.net/1721.1/163560" rel="alternate"/>
<author>
<name>Apostolopoulou, Katerina</name>
</author>
<id>https://hdl.handle.net/1721.1/163560</id>
<updated>2025-11-06T03:08:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Scar To Scaffold: The Afterlife of the Oil Pipeline for a Decarbonizing World
Apostolopoulou, Katerina
With over 86,000 kilometers of crude oil pipelines—and more than 2.13 million kilometers of total oil and gas pipelines in the United States as of 2024—many segments are already corroded and aging, deeply embedded within urban and ecological systems that are increasingly endangered. As the global energy transition accelerates, this thesis investigates the future of these infrastructures, reconsidering the vast network of decommissioned and declining legacy pipelines not as obsolete relics, but as latent spatial assets for ecological repair, climate resilience, and socio-environmental justice. Moving beyond narratives of extraction and decay, the project repositions pipelines as linear territories of opportunity—capable of being retrofitted into new civic, ecological, and infrastructural frameworks. Central to the project is the transformation of the pipeline’s linear, extractive logic into a circular and connective one: a loop that is both finite and infinite, territorial and experiential. Focusing on a strategically selected loop of crude oil pipelines spanning 14 states, the thesis constructs a cartographic and architectural framework to reimagine these lines as sites of ecological repair, social infrastructure, and alternative energy distribution—where design, much like a biological scaffold, acts as a catalyst for regeneration along landscapes shaped by extraction. Through spatial analysis, typological classification, and mapping, five territorial conditions are defined along the pipeline loop, each offering distinct opportunities for intervention. These are tested through speculative design prototypes that transform the pipeline through operations of repurpose, renewable energy distribution, or ecological remediation. The interventions reframe invasive infrastructures into public and environmental assets—generating new spaces for inhabitation, production, and collective memory. Ultimately, the thesis proposes a post-carbon design paradigm rooted in ecological reciprocity, collective agency, and infrastructural care—revealing hidden energy landscapes and inscribing them with new values: resilience, equity, and repair.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Control and Aerodynamic Design of a Solar Road Vehicle with Articulated Surfaces</title>
<link href="https://hdl.handle.net/1721.1/163559" rel="alternate"/>
<author>
<name>Salmon, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/163559</id>
<updated>2025-11-06T03:08:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Control and Aerodynamic Design of a Solar Road Vehicle with Articulated Surfaces
Salmon, Jason
The automobile industry is critical to modern society. Simultaneously, the constant release of toxic emissions such as greenhouse gases into the atmosphere is detrimental to health and the environment. Vehicles which exploit cleaner energy sources would be preferable to reduce the horrific scale of human-initiated damage such as climate change. However, solar road vehicles—though designed and fabricated by some—have not reached a sufficient level to be production-worthy. The low efficiency of solar cells and the high energy demands of the average land vehicle are irreconcilable for most manufacturers using industry methods and design precedent. Therefore, this work centres around the design and control of a solar road vehicle which fundamentally breaks from the mould of the typical road vehicle design—a vehicle which employs extensive articulated surfaces (dubbed "solar wings") which can be angled to directly face the sun, thereby maximising solar irradiation. A solar tracker using Bayesian inference is presented, achieving promising results in both convergence and accuracy. Additionally, a systematic method for optimizing a solar road vehicle with solar wings is developed and documented.
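As a hypothetical aside (the thesis's tracker is more sophisticated): Bayesian sun tracking can be sketched as a grid posterior over the sun's direction, updated from noisy irradiance readings under an assumed cosine response and Gaussian noise.

    # Grid Bayesian update for sun direction from noisy irradiance readings;
    # the cosine response, noise level, and sweep tilts are assumptions.
    import numpy as np

    angles = np.linspace(-90.0, 90.0, 181)        # candidate sun directions, deg
    posterior = np.ones_like(angles) / angles.size
    true_sun, sigma = 25.0, 0.05                  # hidden truth, sensor noise (assumed)
    rng = np.random.default_rng(0)

    for tilt in (-60.0, -20.0, 0.0, 20.0, 60.0):  # wing tilt at each measurement
        reading = max(0.0, np.cos(np.radians(tilt - true_sun))) + rng.normal(0.0, sigma)
        model = np.clip(np.cos(np.radians(tilt - angles)), 0.0, None)
        posterior *= np.exp(-0.5 * ((reading - model) / sigma) ** 2)
        posterior /= posterior.sum()

    print("posterior peak:", angles[np.argmax(posterior)], "deg")

With these assumed readings the posterior concentrates near the true 25 deg direction after a handful of sweeps.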
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Commercialization Strategy of a Gantry-Based Automation Platform for High-Throughput Raman Spectroscopy</title>
<link href="https://hdl.handle.net/1721.1/163557" rel="alternate"/>
<author>
<name>Romero, Catalina</name>
</author>
<id>https://hdl.handle.net/1721.1/163557</id>
<updated>2025-11-06T03:07:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Commercialization Strategy of a Gantry-Based Automation Platform for High-Throughput Raman Spectroscopy
Romero, Catalina
Raman spectroscopy is a powerful optical technique that enables rapid, label-free molecular analysis. This offers significant potential to be used across pharmaceutical development, microbiome research, and food diagnostics. However, the utility of Raman spectroscopy in high-throughput applications has been limited by the lack of cost-effective, modular automation platforms capable of handling large volumes of samples with precision and repeatability. Conventional Raman workflows are constrained by manual sample handling, slow throughput, and high user variability, limiting their applicability in high-volume testing environments. To address these challenges, this thesis presents the development and initial validation of a custom two-axis (XY) gantry and a robotic well plate stacker automation platform designed to streamline the sample handling workflow in Raman spectroscopy systems, facilitating high-throughput, precise, and reproducible positioning of microplate samples under a Raman microscope. This thesis also provides a commercialization framework for the system as a standalone automation product, targeting pharmaceutical high-throughput screening, microbiome analysis, and food safety testing. The platform serves the unmet needs in these industries, where labor-intensive and inconsistent sample positioning limits scalability. The commercialization analysis includes an evaluation of market sizing, competitive benchmarking, pricing models, and go-to-market strategies. The modular platform has the potential to enable broader adoption of Raman-based analysis tools by reducing labor intensity and improving repeatability in sample positioning workflows. This work lays the foundation for the future integration of optical feedback and automated analysis, with the goal of transforming how Raman-based diagnostics are conducted at scale.
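As an illustrative aside (the origin offset is a hypothetical calibration value, not the platform's): stepping a two-axis gantry across a standard 96-well microplate reduces to generating well-center targets on the ANSI/SLAS 9 mm pitch.

    # XY targets for a 96-well plate: 8 rows x 12 columns on a 9 mm pitch.
    # The A1 origin offset is a typical calibration value, assumed here.
    PITCH_MM = 9.0
    ORIGIN_MM = (14.38, 11.24)   # A1 center from the plate corner (assumed)

    targets = []
    for row in range(8):             # rows A-H
        for col in range(12):        # columns 1-12
            well = chr(ord("A") + row) + str(col + 1)
            x = ORIGIN_MM[0] + col * PITCH_MM
            y = ORIGIN_MM[1] + row * PITCH_MM
            targets.append((well, x, y))

    print(targets[0], targets[-1])   # ('A1', 14.38, 11.24) ('H12', 113.38, 74.24)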
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intelligence Through Interaction: Leveraging Physical Interactions In Simple Robots To Produce Complex Behaviors</title>
<link href="https://hdl.handle.net/1721.1/163555" rel="alternate"/>
<author>
<name>Spino III, Pascal</name>
</author>
<id>https://hdl.handle.net/1721.1/163555</id>
<updated>2025-11-06T03:08:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Intelligence Through Interaction: Leveraging Physical Interactions In Simple Robots To Produce Complex Behaviors
Spino III, Pascal
This thesis investigates how intelligent robot behavior can emerge from physical interactions rather than sensing, computation, and actuation in the traditional sense. Two robotic systems are presented to explore this concept in different domains. The first is a swarm of simple rolling robots whose collective morphology is shaped by distributed control laws and magnetic interactions, enabling decentralized construction-like behaviors such as bridge formation. The second is a soft underwater robot inspired by anguilliform swimming, which achieves efficient locomotion through a single actuator that leverages fluid–structure interactions in a compliant silicone tail. Useful behavior arises in both systems from the physical design and the dynamics of environmental interaction, rather than from algorithmic or computational complexity. These results demonstrate that physical intelligence can serve as a powerful design principle for building adaptive, robust, and minimal robotic systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hidden Monuments</title>
<link href="https://hdl.handle.net/1721.1/163553" rel="alternate"/>
<author>
<name>Lee, Sesil</name>
</author>
<id>https://hdl.handle.net/1721.1/163553</id>
<updated>2025-11-06T03:08:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hidden Monuments
Lee, Sesil
Jeju Island’s burial culture is embedded in the island’s distinct landscape, where sandam burial mounds are not isolated monuments but quietly coexist with fields, ranches, and forests. These sites are living records of intangible heritage—ancestral beliefs, Beolcho rituals, and vernacular stone-stacking practices—manifested not through formalized memory, but through their modest yet persistent presence in the landscape. Today, however, these spaces are under threat: policies favoring cremation, rapid urbanization, and shifting land values render them increasingly invisible or obsolete. In the past few decades, two-thirds of sandam have been displaced, and with fewer than six out of over 100,000 burial sites designated as cultural heritage, traditional models of conservation are inadequate—unable to engage with the dispersed, landscape-bound nature of these burial grounds. This project reimagines Jeju’s burial mounds not as relics to be preserved, but as spatial anchors for cultural and communal expressions. Through a series of small-scale architectural interventions—gates, stages, passages, and shelters—deployed along paths tracing sandam clusters, the work explores how memory can be practiced rather than displayed. By offering ways to engage with the buried, the forgotten, and the living simultaneously, the project expands the idea of heritage: not as a static record, but as a participatory and evolving relationship between people, land, and memory.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Mechanical and Electrical Interfaces for Rapid Swap Battery Systems</title>
<link href="https://hdl.handle.net/1721.1/163551" rel="alternate"/>
<author>
<name>Wucherer, Abigail</name>
</author>
<id>https://hdl.handle.net/1721.1/163551</id>
<updated>2025-11-06T03:07:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of Mechanical and Electrical Interfaces for Rapid Swap Battery Systems
Wucherer, Abigail
In the drive towards a globally decarbonized energy economy, rapid swap battery packs provide a potential means to improve electric vehicle adoption in high utilization industrial vehicles where lengthy charge times are a barrier to electrification. High voltage, high current battery connectors are a critical component for coupling the pack to the electric vehicle, distributing power from the battery to the drivetrain. Most state-of-the-art connections require precision alignment of contact surfaces and bolted preload or retention mechanisms, hindering the implementation of rapid swap battery systems. The need for robust, long-life, high-power contacts motivates a new approach to connector design. The integration of electrical connectors with the battery mount’s structural loop creates a new design space where preload, geometry, and contact resistance may be optimized. This co-design approach enables mechanical and electrical functional requirements to be considered in conjunction to ensure reliable fulfillment in both areas while reducing the time for battery pack swaps. This work introduces two distinct approaches for aligning the pack to the vehicle, locking the battery in place, and engaging electrical contact with geometry unique to the system design. These approaches offer higher reliability, mechanical and electrical longevity, and automatic alignment capabilities during loading of the battery pack. Across both designs, the contact resistance is the primary metric for evaluating the electrical performance, and the contact pressure is used to evaluate the risk of mechanical wear. The first approach is a quasi-kinematic coupling-based connector with integrated electrical contacts, allowing for repeatable and accurate positioning of the battery pack to the vehicle. A slotted ball and socket design approach is considered to accommodate angular misalignment and establish repeatable contact area through elastic averaging. The second approach proposes a planar contact to further reduce the contact pressure and increase contact longevity without the need for expensive and rare hardened coatings. This design relies on a rail-and-flat system for guiding the battery pack into a locked position and engaging the planar contacts.
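As a back-of-envelope aside (material values are illustrative assumptions, not the thesis's designs): the preload sensitivity of contact resistance can be sketched with Holm's constriction formula, R = rho/(2a), with spot radius a estimated from load and hardness as a = sqrt(F/(pi*H)).

    # Holm constriction resistance vs. preload for a single contact spot;
    # resistivity and hardness are illustrative copper-like values.
    import math

    rho = 1.7e-8    # resistivity, ohm*m (copper)
    H = 4.0e8       # contact hardness, Pa (assumed)

    for force in (5.0, 50.0, 500.0):                # preload, N
        a = math.sqrt(force / (math.pi * H))        # contact spot radius, m
        r_c = rho / (2.0 * a)                       # constriction resistance, ohm
        print(f"F = {force:5.0f} N -> a = {a * 1e6:6.1f} um, R = {r_c * 1e6:5.1f} uOhm")

Resistance falls only with the square root of preload, which is one reason the structural-loop co-design above is more attractive than ever-larger bolted preloads.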
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Weaving Borders, Mapping Place: Afghan War Rugs of the Soviet-Afghan War (1979-1989)</title>
<link href="https://hdl.handle.net/1721.1/163550" rel="alternate"/>
<author>
<name>Hakemy, Arezo</name>
</author>
<id>https://hdl.handle.net/1721.1/163550</id>
<updated>2025-11-06T03:08:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Weaving Borders, Mapping Place: Afghan War Rugs of the Soviet-Afghan War (1979-1989)
Hakemy, Arezo
Early Afghan war rugs delineate place through their pictorial design, embedding spatial memory into the tactile surface of the woven field. Emerging in the wake of the Soviet invasion in the late 1970s, these rugs integrate modern war iconography of tanks, helicopters, and maps into a medium historically tied to regional identity, spiritual practice, and craft. While earlier scholarship has often read these rugs as commodities of war tourism, this thesis moves beyond this interpretation to foreground the rug as a placemaking device, one that asserts territory and agency through mapping techniques. Afghan war rugs frame and define space on a land that has largely been considered placeless, at times porous and seemingly unknown. Through their borders, these rugs resist the geopolitical narratives that have long reduced Afghanistan to a war-torn frontier. The border serves as a framing device, structuring the rug’s design while simultaneously asserting territorial presence. Whether following a prescribed cartoon or improvising patterns, the weaver actively engages in “border-ing,” exercising cartographic agency by embedding personal, traditional, and political motifs into the rug. This research interrogates how early Afghan war rugs engage in spatial representation against the backdrop of the Soviet-Afghan war from 1979-1989. From historical colonial mapping projects to Soviet and American cartographic investigations, Afghanistan’s borders have long been sites of surveillance, resource extraction, and imperial ambition. Yet, in contrast to these external mapping practices, the war rug’s design is a resistant act of placemaking. Examining the rug as both artifact and map, this study explores how Afghan weavers reclaim their landscapes through rug making, embedding memory and materiality into woven form.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Social Sensory Somatic Scores for Species, Spaces, Soils, and&#13;
Structures of Steep Slopes</title>
<link href="https://hdl.handle.net/1721.1/163547" rel="alternate"/>
<author>
<name>Bondarenko, Lina</name>
</author>
<id>https://hdl.handle.net/1721.1/163547</id>
<updated>2025-11-06T03:07:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Social Sensory Somatic Scores for Species, Spaces, Soils, and&#13;
Structures of Steep Slopes
Bondarenko, Lina
Modern knowledge systems have physically and conceptually “flattened” the world, erasing the ecological, political, and sensory complexities inherent to sloped terrain. By attending closely to the slope—as both a material condition and a generative metaphor—this thesis foregrounds movement as a form of resistance to regimes of exploitation, abstraction, and estrangement that have historically transformed land into data and place into property. Weaving together interdisciplinary methodologies from performance studies, landscape architecture theory, feminist geography, ecological theology, environmental history, sensory ethnography, and media studies, SSSSSSSSSS dances an inclined methodological structure, oscillating deliberately between critical systemic analysis and situated sensory experience. Ch.1 sets the stage among steep slopes and introduces the discipline to movement as pedagogy, enacting the urgency for new methodologies into schemes of the project’s medium and the book’s format. Ch.2 is a feminist investigation of the ways modern infrastructures and spaces have been designed to reinforce land abstraction and commodification in the name of improvement-- severing embodied relationality, contributing to societal apathy toward ecological and social crises. Imperial post-enlightenment statecraft, the suppression of wildness, and the standardization of level form have flattened our upright movements to enact a state of senselessness. Contradicting Ch.2’s straight critique, Ch.3 attempts to reweave the sinuous nuance of symbiogenesis between soils and species, revealing that humans are but one among many sloped organisms moving, and inclining, and co-evolving as the lithosphere; we have been slorgs all along. Slorgs belong to divine mythologies of terrain’s elevations and have reciprocated in admiration, mimicking topographic spatial functions and adorning the summits with artistic interventions--some inadvertently contributing to the damaging regimes of Ch.2. Interwoven through both chapters, outliers resisting those forces of governance and exploitation are often those displaced by them-- those moving in ways the system polices and erases from comprehension-- refugees, queers, witches, tricksters, artists, herbalists, and healers. The intended medium of SSSSSSSSSS coalesces in Ch.4: inviting the general public to participatory happenings with hills, composing scores, coaxing their inner slorgs to slither askew, sloping themselves as moving loci for sympoietic becoming. Multi-species attune to a social, sensed, somatic experience, co-composing spatial relations among local steep soils. Slorgs challenge the abstractions of dominant epistemologies in the temporal, situated act of trusting their own proprioception in collective balance, affirming the multidimensional value of embodied, ecological geo-choreography. Social Sensory Somatic Scores for Soils, Structures, Spaces, and Species of Steep Slopes are presented through photographs in Ch.4 and in moving image, available as supplemental material.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cooling Machines:&#13;
Exploring the Heat Mitigation Effect of Urban Trees with Computer Vision</title>
<link href="https://hdl.handle.net/1721.1/163544" rel="alternate"/>
<author>
<name>Klimenko, Nikita</name>
</author>
<id>https://hdl.handle.net/1721.1/163544</id>
<updated>2025-11-06T03:07:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Cooling Machines:&#13;
Exploring the Heat Mitigation Effect of Urban Trees with Computer Vision
Klimenko, Nikita
As the impacts of climate change on cities become more pronounced, urban authorities are under pressure to prepare existing streetscapes for increased levels of heat stress. While many aspects of existing urban morphology have an impact on heat exposure (e.g. sky view factor, glazing levels, facade materials), they cannot be rapidly changed at scale across existing urban infrastructure. Urban authorities across the world increasingly turn to planting trees as a way of cooling urban streetscapes. Urban vegetation is indeed known to have a cooling effect, primarily due to trees providing shade and preventing urban materials from heating up, as well as to their ability to regulate their own internal temperature through evapotranspiration. Even though the positive impacts of urban trees on thermal comfort have long been known and well studied, little work is dedicated to how these impacts vary across trees of different species and morphology. This is due both to the complexity of studying vegetation life cycles at sufficient scale and to the dispersed nature of the issue across the disciplines of biology, urban climate, design, and data science. Nevertheless, this specific knowledge is vital to urban planners for deciding which trees have the most cooling effect in specific parts of the city. This thesis embraces the notion of trees as ‘cooling machines’ and dissects the diverse morphological and contextual factors that shape the role of individual trees in the local urban heatscape. Leveraging a set of computer vision methodologies, including species recognition, context-aware segmentation, and photogrammetry, the thesis examines a large dataset of thermal imagery of urban trees collected in Los Angeles and Dubai to describe the impact of individual tree species, height and form, as well as spatial context, on the cooling effect. Building on this approach, the thesis proposes a prototyping framework for architects to cure urban heatscapes via targeted curation of tree planting schemes, tying together the visual and thermal aspects of urban greenery. This approach will allow cities to leverage the power of urban vegetation in the most efficient way, and tame urban heat in a scalable and globally affordable manner.
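As a schematic aside (synthetic arrays stand in for the thesis's imagery): once a canopy is segmented in a thermal frame, a per-tree cooling estimate reduces to a masked temperature difference.

    # Apparent canopy cooling: mean temperature outside the canopy mask
    # minus mean inside it. All values here are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(1)
    thermal = rng.normal(38.0, 1.0, (240, 320))    # street scene, deg C (synthetic)
    mask = np.zeros((240, 320), dtype=bool)
    mask[60:180, 100:220] = True                   # canopy pixels (synthetic)
    thermal[mask] -= 7.0                           # canopy runs cooler (synthetic)

    cooling = thermal[~mask].mean() - thermal[mask].mean()
    print(f"apparent canopy cooling: {cooling:.1f} deg C")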
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Co-Authoring Beyond the Human: Disordering Architectural Processes through Play and Multi-Agent Co-Existence</title>
<link href="https://hdl.handle.net/1721.1/163543" rel="alternate"/>
<author>
<name>Dundar Arifoglu, Nasibe Nur</name>
</author>
<id>https://hdl.handle.net/1721.1/163543</id>
<updated>2025-11-06T03:07:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Co-Authoring Beyond the Human: Disordering Architectural Processes through Play and Multi-Agent Co-Existence
Dundar Arifoglu, Nasibe Nur
This thesis reconsiders architectural authorship and the extended processes through which the built environment is shaped, using a series of playful, participatory interventions to expose the human-centric assumptions embedded in spatial decision-making. Presented as a collection of games and booklets, the work invites participants to engage with a wide spectrum of architectural processes—from site understanding and planning to permitting, construction, and post-occupancy—through the perspectives of multiple agents entangled in shared environments. These agents include beings, materials, living organisms, legal frameworks, and other forces typically excluded from spatial authorship, challenging conventional boundaries and expanding the discourse around the entangled forces and relations that shape the spaces we inhabit. A series of playful explorations opens space for friction, misalignment, and shared authorship. Each booklet engages a distinct stage of the architectural process through participatory formats that make visible the biases, exclusions, and regulatory fictions often treated as neutral. By gamifying these systems, the work reveals how architectural decision-making tends to privilege hierarchy, human control, and speed—often at the expense of multispecies co-existence. This thesis positions play as a critical lens: a way to rehearse alternative futures, to listen differently, to embody other perspectives, and to surface the black-box logics embedded in architectural norms. It invites readers and players to participate in unbuilding these assumptions. And the games evolve—with each use, each misreading, each encounter, and each agent who joins the conversation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Objectiles Guide to Time Travel:&#13;
Re-Envisioning Building Materials as Narrative-Collecting&#13;
Object-Projectiles on a Trajectory Through Space-Time</title>
<link href="https://hdl.handle.net/1721.1/163542" rel="alternate"/>
<author>
<name>Chaussabel, Celia Quynh-Mai</name>
</author>
<id>https://hdl.handle.net/1721.1/163542</id>
<updated>2025-11-06T03:06:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Objectiles Guide to Time Travel:&#13;
Re-Envisioning Building Materials as Narrative-Collecting&#13;
Object-Projectiles on a Trajectory Through Space-Time
Chaussabel, Celia Quynh-Mai
As the architectural discipline grapples with its role in resource depletion, carbon emissions, and waste generation, there is a growing urgency to stop sourcing new materials and to reuse materials from existing buildings instead. One challenge to integrating reused materials into current building practices is technical: inventorying, deconstructing, reconditioning, and designing with reused materials is slower and more labor-intensive than with new ones. But another challenge is cultural: the materials that make up architecture are currently perceived as unmoving and single-use, with little consideration for their trajectories from raw resource to landfill. This thesis is focused on developing an aesthetic sensibility and design methodology that helps us re-envision materials as objects on a trajectory instead: Objectiles, or object-projectiles. Objectiles are objects on an adventure across space-time to collect as many uses as possible. Rather than remaining associated with one primary use, Objectiles are impressionable, bearing ambiguous traces of all the uses they encounter as they re-circulate. Through the aesthetic qualities that hint at their many uses, Objectiles invite us to time travel - to imagine the potential past and future narratives that may precede or follow their present physical state. Embedding the aesthetics of Objectiles into architecture can lead to the development of a new collective consciousness of the materials that surround us. They can make us aware that all the objects around us have trajectories that extend beyond their present state, and lead to an alternative material culture of greater care in how we use, re-circulate, and dispose of all objects.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Problem-Independent Regrets on Expectation-Dependent Multi-Armed Bandits</title>
<link href="https://hdl.handle.net/1721.1/163541" rel="alternate"/>
<author>
<name>Ai, Rui</name>
</author>
<id>https://hdl.handle.net/1721.1/163541</id>
<updated>2025-11-06T03:07:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Problem-Independent Regrets on Expectation-Dependent Multi-Armed Bandits
Ai, Rui
The independence axiom (IA) proposed by Von Neumann and Morgenstern [50] is the cornerstone of expected utility theory. However, empirical experiments show that the IA is often violated in the real world. We propose a new kind of multi-armed bandit problem, which we call expectation-dependent multi-armed bandits, where the expectation of outcomes may influence the agent’s utility, and use it to rationalize the choices of agents in Machina’s paradox, where the IA fails. We design provably efficient algorithms with low minimax regret and show that their dependence on the time horizon T matches corresponding regret lower bounds, revealing statistical optimality. Furthermore, as the first treatment of bandits whose utility depends on both realized outcomes and their expectations, this work provides a bridge between machine learning and economic behavior theory, shedding light on how to interpret counterintuitive economic scenarios such as the bounded rationality explored by Zhang et al. [54].
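For orientation (a classical baseline only; the expectation-dependent algorithms and their matching lower bounds are the thesis's contribution): standard UCB1 on ordinary bandits looks as follows.

    # UCB1 on ordinary (expectation-independent) Bernoulli bandits; the arm
    # means are toy values. Shown as the baseline the thesis generalizes.
    import math, random

    random.seed(0)
    means = [0.3, 0.5, 0.7]
    counts = [0, 0, 0]
    sums = [0.0, 0.0, 0.0]

    for t in range(1, 5001):
        if 0 in counts:
            arm = counts.index(0)      # initialization: play each arm once
        else:
            scores = [sums[i] / counts[i] + math.sqrt(2.0 * math.log(t) / counts[i])
                      for i in range(3)]
            arm = scores.index(max(scores))
        reward = 1.0 if means[arm] > random.random() else 0.0
        sums[arm] += reward
        counts[arm] += 1

    print("pulls per arm:", counts)    # the 0.7 arm should dominate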
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Differentially Private Synthetic Data Generation for Relational Databases</title>
<link href="https://hdl.handle.net/1721.1/163540" rel="alternate"/>
<author>
<name>Alimohammadi, Kaveh</name>
</author>
<id>https://hdl.handle.net/1721.1/163540</id>
<updated>2025-11-06T03:06:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Differentially Private Synthetic Data Generation for Relational Databases
Alimohammadi, Kaveh
Existing differentially private (DP) synthetic data generation mechanisms typically assume a single-source table. In practice, data is often distributed across multiple tables with relationships across tables. This study presents a first-of-its-kind algorithm that can be combined with any existing DP mechanism to generate synthetic relational databases. The algorithm iteratively refines the relationship between individual synthetic tables to minimize their approximation errors in terms of low-order marginal distributions while maintaining referential integrity; consequently, it eliminates the need to flatten a relational database into a master table (saving space), operates efficiently (saving time), and scales effectively to high-dimensional data. We provide both DP and theoretical utility guarantees for our algorithm. Through numerical experiments on real-world datasets, we demonstrate the effectiveness of our method in preserving fidelity to the original data.
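As a generic aside (standard DP machinery, not the relational refinement procedure itself): the marginal-preserving building block being coordinated across tables looks like this on a single column, using the Laplace mechanism.

    # Differentially private one-way marginal via the Laplace mechanism
    # (per-count sensitivity 1, budget epsilon); the column data are toy.
    import numpy as np

    rng = np.random.default_rng(0)
    ages = rng.integers(18, 90, size=1000)            # toy single-table column
    counts, edges = np.histogram(ages, bins=8)

    epsilon = 1.0
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(np.round(noisy), 0, None).astype(int)   # post-process to valid counts
    print(list(zip(edges[:-1].astype(int).tolist(), noisy.tolist())))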
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>VLEO-Bench: A Framework to Evaluate Vision-Language Models for Earth Observation Applications</title>
<link href="https://hdl.handle.net/1721.1/163539" rel="alternate"/>
<author>
<name>Zhang, Chenhui</name>
</author>
<id>https://hdl.handle.net/1721.1/163539</id>
<updated>2025-11-06T03:07:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">VLEO-Bench: A Framework to Evaluate Vision-Language Models for Earth Observation Applications
Zhang, Chenhui
Large Vision-Language Models (VLMs) have demonstrated impressive performance on complex tasks involving visual input with natural language instructions. However, it remains unclear to what extent capabilities on natural images transfer to Earth observation (EO) data, which are predominantly satellite and aerial images less common in VLM training data. In this work, we propose VLEO-Bench, a comprehensive evaluation framework to quantify the progress of VLMs toward being useful tools for EO data by assessing their abilities on scene understanding, localization and counting, and change detection tasks. Motivated by real-world applications, our framework includes scenarios like urban monitoring, disaster relief, land use, and conservation. We discover that, although state-of-the-art VLMs like GPT-4V possess extensive world knowledge that leads to strong performance on open-ended tasks like location understanding and image captioning, their poor spatial reasoning limits usefulness on object localization and counting tasks.
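As an illustrative aside (the benchmark's exact scoring may differ): localization-and-counting tasks are commonly scored with absolute error plus accuracy within a tolerance.

    # Toy scoring for a counting task: MAE and accuracy within 10% (or 1).
    pairs = [(12, 10), (3, 3), (48, 55), (0, 1)]   # (model count, true count), toy

    mae = sum(abs(p - t) for p, t in pairs) / len(pairs)
    hits = sum(1 for p, t in pairs if max(1.0, 0.1 * t) >= abs(p - t))
    print(f"MAE = {mae:.2f}, within-tolerance accuracy = {hits / len(pairs):.0%}")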
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>She Swims in Silence: Spatial Narrative, Women's labor in Contemporary Art</title>
<link href="https://hdl.handle.net/1721.1/163538" rel="alternate"/>
<author>
<name>Feng, Haozhen</name>
</author>
<id>https://hdl.handle.net/1721.1/163538</id>
<updated>2025-11-06T03:06:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">She Swims in Silence: Spatial Narrative, Women's labor in Contemporary Art
Feng, Haozhen
This thesis investigates the collective lives of Chinese women sent to Xinjiang in state-led migration after 1949 and the erasure of their gendered narratives. Drawing on a unique family history and archival evidence, the thesis reveals how the personal identities of these female “Aid to Xinjiang” participants were stripped away and subsumed under the grand socialist nation-building myth. Through practice-based artistic research, the project attempts to restore their lost voices and unacknowledged suffering and labor, framing the exhibition as a form of praxis. By analyzing the exhibition alongside case studies and critical analysis, the thesis, inspired by Bernard Stiegler’s theory of the “history of representational forms” and interwoven with ideas from philosophers like Judith Butler and Nicholas Mirzoeff, interrogates the gendered silences in official history and highlights the tension between state mythologies and personal memories. In doing so, the exhibition as an interdisciplinary form of research not only restores agency to a silenced group of women, but also demonstrates how artistic practice can serve as an alternative historiography to challenge dominant narratives and recover marginalized voices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of Universal Docking Solutions for Autonomous&#13;
Underwater Vehicles</title>
<link href="https://hdl.handle.net/1721.1/163537" rel="alternate"/>
<author>
<name>Pryal, Erik Jeffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/163537</id>
<updated>2025-11-06T03:06:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluation of Universal Docking Solutions for Autonomous&#13;
Underwater Vehicles
Pryal, Erik Jeffrey
Due to their energy-constrained nature, Autonomous Underwater Vehicles (AUVs) need effective docking and charging stations to extend their mission durations. However, diverse AUV designs challenge the universal compatibility of docking stations. This study provides a framework for understanding what makes a docking station universal and offers two potential solutions: the Tapered Funnel Docking Station and the Magnetic Hub Docking Station. The Tapered Funnel features a conical entry that progressively narrows to accommodate various AUV diameters. The Magnetic Hub passively secures the AUV using magnetic forces and an external appendage guided into position by a square duct. MATLAB simulations evaluate these two charging station designs for compatibility with AUVs, alignment capabilities, and docking efficacy under realistic conditions. Both designs are tested through Monte Carlo simulations to address varying AUV approach conditions, showcasing their potential as universally feasible solutions. Future exploration into material durability, sensor integration, and power transfer efficiency will refine these designs for field applicability. This research lays the groundwork for universal docking standards and proposes adaptable solutions to alleviate operational limitations in underwater missions.
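As a schematic aside (the capture envelope and error spreads are assumptions, not the thesis's designs): the Monte Carlo evaluation pattern amounts to sampling approach errors and counting captures.

    # Monte Carlo capture estimate for a funnel-style dock: sample lateral
    # offset and heading error, count approaches inside an assumed envelope.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000
    offset = rng.normal(0.0, 0.15, n)     # lateral offset, m (assumed spread)
    heading = rng.normal(0.0, 5.0, n)     # heading error, deg (assumed spread)

    ok = np.logical_and(0.30 > np.abs(offset), 12.0 > np.abs(heading))
    print(f"capture rate: {ok.mean():.1%}")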
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dowel-laminated timber from waste lumber offcuts: &#13;
Towards structural component circularity</title>
<link href="https://hdl.handle.net/1721.1/163536" rel="alternate"/>
<author>
<name>Blowes, Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/163536</id>
<updated>2025-11-06T03:06:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Dowel-laminated timber from waste lumber offcuts: &#13;
Towards structural component circularity
Blowes, Rachel
In the context of the global climate crisis, there is a need to develop low embodied carbon building systems. Moreover, construction and demolition generate substantial amounts of waste. The use of salvaged materials for structural applications presents the opportunity to divert this waste while reducing the embodied carbon of new structural components. This thesis proposes a typology for dowel-laminated timber (DLT) slabs built up from waste lumber offcuts. A mechanical model for a segmented DLT system composed of geometrically heterogeneous offcuts is developed. Prototypes of this mass timber system are fabricated and tested to observe their failure behavior and to evaluate the mechanical model. A computational workflow is introduced which employs algorithmic methods for inventory assignment and structural optimization to design slabs which meet deflection requirements under loading. These approaches are undertaken to evaluate whether DLT systems can leverage the irregularity of salvaged lumber dimensions to produce structurally efficient forms.
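As a worked aside (member sizes and loads are illustrative, not from the thesis's inventory): the deflection requirement screened per candidate slab is the familiar simply supported formula delta = 5*w*L^4/(384*E*I), checked against an L/360 limit.

    # Serviceability screen for one slab strip: uniform load, simple supports.
    L = 4.0            # span, m
    w = 2.0e3          # line load, N/m (assumed)
    E = 9.0e9          # lumber modulus, Pa (assumed)
    b, h = 0.6, 0.14   # strip width and depth, m (assumed)

    I = b * h ** 3 / 12.0                       # second moment of area
    delta = 5.0 * w * L ** 4 / (384.0 * E * I)  # midspan deflection, m
    print(f"deflection {delta * 1000:.1f} mm vs limit {L / 360 * 1000:.1f} mm")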
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Allopoietics in Real Time: Unfolding Among Art, Publics, Space, and Time</title>
<link href="https://hdl.handle.net/1721.1/163535" rel="alternate"/>
<author>
<name>Aubry, Vinzenz</name>
</author>
<id>https://hdl.handle.net/1721.1/163535</id>
<updated>2025-11-06T03:06:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Allopoietics in Real Time: Unfolding Among Art, Publics, Space, and Time
Aubry, Vinzenz
This thesis proposes a conceptual lens for understanding contemporary generative arts by introducing the terms Allopoietics and Liquid Media. Building on generative and participatory art, it focuses on the real-time processes among artworks, publics, spaces, and time through which meaning dynamically emerges. Drawing on the author’s artistic works—Conjunktion, Looking at the Sun, and Public Eyes—as well as critical engagement with hermeneutics, process philosophy, and media theory, this thesis explores how agency is distributed across these processes, offering a means to reconsider all elements as equally generative. Allopoietics, derived from cybernetics, describes the generative capacity of systems to produce outcomes beyond the sum of their actants, emphasizing collective unfolding over isolated creation. Liquid Media expands the notion of interfacing beyond traditional media to include publics, space, and time, conceptualizing these as mutable and entangled actants. These concepts outline an Aesthetics of Real Time that evaluates the dynamic relations among increasingly immediate systems. By proposing these new terms, the thesis invites a shift in perspective from object to process: viewing artworks not as stable materializations but as parts of real-time systems of collective meaning-making. While emerging from an artistic practice, this conceptual framework resonates with insights from contemporary sociology and cultural studies, where notions of fluidity, distributed agency, and relationality increasingly shape our understanding of complex systems and realities.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Functional in Vitro Model of the Neuromuscular Interface</title>
<link href="https://hdl.handle.net/1721.1/163532" rel="alternate"/>
<author>
<name>Schwendeman, Laura A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163532</id>
<updated>2025-11-06T03:06:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Developing a Functional in Vitro Model of the Neuromuscular Interface
Schwendeman, Laura A.
The neuromuscular system is responsible for the coordination of movement throughout the body, and while research has revealed many of the mechanisms involved in the function of the neuromuscular system, there are still many gaps in our understanding of how all of the components of the system work and how they are affected by environmental factors and disease. This work focuses on developing methods and an in vitro model for studying a subsystem of the neuromuscular system known as the neuromuscular junction (NMJ), which is the connection between skeletal muscle and motor neurons and is relevant in many neuromuscular degenerative diseases. This work identifies that current in vitro NMJ models collectively lack the ability to support long-term, functionally contractile muscle tissue while providing compartmentalization and clear optical access for live imaging of muscle and motor neuron co-cultures. This work therefore presents STAMP, a microgroove patterning method for creating aligned, more physiologically relevant, functional, and optically accessible skeletal muscle tissue cultures on top of fibrin hydrogels. Through investigating a series of different sizing parameters, STAMP is shown to effectively align mouse and human skeletal muscle monolayers in vitro and influence the direction of muscle contraction under electrical and optogenetic stimulation while preserving skeletal muscle tissue integrity and viability. The STAMP approach provides a way to mold hydrogels and the morphology of muscle tissue and will be beneficial for addressing the need for compliant and optically clear substrates in modeling the neuromuscular junction.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Turning and Turbulence: A Comparative Study of Agility and Fluid Mechanics in Men’s and Women’s Soccer</title>
<link href="https://hdl.handle.net/1721.1/163531" rel="alternate"/>
<author>
<name>Sonner, Jessica E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163531</id>
<updated>2025-11-06T03:06:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Turning and Turbulence: A Comparative Study of Agility and Fluid Mechanics in Men’s and Women’s Soccer
Sonner, Jessica E.
Female soccer players demonstrate high levels of agility but remain underrepresented in research and experience anterior cruciate ligament (ACL) tears two to eight times more frequently than their male counterparts [1]. These injuries are often associated with high-torsion movements at the knee, such as quick change-of-direction maneuvers in soccer [2]. To examine gender-based differences in agility, this study introduces an in-game metric based on change-of-direction speeds, derived from center-of-mass tracking data from the 2022 Men’s and 2023 Women’s FIFA World Cups. Results show that across positions, ball proximity, and game segments, female athletes tend to change direction both faster and more frequently than male athletes—supporting current injury hypotheses and informing gender-specific cleat design considerations. Beyond individual movement, this study also examines collective team behavior through a fluid mechanics lens. No significant gender differences were found in power spectral densities or second-order structure functions, suggesting symmetry in the underlying coordination dynamics. A direct cascade was observed in the 0–15 m range, indicating a consistent transfer of energy across spatial scales. Team dispersion and the Area-Dominant Spread Index correlated with structure function slopes, bridging spatial metrics with turbulence-based models of group behavior.
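As a computational aside (synthetic data stands in for the tracking speeds): the second-order structure function referenced above is the lagged mean-square increment of the speed signal.

    # S2(r): mean squared increment of a 1-D signal at lag r; the log-log
    # slope of S2 vs r is what the team-behavior comparison uses.
    import numpy as np

    rng = np.random.default_rng(3)
    u = np.cumsum(rng.normal(size=4096)) * 0.01   # toy speed series

    for lag in (1, 4, 16, 64):
        s2 = np.mean((u[lag:] - u[:-lag]) ** 2)
        print(f"r = {lag:3d} -> S2 = {s2:.5f}")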
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leaky Vessels</title>
<link href="https://hdl.handle.net/1721.1/163527" rel="alternate"/>
<author>
<name>Cong, Frank (Haotian)</name>
</author>
<id>https://hdl.handle.net/1721.1/163527</id>
<updated>2025-11-06T03:07:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Leaky Vessels
Cong, Frank (Haotian)
This thesis serves as a written synthesis of my art practice. It starts with Louis Pasteur’s swan neck flask, Robert Boyle’s air pump, the theater of proof, and cabinets of natural historians to discuss the intentional gesture of containment, exclusion, and controlled permeability in scientific containers and the knowledge production paradigm behind them. I argue that these containers possess another intrinsic gesture – to leak – that opens space for social and cultural dimensions to engage. I propose “leaky vessels” as an analytical tool and a methodology that foregrounds the tension between intentional and unintentional in order to attend to the issues of care, belief, and labor that arise within this dynamic. Chapter 2 develops the concept of “leaky” in three aspects – aesthetic intervention, historical residue, institutional sabotage – by analyzing art practices by Eve Andrée Laramée, Oron Catts and Ionat Zurr, Candice Lin, Maria Thereza Alves, Critical Art Ensemble, and Claire Pentecost. Each case demonstrates how alternative approaches to apparatuses can expose and unsettle the systems of control that govern knowledge authority, allowing seepage, contamination, and embodied histories to return to spaces designed to exclude them. Chapters 3 and 4 turn inward to examine my own art practice, Guardian and The Guarded (2024), RapidRise (2024), and Sweat Dough (2025). In Chapter 3, I discuss the experience of entering the biomaker space at MIT and cultivating animal cells in a pendant, interrogating how care, proximity, and cosmology might challenge the lab’s sterile and utilitarian logic. Chapter 4 discusses the other two projects that operate outside the lab, where I investigate how bodily entanglement with dough fermentation can leak into the broader context of food cultures, labor histories, and symbolic inheritance. Together, these chapters propose a practice that embraces contamination and relationality. Those that leak in and leak out are precisely where new layers of meaning reside.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamics of torpedo depth control</title>
<link href="https://hdl.handle.net/1721.1/163520" rel="alternate"/>
<author>
<name>Carleton, John Thomas.</name>
</author>
<id>https://hdl.handle.net/1721.1/163520</id>
<updated>2025-11-05T05:14:46Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Dynamics of torpedo depth control
Carleton, John Thomas.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1992; Includes bibliographical references (leaf 72).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of the engineering aspects of a wind tunnel magnetic suspension system</title>
<link href="https://hdl.handle.net/1721.1/163518" rel="alternate"/>
<author>
<name>Chrisinger, John Edvil.</name>
</author>
<id>https://hdl.handle.net/1721.1/163518</id>
<updated>2025-11-05T05:14:10Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">An investigation of the engineering aspects of a wind tunnel magnetic suspension system
Chrisinger, John Edvil.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1959; Includes bibliographical references (leaf 62).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Evaluation of Skill-Based Imitation Learning&#13;
Policies for Robotic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/163461" rel="alternate"/>
<author>
<name>Palleiko, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/163461</id>
<updated>2025-10-30T03:24:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Evaluation of Skill-Based Imitation Learning&#13;
Policies for Robotic Manipulation
Palleiko, Andrew
Imitation learning is a popular approach for obtaining intelligent robotic policies by learning from human demonstrations. Within this field, there is significant interest in the development of multi-task architectures that can efficiently learn diverse sets of tasks. Skill-based imitation learning methods, which abstract action sequences into "skill" representations for planning, offer structural advantages for handling the challenges of multi-task imitation learning that make them an attractive option for this problem. This work presents a novel skill-based imitation learning architecture formulation, with a causal transformer VAE skill-abstraction network paired with an autoregressive transformer planning policy. We find that our skill-abstraction network shows promise in identifying meaningful skills, but that the chosen planning architecture is poorly suited for predicting these skills due to multimodality in the resulting latent space. This is followed by a set of evaluations applied to an existing skill-based method with comparisons to a non-skill-based network on a multi-task dataset. We systematically investigate the performance impacts of six different policy and dataset conditions: data quantity, task variety, retry behavior, control precision, goal representations, and zero-shot transfer. Our experiments reveal limited increases in skill-based policy performance with more demonstrations or task variety, but improvements across architectures through exposure to demonstration retry behavior. Overall, the skill-based architecture demonstrates superior robustness to goal representation variations and low-level process noise compared with the non-skill-based policy, while neither architecture achieves meaningful zero-shot generalization to novel task combinations. These findings provide insights into the current state of IL methods, with the additional goal of establishing a framework for the evaluation of future multi-task IL architectures.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Multimodal Streaming Perception: A Real-Time&#13;
Perception Scheduling Framework Based on Relevance</title>
<link href="https://hdl.handle.net/1721.1/163460" rel="alternate"/>
<author>
<name>Huang, Dingcheng</name>
</author>
<id>https://hdl.handle.net/1721.1/163460</id>
<updated>2025-10-30T03:24:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Multimodal Streaming Perception: A Real-Time&#13;
Perception Scheduling Framework Based on Relevance
Huang, Dingcheng
In modern human-robot collaboration (HRC) applications, multiple perception modules jointly extract visual, auditory, and contextual cues to achieve comprehensive scene understanding, enabling the robot to provide appropriate assistance to human agents intelligently. While executing multiple perception modules on a frame-by-frame basis enhances perception quality and information gains in offline settings, it inevitably accumulates latency, leading to a substantial decline in system performance in streaming perception scenarios. Recent work in scene understanding, termed Relevance, has established a solid foundation for developing efficient methodologies in HRC. However, modern perception pipelines still face challenges related to information redundancy and suboptimal allocation of computational resources. Drawing inspiration from the relevance concept and the inherent sparsity of information in HRC events, we propose a novel lightweight perception scheduling framework that efficiently leverages output from previous frames to estimate and schedule necessary perception modules in real-time. Our experimental results demonstrate that the proposed perception scheduling framework effectively reduces computational latency by up to 27.52% compared to conventional parallel perception pipelines, while also achieving a 72.73% improvement in MMPose accuracy and comparable YOLO accuracy. Additionally, the framework demonstrates high keyframe accuracy, achieving rates of up to 98% in dynamic scenes. The results validate the framework’s capability to enhance real-time perception efficiency without significantly compromising accuracy. Additionally, the framework shows potential as a scalable and systematic solution for multi-modal streaming perception systems in human-robot collaboration.
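As a minimal sketch of the scheduling idea (module names, scores, and the threshold are hypothetical): each perception module runs only when the previous frame's outputs mark it relevant; otherwise its cached result is reused.

    # Relevance-gated scheduling in miniature: skip modules whose estimated
    # relevance is low and reuse their cached outputs from earlier frames.
    cache = {}

    def schedule(frame, relevance, threshold=0.5):
        # relevance: module name -> score estimated from prior frame outputs
        outputs = {}
        for module, score in relevance.items():
            if score > threshold:
                outputs[module] = f"{module} ran on frame {frame}"  # stand-in for inference
                cache[module] = outputs[module]
            else:
                outputs[module] = cache.get(module)  # None until the module has run once
        return outputs

    print(schedule(0, {"pose": 0.9, "detector": 0.2}))
    print(schedule(1, {"pose": 0.4, "detector": 0.8}))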
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fluid Sealing Challenges in Solid Oxide Electrolysis Cells and Rapid Swap Battery Systems</title>
<link href="https://hdl.handle.net/1721.1/163454" rel="alternate"/>
<author>
<name>Lindberg, Ian G.</name>
</author>
<id>https://hdl.handle.net/1721.1/163454</id>
<updated>2025-10-30T03:24:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fluid Sealing Challenges in Solid Oxide Electrolysis Cells and Rapid Swap Battery Systems
Lindberg, Ian G.
This thesis explores the design and development of several mechanical elements relevant to two technologies important to a global transition to green energy: hydrogen and electric vehicles. The portion of the thesis relating to hydrogen focuses on preloading mechanisms and high temperature seals, two design spaces crucial to the implementation of solid oxide hydrogen generation. Due to the high operating temperatures (600°C–800°C), seal materials commonly used in other applications are inadequate and glass- or vermiculite-based seals must be used. The delicateness of these seals makes them a common failure point, and consistent application of a preloading force is key to mitigating this. The concept of a variable-bypass piston is proposed as a preloading mechanism suitable for the high temperatures present inside solid oxide electrolyzer systems, and the development of seal geometries as well as flow characterization of porous steel wool seals to enable parametric design is documented. As an alternative to current sealing methods, initial development of a composite seal utilizing materials and manufacturing methods originating in the semiconductor industry was also conducted. The final section of the thesis proposes the concept and covers initial testing of fluid transfer through a kinematic coupling, a topic of potential interest for implementing liquid pack cooling in a system of rapidly swappable batteries for electric vehicles.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing the Performance of Skeletal Muscle Powered&#13;
Biohybrid Robots</title>
<link href="https://hdl.handle.net/1721.1/163453" rel="alternate"/>
<author>
<name>Bawa, Maheera</name>
</author>
<id>https://hdl.handle.net/1721.1/163453</id>
<updated>2025-10-30T03:24:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enhancing the Performance of Skeletal Muscle Powered&#13;
Biohybrid Robots
Bawa, Maheera
Skeletal muscle powers all voluntary motion in many living creatures, enabling behaviors such as walking, jumping, swimming, and flying. The field of biohybrid robotics aims to use biological actuators, such as skeletal muscle, to power adaptable robots that respond to their environment. Previous work in this field has focused on deploying 3D skeletal muscle tissues to power robotic function. In natural systems, muscles can also be organized in 2D formats to power a range of movements such as fish-like swimming and peristaltic pumping. However, long-lasting 2D cultures of skeletal muscle have been precluded by force-generating cells delaminating from their underlying substrate. Building on previous work from our lab demonstrating a method to culture contractile skeletal muscle in 2D formats, this work aims to enhance the performance of these systems by tuning substrate stiffness and topography. We show that optimizing system parameters prolongs actuator lifetime and enhances force by 100x.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Implementation of a Smart Factory for Educational Fiber Extrusion Device Production</title>
<link href="https://hdl.handle.net/1721.1/163452" rel="alternate"/>
<author>
<name>Fillon, Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/163452</id>
<updated>2025-10-30T03:24:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development and Implementation of a Smart Factory for Educational Fiber Extrusion Device Production
Fillon, Marie
This thesis presents the development and production of FrED (Fiber Extrusion Device), an educational manufacturing system designed to bridge the gap between theoretical instruction and hands-on practice in process control, computer vision, and smart manufacturing. Building on an existing prototype, this work focused on transitioning FrED from a proof-of-concept into a production-ready system by designing scalable workflows, improving hardware and software integration, and developing tools to ensure traceability and repeatability across builds. A major contribution of this thesis was the enhancement and implementation of a smart factory environment capable of supporting batch production. This included designing and deploying applications using Tulip Interfaces to manage inventory, guide subassembly processes, and monitor production metrics in real time. A modular SKU system and structured bin labeling framework were introduced to reduce errors, maintain version control, and support future growth. Station-specific apps were developed and refined to ensure consistent assembly and simplify onboarding across a rotating team of users. In parallel, this thesis contributed to the evaluation and refinement of a vision-based diameter measurement system using a low-cost USB camera. The system was analyzed under various operating conditions and its limitations under motion and variable lighting were quantified. Multiple image processing strategies were explored and robustness metrics were developed to inform future improvements. To ensure pedagogical relevance, the system was tested in user-facing workshops and public demo sessions. Feedback informed updates to both the assembly process and instructional content. By the end of the development cycle, the system supported the successful production of 35 complete FrED units, establishing a replicable model for small-scale manufacturing. This thesis demonstrates how modular digital infrastructure can enable scalable hardware deployment. It also highlights the practical challenges of transitioning from prototype to production and proposes tools and methods that can support broader adoption of smart manufacturing principles in learning environments.
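As a schematic aside (the production system must also handle motion and lighting, as noted above): the core of a vision-based diameter measurement is thresholding a scan line and converting pixels to millimeters with a calibration factor.

    # One-scan-line diameter estimate; the mm-per-pixel factor and the
    # synthetic line stand in for a calibrated camera and a real image.
    import numpy as np

    mm_per_px = 0.01                            # from a calibration target (assumed)
    row = np.full(640, 220, dtype=np.uint8)     # bright background (synthetic)
    row[300:345] = 40                           # dark fiber shadow, 45 px wide

    fiber_px = int(np.count_nonzero(128 > row))
    print(f"estimated diameter: {fiber_px * mm_per_px:.2f} mm")   # ~0.45 mm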
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Vibration in Omni-Wheels: A Design of Experiments Approach to Optimizing Omni-Wheel Selection</title>
<link href="https://hdl.handle.net/1721.1/163451" rel="alternate"/>
<author>
<name>Sanghai, Rohan S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163451</id>
<updated>2025-10-30T03:24:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analyzing Vibration in Omni-Wheels: A Design of Experiments Approach to Optimizing Omni-Wheel Selection
Sanghai, Rohan S.
Omni-wheels, known for enabling holonomic motion in robotic systems, often introduce vibration due to their complex geometry and multiple contact points. Unlike caster wheels with established testing standards, omni-wheels lack comprehensive characterization methods. While parallel studies by Ilkbahar [1] and Donnellan [2] explore their rolling resistance and static load capacity, a systematic analysis of vibration characteristics remains absent from the literature. This thesis presents an investigation of the vibration behavior of various omni-wheel designs using a Design of Experiments (DOE) approach. A full factorial experimental design was developed, considering factors such as wheel type, rotational speed, applied load, and wheel orientation angle. Individual regression models were developed for each of six wheel types, treating operational parameters as continuous variables. Vibration levels were measured using root mean square (RMS) acceleration, derived from Fast Fourier Transform (FFT) and Power Spectral Density (PSD) analyses of accelerometer data. Results show that rotational speed consistently increased vibration across all wheel designs, while lateral motion (90° angle) consistently reduced vibration compared to forward motion. The effect of applied load varied significantly between wheel designs, with some wheels showing reduced vibration under load while others remained unaffected. Wheels DZ(1) and Vex(5) demonstrated the lowest average vibration levels, though post-test inspection revealed trade-offs with durability, including roller deformation and material degradation. Interaction effects, particularly between angle and speed, were statistically significant for all wheel types, indicating that the benefits of lateral motion are enhanced at higher speeds. This research provides a framework for optimizing omni-wheel selection to minimize vibration by developing wheel-specific predictive models that quantify sensitivities and interaction effects across various designs and conditions, improving system performance and stability. The findings highlight that wheel selection must consider not only vibration performance but also trade-offs with durability and rolling resistance, establishing vibration characteristics as a critical consideration alongside other performance metrics when selecting omni-wheels.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing and Optimizing Magnetohydrodynamic Induction Marine Energy Harvester</title>
<link href="https://hdl.handle.net/1721.1/163449" rel="alternate"/>
<author>
<name>Scali, William T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163449</id>
<updated>2025-10-30T03:24:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Designing and Optimizing Magnetohydrodynamic Induction Marine Energy Harvester
Scali, William T.
Magnetohydrodynamic (MHD) power generation presents a promising approach for harvesting energy from marine environments, offering a sustainable alternative for powering naval assets and coastal infrastructure. While energy harvesting technologies are widely used in terrestrial and aerial applications, their implementation in marine environments remains limited. This thesis explores the feasibility of an MHD Inductive Marine Energy Harvester, optimizing its design for undersea naval applications to enhance energy efficiency and reduce carbon emissions while minimizing construction costs. A theoretical 2D model was developed based on Maxwell’s equations and Fourier analysis to characterize the physics governing MHD power generation in seawater. This model was extended to multiple concentric gaps on one device, refining predictions of power output under varying flow regimes. Numerical simulations using MATLAB enabled the evaluation of key parameters, including fluid conductivity, magnetic field strength, and shroud design, to optimize energy conversion efficiency. Furthermore, geographical and coastal tide analyses were conducted to determine optimal deployment locations, maximizing power extraction from natural marine currents. Economic viability was assessed through a cost-benefit analysis, comparing the energy yield per unit cost of the harvester against existing renewable energy technologies and other maritime power sources. Results indicate that under specific conditions, MHD generators can effectively supplement energy demands, reducing reliance on conventional fuel or other electrical power sources. The findings of this research contribute to the advancement of marine renewable energy technologies, demonstrating the potential of MHD induction-based harvesting as a scalable solution for sustainable power.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solar-Powered Critical Cooling: A Theoretical Feasibility Study for Human Thermal Regulation</title>
<link href="https://hdl.handle.net/1721.1/163448" rel="alternate"/>
<author>
<name>Hall, Jeff</name>
</author>
<id>https://hdl.handle.net/1721.1/163448</id>
<updated>2025-10-30T03:24:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Solar-Powered Critical Cooling: A Theoretical Feasibility Study for Human Thermal Regulation
Hall, Jeff
Over the last 50 years, the leading global environmental hazard has not been hurricanes, lightning, tornadoes, floods, or earthquakes, but extreme heat events. With climate models projecting an increase in the frequency, intensity, and duration of heatwaves in the coming decades, this threat to life is expected only to increase. Air conditioning has been demonstrated to reduce mortality during heatwaves, yet it uses an order of magnitude more energy than necessary to keep a human cool. Using principles of similitude to extrapolate the capability of existing vapor compression equipment, an objective function to maintain energy balance in a human exposed to extreme heat is developed across a design space. The function shows that in a standard forced convection air conditioning system, there is no opportunity to provide emergency cooling of a human due to the slow mass flow rate needed to cool air in a single stream. As such, status-quo attempts to cool humans with general-purpose air conditioning will always be an inefficient use of energy. By focusing on keeping people cool, not spaces, we propose three paths forward for critical human cooling that appropriately match the energy needs of humans: radiative cooling, liquid cooling devices, and low-mass-flow air conditioning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Behavioral Methods for Next-Generation Shipboard Power System Simulation: Letting SPARCS Fly</title>
<link href="https://hdl.handle.net/1721.1/163446" rel="alternate"/>
<author>
<name>Almquist, Ethan T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163446</id>
<updated>2025-10-30T03:24:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Behavioral Methods for Next-Generation Shipboard Power System Simulation: Letting SPARCS Fly
Almquist, Ethan T.
Design requirements on modern naval platforms are increasing the complexity and criticality of onboard electric plants. They form the backbone of warship operational capability and are at the heart of maritime decarbonization. Tasks such as assessing the ship's capacity in a damaged state, optimizing the mission profile of a fleet of vehicles, and evaluating broad design spaces in an efficient manner are increasingly difficult as electric network complexity increases. Traditional modeling techniques are either too computationally expensive, or lack the fidelity necessary to produce meaningful insights into the electric network's operation. Behavioral modeling bridges this gap, but is underdeveloped to support the system architectures of tomorrow's ships. This work details the advancement of behavioral modeling of electrical systems to incorporate hybrid AC/DC and ring bus architectures, the development of parallelization techniques, and SPARCS: a software package offering Shipboard Parallelized Analytics with a Rapid Configuration Simulator.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multimodal Non-Contact Sensing of Neonatal Vital Signs Using Radar and Video</title>
<link href="https://hdl.handle.net/1721.1/163443" rel="alternate"/>
<author>
<name>Chityat, Inbar</name>
</author>
<id>https://hdl.handle.net/1721.1/163443</id>
<updated>2025-10-30T03:24:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multimodal Non-Contact Sensing of Neonatal Vital Signs Using Radar and Video
Chityat, Inbar
Preterm neonates represent a vulnerable population for whom traditional contact-based monitoring devices are not optimized, given their small size and complicated physiology. Adhesive sensors and wires can cause infections, discomfort, and impair the delivery of clinical care. Therefore, these most fragile patients could significantly benefit from remote health monitoring. This thesis establishes the foundation for a multimodal device designed for non-contact monitoring of neonates in the Neonatal Intensive Care Unit (NICU) that integrates a video camera and a radar. The device is used to estimate vital signs such as respiratory rate (RR), using both unimodal (solely video or radar) and multimodal fusion approaches that combine data from both sensors. Preliminary testing was conducted on neonatal simulator mannequins, followed by a clinical study at Tufts Medical Center NICU which has collected data from 16 neonates so far (with the goal of reaching 20). The collected data was processed, labeled, and organized using image processing techniques and manual review, and then analyzed using a Video Vision Transformer (ViViT) architecture, incorporating early, intermediate, and late fusion strategies. Initial analysis was conducted on the mannequin data and the first neonatal subject. The results show that for estimating RR in neonates, the early fusion approach outperformed the unimodal methods. In movement detection, compared to human labeling, the fusion techniques achieved high accuracy and precision. To conclude, this study demonstrates that multimodal analysis has the potential to outperform unimodal approaches by improving accuracy against gold standard monitoring, particularly in challenging real-life conditions, including motion artifacts and poor lighting. This work represents a step toward more robust, non-invasive monitoring solutions for neonatal care, with implications for broader applications in remote health monitoring.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incubators for Species which Exhibit Temperature-Dependent Sex Determination: Application to Hawksbill Sea Turtles in Rising Ambient Temperatures</title>
<link href="https://hdl.handle.net/1721.1/163442" rel="alternate"/>
<author>
<name>Finlason, Katana R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163442</id>
<updated>2025-10-30T03:24:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Incubators for Species which Exhibit Temperature-Dependent Sex Determination: Application to Hawksbill Sea Turtles in Rising Ambient Temperatures
Finlason, Katana R.
As global ambient temperatures continue to rise, with the highest recorded annual averages since 1850 being within the last ten years, problems emerge for species exhibiting temperature-dependent sex determination. This is the process by which the sex of an animal’s embryo is determined by the temperature of the environment in which it is incubated, which can result in skewed sex ratios within a population, as in the case of the critically endangered Hawksbill Sea Turtle (Eretmochelys imbricata). Reportedly, 85-95% of Hawksbills sampled in the wild are currently female [3]. This sex imbalance can negatively impact the species’ ability to procreate, leading to the potential for extinction. Currently, no viable, long-term solutions exist to effectively and safely cool sea turtle eggs while still keeping them within their natural habitat. This research proposes the creation of sea turtle egg incubators designed to achieve a temperature range that will produce a higher percentage of male hatchlings to help rectify this imbalance in habitats heavily affected by climate change. These incubators are designed to be affordable, easy to build and, most importantly, safe for the sea turtle eggs. Three-month-long temperature trials for the incubator were conducted in Jamaica with conservationist community partners at the Oracabessa Bay Sea Turtle Project. Results showed that this incubator is not only easy to manufacture and use, but that it successfully regulates the temperature range in favor of more male hatchlings, while also increasing the emergence rate of the hatchlings from 70% in natural nests to over 80%. During one of the hottest months in Jamaica, the incubator, deployed without water changes, doubled the predicted percentage of males produced by natural nests. When provided with cool water changes, the incubator quintupled this value. Throughout the months of August to October, the incubator achieved a temperature range that is predicted to produce 85-99% male hatchlings, thus counteracting the feminization phenomenon occurring in nature.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimal Constraint and Precision Placement in Cyclic Testing of High Life-Cycle Products</title>
<link href="https://hdl.handle.net/1721.1/163441" rel="alternate"/>
<author>
<name>Edington, David J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163441</id>
<updated>2025-10-30T03:24:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Minimal Constraint and Precision Placement in Cyclic Testing of High Life-Cycle Products
Edington, David J.
In the electrification of heavy industry, rapidly swappable batteries provide an effective means to minimize vehicle downtime and the cost of operation. However, for this technology to take hold, electrical contacts that can both pass high amperage and endure a high cycle life require further development. The development of these electrical contacts is a highly experimental process, and thus establishing a method and test equipment to determine the physical and electrical characteristics of these contacts over their lifetime will allow for the accelerated development of these products. This body of work serves as a design guide to establish a physical testing mechanism to assess contact resistance degradation and physical wear over the lifespan of an electric connector. Data will then be collected on initial contact prototypes to characterize their performance. With this data, designs may be iterated and improved upon in pursuit of creating a universal standard for battery swap technology on electric vehicles in heavy industry.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distinct roles for energy storage and transmission infrastructure in a renewables-based electric power system</title>
<link href="https://hdl.handle.net/1721.1/163439" rel="alternate"/>
<author>
<name>Kim, Beomjun</name>
</author>
<id>https://hdl.handle.net/1721.1/163439</id>
<updated>2025-10-30T03:24:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Distinct roles for energy storage and transmission infrastructure in a renewables-based electric power system
Kim, Beomjun
Due to the intermittency of renewable resources, achieving a high coverage of renewable generation at low cost is one of the main hurdles to realizing zero-carbon electricity generation. In this study, we analyze the roles of energy storage systems (ESS) and transmission infrastructure in the cost-optimal deployment of a renewable electricity grid in the United States. We find that storage and transmission serve distinctly different functions: transmission is useful for addressing hours-long resource lows, but only plays a supplementary role in mitigating long-duration resource lows. Conversely, storage can handle both short-duration and long-duration resource lows. These different functions are driven in part by the large spatial footprints of the most extreme long-duration resource lows. Furthermore, the total cost of renewable energy in the system and the cost-determining technological components in the system depend on the penetration of renewables relative to total demand, known as the energy availability factor (EAF). When the EAF is sufficiently low, the cost of a cost-optimized system is driven solely by generation costs. For low to intermediate EAF, both generation and transmission costs are dominant factors. At high EAF, generation and storage costs become the dominant factors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wedged Vortex Generator Applications for Marine Vessels</title>
<link href="https://hdl.handle.net/1721.1/163438" rel="alternate"/>
<author>
<name>Kimmeth, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/163438</id>
<updated>2025-10-30T03:24:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Wedged Vortex Generator Applications for Marine Vessels
Kimmeth, Jack
This thesis investigates the effectiveness of vortex generators (VGs) in reducing viscous drag in hydrodynamic applications. Initial experimental and computational fluid dynamics analyses identified wedge-shaped VGs as the optimal design for flow manipulation. Comparative testing of three wedge-shaped VG sizes at 1.3 m/s revealed the most effective configuration, which was subsequently evaluated across speeds ranging from 1.0 m/s to 1.6 m/s. The results showed a viscous drag reduction of 7.9% at 1.4 m/s. These findings were extrapolated to a full-scale bulk carrier using appropriate geometric and dynamic scaling factors. Total resistance was partitioned using Holtrop-Mennen approximations, allowing the drag reduction to be realistically applied to operational conditions on a trans-Pacific route. Material and installation cost estimates were also developed. Finally, implications for propulsion efficiency, flow-induced vibrations, and cavitation are discussed, with recommendations for future self-propelled model testing to further explore these effects.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prosody in Kichwa</title>
<link href="https://hdl.handle.net/1721.1/163437" rel="alternate"/>
<author>
<name>Chango Masaquiza, Soledad</name>
</author>
<id>https://hdl.handle.net/1721.1/163437</id>
<updated>2025-10-30T03:24:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Prosody in Kichwa
Chango Masaquiza, Soledad
This thesis investigates the prosodic system of Salasaka Kichwa, focusing on the interaction between pitch, morphosyntactic structure, and word order in both elicited and spontaneous speech. Based on data from ten native speakers of the Salasaka community, the study analyzes approximately 150 utterances using Praat and ToBI-style prosodic annotation. The findings reveal a consistent alignment between the nuclear pitch accent and the leftmost constituent of the verb phrase in neutral declarative sentences, supporting the hypothesis that Salasaka Kichwa exhibits a head-final syntactic structure. This default prosodic alignment is disrupted by the presence of focus-sensitive or interrogative morphemes such as -mi and -chu, which reliably attract the pitch peak regardless of their position in the clause. In ditransitive constructions, pitch prominence consistently targets the dative-marked argument. Accusative-marked objects also receive prominence, but only when modified; in such cases, it is typically the modifying adjective or contrastive element that bears the highest pitch. Overall, the study demonstrates that prosodic prominence in Salasaka Kichwa is not governed by syntactic structure alone. Instead, it emerges from a layered interaction between morphology, information structure, and pragmatic marking, offering new insights into how prosody encodes grammatical and communicative functions in underdescribed head-final languages.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model Predictive Control Approaches for Dynamic Table Tennis Swinging</title>
<link href="https://hdl.handle.net/1721.1/163436" rel="alternate"/>
<author>
<name>Nguyen, David H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163436</id>
<updated>2025-10-30T03:24:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Model Predictive Control Approaches for Dynamic Table Tennis Swinging
Nguyen, David H.
This thesis presents three model predictive control (MPC) formulations for robotic table tennis swinging, addressing the challenge of generating precise, real-time paddle trajectories for dynamic ball interactions. We explore key differences in optimization structure, solver strategy, and real-time implementation, evaluating each approach through hardware experiments that measure strike condition tracking and hit success. The final controller integrates the full task of a table tennis possession by planning the return ball trajectory through the contact dynamics, and generating a swing to achieve it. This controller improves the hit rate of the system from 88.3% to 97.6% and significantly enhances strike condition accuracy and smoothness, enabling control over the landing location and spin of the ball.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling the Impact of Helicopter Vibrations on the Musculoskeletal Health of US Army Blackhawk Helicopter Pilots</title>
<link href="https://hdl.handle.net/1721.1/163434" rel="alternate"/>
<author>
<name>Johnston, Julie E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163434</id>
<updated>2025-10-30T03:24:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modeling the Impact of Helicopter Vibrations on the Musculoskeletal Health of US Army Blackhawk Helicopter Pilots
Johnston, Julie E.
The UH-60, used for troop transport, MEDEVAC, and mission control, has evolved over the last 45 years from the Alpha model to the Lima and Mike models that are currently utilized. Previous studies investigated the impact of Whole-Body Vibrations (WBV) on aviators and the resulting musculoskeletal injury, but none have investigated the efficacy of the Mike model’s Active Vibration Control System (AVCS) in reducing the impact of helicopter vibrations on musculoskeletal health.
Computational analyses of a biomechanical model were conducted using OpenSim and motion capture at varying levels of vibration. These analyses quantify the response of the spine and the surrounding muscles when vibratory loads are applied while the aviator is positioned to manipulate the flight controls. A musculoskeletal model was developed to represent the aviator in the seated posture required to effectively manipulate the flight controls. To develop the model, the team recorded motion capture data with a pilot in a pilot test for concept validation. This data was then processed and input into the OpenSim inverse kinematics tool to determine joint angles and to demonstrate the muscle-tendon lengths of several muscles in the back. Contrary to initial predictions, the muscles on the right side of the back were not consistently longer than those on the left side.
A survey was also developed that builds upon previous efforts, seeking to understand the aviator’s perspective on musculoskeletal injury and prevention, with a focus on the back. Aviators are asked to describe the cause of their injury, methods of injury prevention, and recovery techniques, encompassing numerous subpopulations of flight experience: Lima-majority, Mike-only, Mike-majority, and an even mixture of L/M. The data attempts to characterize the impact of the AVCS on aviator spine health. The AVCS should decrease the rate of injury by reducing the vibratory loads experienced by the aviator. This survey differs from previous questionnaires in that it focuses on the user’s perspective of differences between the two models, and the injury or pain felt by each service member.
While a trend of reduced injury occurrence was expected amongst Mike-only aviators versus those with Lima-majority flight hours, this was not the case. Injury prevalence was consistent across most populations, indicating the potential inefficacy of the AVCS. Analysis of open-ended responses, particularly from the hybrid group, provides some context for the perceived impacts of using the AVCS. Some population demographics were not represented in this survey due to the nature of the unit being surveyed, which may impact the validity of some results.
By quantifying the perceived efficacy of the AVCS as it relates to chronic musculoskeletal injury using a survey of pilot experience factors (flight hours, airframes, operating theatres, etc.), and by representing the maladaptive posture of the pilots with a computational simulation based on experimental pilot data, a fuller picture is developed of the risks to the near- and long-term health of US Army aviators. The aim is to expand the overall understanding of how vibration impacts the musculoskeletal health of aviators and their perceived impact on lifelong health from the profession. The ultimate goal is to aid in the design of additional countermeasures to improve aviator spine health and to serve as a platform for the optimization of systems like the AVCS.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrodynamic Behavior of Pop-Up Satellite Archival Tags (PSAT) Subject to Vortices</title>
<link href="https://hdl.handle.net/1721.1/163432" rel="alternate"/>
<author>
<name>Hoo, Stephanie</name>
</author>
<id>https://hdl.handle.net/1721.1/163432</id>
<updated>2025-10-30T03:24:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hydrodynamic Behavior of Pop-Up Satellite Archival Tags (PSAT) Subject to Vortices
Hoo, Stephanie
Pop-up Satellite Archival Tags (PSATs) are a combination of satellite and archival tags used by marine biologists to collect large scale movement and behavioral data of large pelagic life for up to two years [1]. However, current commercial PSATs have an unusually high failure rate when tagged on tuna and cost upwards of $4000, making it both difficult and expensive to collect data [14]. Upon investigation, the top two failure modes of tuna-affixed PSATs have been identified as drag from movement/tissue healing and pressure cycling [14]. Current commercial PSAT manufacturers do not account for the vortices shed by fish when testing their designs, a large oversight that could account for their high failure rate [15]. The work herein determined the effects of vortex shedding on PSAT hydrodynamic behavior, used these results to inform the design of novel PSAT body shapes, and conducted a head-to-head comparison of these designs with existing commercial PSATs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combating Corrosion and Monitoring Microgrids on Coast Guard Patrol Boats</title>
<link href="https://hdl.handle.net/1721.1/163431" rel="alternate"/>
<author>
<name>Buchanan, Maxwell Calvin</name>
</author>
<id>https://hdl.handle.net/1721.1/163431</id>
<updated>2025-10-30T03:24:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Combating Corrosion and Monitoring Microgrids on Coast Guard Patrol Boats
Buchanan, Maxwell Calvin
Marine corrosion presents a persistent threat to the reliable operation of U.S. Coast Guard Fast Response Cutters (FRCs). This thesis investigates hybrid cathodic protection strategies combining impressed current cathodic protection (ICCP) systems and sacrificial zinc anodes to combat corrosion on such vessels. Observing over 550 cumulative months of ICCP system data across 46 FRCs, this thesis identifies operational trends, failure modes, and unique regional behaviors. To validate observed patterns and explore failure scenarios, the study implements finite element modeling using COMSOL Multiphysics. These simulations replicate normal operation, reference electrode failure, propeller passivation, localized zinc loss, and hull coating failure for both a generic 35m hull and the FRC hull. These models emphasize how system behavior responds to material variations, temperature, and system health, offering a diagnostic framework for optimizing ICCP configurations. Field and laboratory experiments further ground the computational findings. These include shipboard hull potential surveys and analysis of zinc anode wastage across multiple cutters. Controlled experiments on nickel aluminum bronze (NAB) passivation using miniaturized ICCP test systems are explored for further study. Initial results show variation in zinc consumption and corrosion behavior depending on ICCP setpoints, with higher protection levels (-1050 mV) often correlating with reduced zinc depletion. The thesis also explores energy diagnostics onboard FRCs via non-intrusive load monitoring (NILM). A case study on the USCGC WILLIAM CHADWICK describes monitoring auxiliary machinery loads through NILM signatures and suggests expansion to critical panels and DC systems. By integrating fleet data, physical experimentation, and simulation, this thesis advances future efforts in patrol boat corrosion monitoring, ICCP optimization, and resilient microgrid management.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incorporating High-Resolution Tactile Perception for Performative and Generalized Robotic Manipulation Through Compliance Estimation and Hardware Design</title>
<link href="https://hdl.handle.net/1721.1/163430" rel="alternate"/>
<author>
<name>Burgess, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/163430</id>
<updated>2025-10-30T03:24:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Incorporating High-Resolution Tactile Perception for Performative and Generalized Robotic Manipulation Through Compliance Estimation and Hardware Design
Burgess, Michael
In robotics, replicating the natural proficiency with which humans perform manipulation tasks has proven challenging. Modern control schemes are predominantly learning-based and thus depend heavily on data collected via teleoperated demonstrations. Humans rely on our tactile perception to perform contact-rich and dynamic manipulation tasks. By more seamlessly incorporating high-resolution tactile sensing and haptic feedback into teleoperation interfaces, we can work to create stronger demonstration data to support the development of more effective learned control policies. In this thesis, we present two contributions toward this goal. First, we develop an algorithm to estimate the compliance of grasped objects in real-time from tactile images to provide haptic feedback to remote users. This algorithm combines both analytical and learning-based approaches to better generalize across both object shapes and materials. Second, we create a 1-DoF robotic gripper design with integrated tactile sensing. Inspired by the principle of self-similarity, this gripper is designed to better conform to complex object geometries than traditional designs and more securely grasp objects of many shapes and sizes. Together, these contributions can be utilized to create robust, tactile-aware teleoperation platforms. These platforms would facilitate more effective data collection and thereby promote the development of more performative autonomous action in generalized robotic manipulation scenarios.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Net Climate Impact of AI: Balancing Current Costs with Future Climate Benefits</title>
<link href="https://hdl.handle.net/1721.1/163428" rel="alternate"/>
<author>
<name>Turliuk, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/163428</id>
<updated>2025-10-30T03:23:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Net Climate Impact of AI: Balancing Current Costs with Future Climate Benefits
Turliuk, Jennifer
What is the net impact of artificial intelligence on climate change? Existing studies focus on AI's footprint, but few analyze AI's trade-offs. This paper develops a framework to quantify both the Greenhouse Gas (GHG) emissions and the climate change costs and benefits of AI systems, addressing the time value of carbon and the installed base of existing AI infrastructure. We examine the energy demands of AI, which are growing rapidly and threatening companies' net-zero commitments, while also analyzing AI's potential to enable emissions reductions through applications such as optimized energy systems, demand response, grid management, and electrification acceleration. This research introduces the Net Climate Impact Score (NCIS) of AI, a novel equation to calculate the net climate impact of AI technologies that considers both immediate emissions and potential future benefits, and provides a methodology for assessing AI projects holistically. We demonstrate that while current AI applications are predominantly emissions-intensive, strategic deployment focused on energy system transformation could potentially deliver net climate benefits within specific time frames and applications. However, improvements in energy efficiency and emissions reductions resulting from AI are, absent climate policy, likely to generate both direct and indirect rebound effects that could undermine the emissions reductions and reduce the climate benefits of AI. The research concludes with policy and industry recommendations that propose technological pathways that could maximize AI's positive impact while minimizing its environmental footprint.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wide Range Switched Mode RF Power Amplifiers and their applications</title>
<link href="https://hdl.handle.net/1721.1/163427" rel="alternate"/>
<author>
<name>Pressel, Adam Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/163427</id>
<updated>2025-10-30T03:24:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Wide Range Switched Mode RF Power Amplifiers and their applications
Pressel, Adam Jay
Switched-mode power amplifiers (SMPAs) that can operate across a wide range of power levels and load impedances with fast response speed while maintaining high efficiency are desired. Such designs would be valuable for many applications including plasma generation and wireless power transfer. We introduce a new wide-range SMPA architecture that provides direct output voltage modulation, enabling it to modulate output power and compensate for resistive load variations. Dynamic frequency modulation is leveraged to address reactive load variations. The new architecture enables all the semiconductor switches to maintain zero-voltage switching across all operating conditions. Experimental results show that the wide-range half-bridge power amplifier was able to deliver a wide power range of 25 W - 95 W across each individual resistive load in the range of 5 Ω - 20 Ω with up to j15 Ω of reactance. The maximum dc-ac efficiency is 86% with a 20 Ω load and 110.5 W load power.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tension-Leg Platform for Offshore Diffusor-Augmented Hydrokinetic Turbine</title>
<link href="https://hdl.handle.net/1721.1/163426" rel="alternate"/>
<author>
<name>Mannier, Robert B.</name>
</author>
<id>https://hdl.handle.net/1721.1/163426</id>
<updated>2025-10-30T03:24:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Tension-Leg Platform for Offshore Diffusor-Augmented Hydrokinetic Turbine
Mannier, Robert B.
Harnessing marine energy offers significant potential for advancing clean and sustainable power generation. This thesis focuses on the design and optimization of a diffuser-augmented hydrokinetic turbine, supported by a tension-leg platform, to harness ocean and tidal currents for renewable energy production. By incorporating diffuser technology, the turbine’s efficiency is enhanced, increasing the coefficient of power and enabling effective energy capture even in environments with lower current speeds.
The research involves 2D and 2D axisymmetric modeling of the diffuser and turbine using Actuator Disk Theory (ADT), with tools such as Rhino and Star CCM+. Mounted on a floating tension-leg platform anchored to the seabed, the turbine is designed to exceed the Betz limit, maximizing power output and advancing offshore energy harvesting capabilities.
This thesis is solely focused on the design and optimization of the hydrokinetic turbine, providing an in-depth analysis of diffuser performance. The findings contribute to the development of marine renewable energy technologies, promoting sustainable and efficient power generation from ocean and tidal currents.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Application of an Output-based Adaptive, Higher-order Finite Element Method to Sonic Boom Propagation</title>
<link href="https://hdl.handle.net/1721.1/163422" rel="alternate"/>
<author>
<name>Trono Figueras, Renato</name>
</author>
<id>https://hdl.handle.net/1721.1/163422</id>
<updated>2025-10-30T03:23:48Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">On the Application of an Output-based Adaptive, Higher-order Finite Element Method to Sonic Boom Propagation
Trono Figueras, Renato
The reduction of sonic boom loudness to within acceptable limits is a crucial factor for the viability of supersonic aircraft. This thesis presents a computational framework for simulating sonic boom propagation using an output-based adaptive, higher-order finite element method. The research employs the Variational Multiscale with Discontinuous Subscales (VMSD) method, integrating Continuous Galerkin (CG) and Discontinuous Galerkin (DG) features, referred to as VMSD-BR2. This approach leverages static condensation to manage computational cost while utilizing DG stabilization techniques for enhanced stability and adjoint consistency. A key component of this work is the application of the dual weighted residual (DWR) method for output error estimation, which in turn drives the mesh optimization process. The method’s efficacy is validated using smooth solutions for the viscous Burgers equation and the adjoint PDE for a volume output functional. Additionally, artificial viscosity is incorporated via a shock sensor PDE approach to handle shock presence, with necessary corrections applied to the DWR error estimate. The VMSD-BR2 method is then applied to a real-world scenario, solving the augmented Burgers equation, which models the propagation of sonic booms. The results include the pressure perturbation field, adapted meshes, ground-level B-SEL filtered pressure, and perceived loudness at ground level, demonstrating the method’s practical application.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>C. elegans as a Platform for Multimodal Neural Data Integration</title>
<link href="https://hdl.handle.net/1721.1/163421" rel="alternate"/>
<author>
<name>Simeon, Quilee</name>
</author>
<id>https://hdl.handle.net/1721.1/163421</id>
<updated>2025-10-30T03:24:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">C. elegans as a Platform for Multimodal Neural Data Integration
Simeon, Quilee
Systems neuroscience has traditionally been fragmented into investigations at discrete levels of organization, creating methodological and conceptual gaps that hinder unified understanding of neural function. This thesis examines the nematode Caenorhabditis elegans as a platform for integrating diverse neural data modalities, offering a pathway to bridge these gaps. The hermaphrodite C. elegans, with its completely mapped connectome, optical transparency, genetic tractability, and stereotyped nervous system of only 302 neurons, presents an opportunity for comprehensive measurements across multiple dimensions of neural function. The review is organized around three fundamental neural data modalities accessible in C. elegans: (1) molecular genetic profiles, (2) network connectivity, and (3) neural activity dynamics. Historically studied in isolation, these complementary data types are increasingly being bridged through technological and computational innovations. We examine experimental advances enabling whole-nervous-system measurements of these modalities, as well as data standardization efforts and computational frameworks for cross-modal integration. While understanding the relationship between neural activity and behavior remains a fundamental goal of systems neuroscience, this thesis focuses on neural data acquisition and integration rather than behavioral analysis, which has been extensively covered elsewhere [1]. We conclude with some original proposals to overcome current limitations in multimodal data acquisition and synthesis, and suggest future directions toward a holistic understanding of how molecular components, network connectivity, and cellular physiology collectively give rise to neural function in C. elegans. These integrative approaches establish a roadmap that may eventually scale to more complex nervous systems and advance our understanding of neural computation across species.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Nickel Short: Rethinking Element Scarcity in Pursuit of a Fusion-Powered World</title>
<link href="https://hdl.handle.net/1721.1/163420" rel="alternate"/>
<author>
<name>Sutcliffe, Douglas</name>
</author>
<id>https://hdl.handle.net/1721.1/163420</id>
<updated>2025-10-30T03:24:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Nickel Short: Rethinking Element Scarcity in Pursuit of a Fusion-Powered World
Sutcliffe, Douglas
Fusion energy presents a promising solution for current global decarbonization goals. This thesis presents an adaptable model for evaluating mineral sufficiency in the global deployment of fusion power. Using the ARC Magnetic Confinement (MC) Deuterium-Tritium (D-T) fusion concept as a framework, this research integrates mineral usage estimates from the International Energy Agency (IEA) with MIT Energy Initiative’s (MITEI) energy production forecasts by generation technology. Using MITEI’s $2,800/kW cost scenario for fusion power generation, the model situates the demand for fusion-critical minerals within the broader context of growing mineral needs driven by the clean energy transition, and offers specific, quantitative insights into mineral sufficiency risks. The study finds that beryllium will face significant shortages solely due to fusion demand, with resource exhaustion projected to occur within 40 years. When accounting for additional demands from Electric Vehicles (EVs), battery storage, and transmission infrastructure, chromium and nickel are projected to exhaust economically extractable reserves within 21 to 35 years at current prices. The research further reveals that for nine of the thirty elements evaluated, over 50% of production is concentrated in a single country, and for half of the minerals China is the largest producer, introducing geopolitical risks. Notably, at just 13 kg per reactor, the demand for Rare Earth Elements (REEs) is not exposed to a significant risk, even without the top producing country. The research also surfaces current reactor designs and strategies which could help mitigate each identified risk.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title/>
<link href="https://hdl.handle.net/1721.1/163419" rel="alternate"/>
<author>
<name>Espinal, Michael A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163419</id>
<updated>2025-10-30T03:23:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">
Espinal, Michael A.
Foams, widely used in packaging, insulation, protective gear, and medical implants, are versatile materials but mechanically inefficient due to their bending-dominated microstructure, leading to an exponential loss of stiffness and strength at low relative densities. Architected materials address this limitation through engineered microstructures that achieve near-linear scaling of properties with relative density. However, truss- and plate-based designs suffer from stress concentrations, while shell-based architectures, though more mechanically efficient, remain highly sensitive to defects and are challenging to fabricate at scale via additive manufacturing. Spinodal architected materials, derived from scalable spinodal decomposition processes, offer a promising alternative with aperiodic, double-curvature microstructures that enhance mechanical efficiency at low relative densities. Nevertheless, their behavior beyond the elastic regime remains largely unexplored. This thesis investigates the nonlinear mechanics of spinodal architected materials by combining a comprehensive experimental dataset with computational modeling. A total of 107 unique morphologies were fabricated and subjected to uniaxial compression along three principal directions, resulting in a dataset of 321 stress-strain curves. Morphologies were generated via simulated spinodal decomposition, allowing controlled variation of anisotropy. Explicit finite element simulations, validated against experimental data, revealed that plastic energy dissipation dominates the large-strain mechanical response. To quantitatively link local morphology to global mechanical behavior, we introduce the Normal Participation Factor (NPF) — a scalar geometric parameter that captures the alignment between surface normals and the loading direction. We demonstrate that the NPF is a material-agnostic proxy for equivalent plastic strain and is linearly correlated with the total energy dissipated during deformation. Combining insights from both experiments and simulations, we establish the NPF as a first-order predictive tool for mechanical behavior under large strains, enabling structure-property predictions without reliance on costly simulations or extensive experimental testing. Altogether, this work lays the foundation for developing finite-strain structure-property relationships in spinodal architected materials, advancing their potential for real-world applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mapping Informality: An Approach to Classifying Sidewalk Informal Practices and Elements Through Street View Imagery</title>
<link href="https://hdl.handle.net/1721.1/163415" rel="alternate"/>
<author>
<name>Co, Dominic Lim</name>
</author>
<id>https://hdl.handle.net/1721.1/163415</id>
<updated>2025-10-30T03:23:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mapping Informality: An Approach to Classifying Sidewalk Informal Practices and Elements Through Street View Imagery
Co, Dominic Lim
By 2050, the United Nations estimates that 68 percent of the world’s population will live in cities, with 90 percent of that growth concentrated in rapidly urbanizing informal communities across Africa, Latin America, and Asia. In these contexts, informality, defined as unregulated commerce, adaptive reuse of space, incremental construction, and self-organized infrastructure, shapes the everyday choreography Jane Jacobs called the “sidewalk ballet.” Yet because governments rarely collect census-grade data on such activity, informality remains poorly documented and weakly understood. This thesis introduces a transferable computational framework to formalize informality by transforming street imagery into an auditable taxonomy of informal street-level elements, activities, and practices. The framework is tested in two contrasting districts, District 1 and District 5 of Ho Chi Minh City, where sidewalks are highly contested by vendors, pedestrians, and regulators. The contribution of this thesis is two-fold. First, this thesis contributes a three-stage pipeline for classifying sidewalk informality. Using Seesaw (Moll et al., 2022), a CLIP-based feedback loop retrieves and soft-labels candidate scenes. This is followed by manual verification and fine-tuning of a lightweight ResNet on binary categories (e.g., stationary vs. mobile vendors). Compared to the zero-shot model Qwen-VL-Max, the fine-tuned ResNet delivered more balanced performance (precision/recall: 0.62–0.78) and better handled nuanced, context-sensitive distinctions. In contrast, Qwen-VL-Max favored recall and object salience but struggled with subtle or spatial cues like mobile vs. stationary setups. Second, this thesis also developed a taxonomy and annotated dataset of informality which was used to reveal spatial inequities in sidewalk use. By converting curbside complexity into structured, updateable categories, the framework enables planners to recognize the adaptive value of informal practices, target genuine hazards, and design interventions for more equitable urban planning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nuclear Microreactor-Powered Container Ships for Maritime Decarbonization</title>
<link href="https://hdl.handle.net/1721.1/163414" rel="alternate"/>
<author>
<name>Dickerman, Matthew F.</name>
</author>
<id>https://hdl.handle.net/1721.1/163414</id>
<updated>2025-10-30T03:23:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Nuclear Microreactor-Powered Container Ships for Maritime Decarbonization
Dickerman, Matthew F.
The maritime shipping industry, responsible for approximately 3% of global greenhouse gas emissions, faces growing pressure to achieve net-zero emissions by 2050 under the International Maritime Organization (IMO) framework. Alternative fuels such as liquefied natural gas, ammonia, and methanol present challenges related to energy density, infrastructure, safety, and cost. Nuclear microreactors offer high energy density, zero operational emissions, and multi-year endurance, but require coordinated regulatory development and stakeholder engagement for commercial adoption.
This thesis evaluates the feasibility of integrating microreactors into container ship designs employing electric propulsion and standardized intermodal logistics. Holos-Quad microreactors are selected based on their modular architecture, transportability, and compatibility with marine operations. Detailed ship concepts are developed for Feeder, Panamax, and New-Panamax classes, accompanied by a phased fleet development strategy.
Economic modeling compares the lifecycle costs of conventional and microreactor-powered ships, incorporating capital expenditures, operating costs, financing assumptions, and carbon pricing. Fleet-level analysis indicates that microreactor-powered ships can achieve comparable or improved profitability while eliminating nearly 44 million metric tons of CO2e emissions across a ten-ship fleet. Sensitivity analyses confirm the robustness of these results across a wide range of future scenarios.
By integrating stakeholder analysis, technical feasibility assessments, and economic modeling, this research establishes a commercially viable framework for zero-emission nuclear-powered shipping, offering a scalable pathway toward sustainable maritime operations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A magnetic levitation testbed for development of real-time control frameworks applied in fusion</title>
<link href="https://hdl.handle.net/1721.1/163413" rel="alternate"/>
<author>
<name>Lee, Yehoon</name>
</author>
<id>https://hdl.handle.net/1721.1/163413</id>
<updated>2025-10-30T03:23:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A magnetic levitation testbed for development of real-time control frameworks applied in fusion
Lee, Yehoon
This thesis presents the development of a magnetic levitation device as a hardware-in-the-loop platform to be used for research in Control and Data Acquisition frameworks applied to fusion experiments. Specifically, the testbed is intended to demonstrate distributed, modular control using a plasma control system framework being developed at the Plasma Science and Fusion Center at MIT. This framework integrates a real-time control framework, MARTe2, and a data management framework, MDSplus, to provide platform flexibility and robust data management for rapid prototyping of control systems. Both frameworks are widely used individually in fusion experiments worldwide. The magnetic levitation setup is centered around a single electromagnet coil which levitates a permanent disk magnet from above. Implemented with the integrated MARTe2/MDSplus framework, the controller, actuator, and sensors are distributed over the network. With the magnetic levitation testbed, this thesis achieves three objectives: (1) formulation of a physics-based model of the system, (2) development of a controller in a modular, networked framework, and (3) training and implementation of learning-based methods within the framework. First, a state-space model for single-axis magnetic levitation is formulated based on theory and refined with magnetic field measurements. A feedback controller is then developed and implemented with MATLAB Simulink. Afterwards, a vision-based observer is developed to estimate position and tilt of the levitated magnet. Pose-image datasets are auto-labeled using fiducial markers and are used to train a convolutional neural network. Finally, the trained network is applied in system identification of the final controlled system. Through the process of system development, this thesis demonstrates that the integrated MARTe2/MDSplus framework is robust in performing real-time control of a networked system, and that its structural modularity is advantageous for developing and testing learning-based models.
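For intuition, here is a minimal sketch of such a physics-based, single-axis levitation model, linearized about an equilibrium air gap and stabilized with hand-tuned state feedback; the coil and magnet parameters are illustrative, not the testbed's measured values.

```python
# Linearized single-axis maglev model with simple state feedback (sketch).
import numpy as np

m, g, k = 0.05, 9.81, 3e-5        # mass (kg), gravity, force constant F = k*i^2/x^2
x0 = 0.02                          # equilibrium air gap (m)
i0 = x0 * np.sqrt(m * g / k)       # coil current balancing gravity at x0

# Deviation dynamics around (x0, i0): d(dx)/dt = dv,
# d(dv)/dt = (2*k*i0**2/(m*x0**3))*dx - (2*k*i0/(m*x0**2))*di
A = np.array([[0.0, 1.0],
              [2 * k * i0**2 / (m * x0**3), 0.0]])
B = np.array([[0.0],
              [-2 * k * i0 / (m * x0**2)]])

K = np.array([[-300.0, -20.0]])    # hand-tuned gains; the thesis designs its
                                   # controller in Simulink instead
dt, state = 1e-4, np.array([[0.001], [0.0]])   # start 1 mm off equilibrium
for _ in range(20000):                          # 2 s of simulated time, Euler
    di = -K @ state
    state = state + dt * (A @ state + B @ di)
print("final gap error (m):", state[0, 0])
```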
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Fast Assay of Bacteria Cell Permeability for Genetic&#13;
Transformation</title>
<link href="https://hdl.handle.net/1721.1/163412" rel="alternate"/>
<author>
<name>Nieves, Charmaine</name>
</author>
<id>https://hdl.handle.net/1721.1/163412</id>
<updated>2025-10-30T03:23:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Fast Assay of Bacteria Cell Permeability for Genetic&#13;
Transformation
Nieves, Charmaine
Genetic engineering of bacterial cells is fundamental to research that aims to learn more about bacterial species for a broad range of applications. One method of intracellular delivery of foreign DNA during the genetic engineering process is the use of electroporation to create pores along the bacterial cell membrane. Current methods for assessing pore formation do not directly measure cell permeabilization or enable same-day assessment. In this thesis, a novel fast-screening protocol combining SYTOX green, microfluidics, and fluorescence imaging is evaluated for its capability to assess multiple conditions for cell permeabilization within a single day. By imaging bulk suspensions of post-electroporation cells stained with intracellularly delivered SYTOX, multiple electroporation conditions can be rapidly screened for cell permeabilization. This fast-screening protocol utilizes standard microbiology equipment and low-cost microfluidic imaging chambers, lowering the barrier to adoption and significantly reducing experimental time compared to conventional protocols involving foreign DNA delivery. Importantly, by decoupling permeabilization assessment from foreign DNA uptake, this method isolates the effect of membrane permeabilization from confounding factors such as restriction-modification systems. As a result, it provides a more accurate qualitative and quantitative assessment of bacterial membrane disruption. This approach enables same-day evaluation of electroporation conditions regardless of bacterial growth rate, potentially accelerating the optimization process for intracellular delivery in gene editing applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probabilistic Human Arm Motion Prediction via Structured Multitask Variational Gaussian Processes</title>
<link href="https://hdl.handle.net/1721.1/163411" rel="alternate"/>
<author>
<name>Chong, Jinger S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163411</id>
<updated>2025-10-30T03:23:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Probabilistic Human Arm Motion Prediction via Structured Multitask Variational Gaussian Processes
Chong, Jinger S.
Accurate human motion prediction with uncertainty estimation is essential for safe and efficient human-robot collaboration, where robots must anticipate and react to human movements in real-time. Existing methods either rely on sophisticated techniques that demand extensive training data and sacrifice interpretability, or use simpler approaches like conventional Gaussian Processes (GPs) that fall short in performance. To address this gap, we propose a novel structured multitask variational GP framework that explicitly incorporates joint dependencies to reflect human kinematics. We further enhance this framework by integrating angular velocity constraints, which improve the physical plausibility of predictions. The addition of constraints alone yields up to a 66% reduction in mean angle error (MAE) and an 84% improvement in the negative log-likelihood (NLL) of the ground truth, outperforming standard GP baselines across a wide range of motion types and prediction horizons. Among model variants, our structured GP with constraints offers the best tradeoff, achieving MAE within 1.1–2.6% and NLL within 0.001–0.012 of the best-performing model, while maintaining significantly lower overconfidence rates (OCR), particularly at short horizons where the independent GP model OCR reaches nearly 45%. These results underscore the importance of incorporating structure and context in human motion prediction, demonstrating that even simpler probabilistic models like GPs can achieve substantial performance gains when augmented with such information.
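For intuition, the structured idea can be sketched with an exact (non-variational) multitask GP whose kernel is a Kronecker product of an inter-joint covariance and a temporal kernel; all data here are synthetic stand-ins.

```python
# Multitask GP regression with a Kronecker-structured kernel (sketch).
import numpy as np

def rbf(x1, x2, ls=0.5):
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

rng = np.random.default_rng(0)
t_train = np.linspace(0, 1, 30)          # observed time stamps
t_test = np.linspace(0, 1.2, 50)         # includes a short prediction horizon

B = np.array([[1.0, 0.8],                # inter-task covariance encoding the
              [0.8, 1.0]])               # dependency between two joint angles
Y = np.stack([np.sin(2 * np.pi * t_train),
              0.8 * np.sin(2 * np.pi * t_train + 0.3)], axis=1)
Y += 0.05 * rng.standard_normal(Y.shape)

K_train = np.kron(B, rbf(t_train, t_train)) + 1e-4 * np.eye(2 * len(t_train))
K_cross = np.kron(B, rbf(t_test, t_train))

y = Y.T.reshape(-1)                       # stack outputs task-by-task
alpha = np.linalg.solve(K_train, y)
mean = (K_cross @ alpha).reshape(2, -1)   # posterior mean per joint
print("predicted joint angles at final horizon:", mean[:, -1].round(3))
```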
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Permanent Magnet Synchronous Motors: Nonlinear Dynamic Modeling, Hardware Characterization, and High-Bandwidth Torque Control for Applications in Dynamic Robotics</title>
<link href="https://hdl.handle.net/1721.1/163410" rel="alternate"/>
<author>
<name>Roy, Ronak</name>
</author>
<id>https://hdl.handle.net/1721.1/163410</id>
<updated>2025-10-30T03:23:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Permanent Magnet Synchronous Motors: Nonlinear Dynamic Modeling, Hardware Characterization, and High-Bandwidth Torque Control for Applications in Dynamic Robotics
Roy, Ronak
The high-level control algorithms that are responsible for achieving dynamic locomotion in legged robots depend on accurate torque production for matching real-life performance with simulated performance. To achieve accurate torque production, actuators must run high-bandwidth, low-level torque control. Developing high-performance low-level controllers requires accurate actuator models. This thesis covers the physical model of the Permanent Magnet Synchronous Motor (PMSM), a very common type of actuator in dynamic robotics. This thesis details the derivation of the PMSM linear model, how to adapt the model to the physical construction of a real motor, and the implementation of Field-Oriented Control (FOC) to achieve torque control. This thesis also describes a novel design of a high-precision dynamometer, which allows a motor to be coupled with an impedance and a torque sensor in order to accurately characterize the torque production characteristics of the motor. Using this dynamometer and other experimental setups, this thesis validates the model and determines parameters for multiple different actuators. Finally, this thesis proposes an augmented PMSM model that considers the nonlinear saturation behavior of the motor, validating the principle with hardware experiments, and demonstrates a nonlinear torque model and gain-scheduled current controller that improve torque tracking performance.
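As background, the dq-frame torque relation that field-oriented control exploits can be written down directly; the parameters below are illustrative, not those of the actuators characterized in the thesis.

```python
# Standard PMSM torque in the dq frame (sketch with made-up parameters).
p = 21                     # pole pairs
lam = 0.005                # rotor flux linkage (Wb)
Ld, Lq = 120e-6, 140e-6    # d/q-axis inductances (H)

def torque(id_, iq):
    """T = 1.5*p*(lam*iq + (Ld - Lq)*id*iq): magnet + reluctance torque."""
    return 1.5 * p * (lam * iq + (Ld - Lq) * id_ * iq)

print(torque(0.0, 10.0))    # classic FOC id = 0 strategy: magnet torque only
print(torque(-5.0, 10.0))   # negative id adds reluctance torque when Ld < Lq
```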
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of an Apparatus and Testing Strategy for Characterizing Rolling Resistance of Omnidirectional Wheels</title>
<link href="https://hdl.handle.net/1721.1/163409" rel="alternate"/>
<author>
<name>Ilkbahar, Kayra B.</name>
</author>
<id>https://hdl.handle.net/1721.1/163409</id>
<updated>2025-10-30T03:23:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of an Apparatus and Testing Strategy for Characterizing Rolling Resistance of Omnidirectional Wheels
Ilkbahar, Kayra B.
Omnidirectional wheels (omni wheels) are a type of wheel technology similar to caster wheels but capable of simultaneous longitudinal and lateral motion, making them suitable for holonomic motion applications. In recent years, their popularity has grown substantially in areas such as educational robotics, autonomous vehicles, and industrial automation. Despite their similarity to caster wheels in both function and application, omni wheels are a much less mature technology and few agreed-upon standards exist for their design and testing. This thesis covers the design of a test procedure and its requisite test apparatus to characterize the rolling resistance of omni wheels across various test conditions, and focuses specifically on the mechanical and electrical design of an apparatus which can measure the rolling resistance coefficient of omni wheels while modulating their load weight, travel angle, and travel speed.
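The reduction from the apparatus's measurements to a rolling resistance coefficient is simple; the sketch below uses hypothetical loads and drag forces.

```python
# Rolling resistance coefficient from measured drag force (illustrative data).
def rolling_resistance_coefficient(drag_force_n, load_kg, g=9.81):
    """C_rr = F_drag / (m * g): resistive force normalized by normal load."""
    return drag_force_n / (load_kg * g)

# Example sweep across load weights at one travel angle and speed.
for load, force in [(5.0, 0.9), (10.0, 1.7), (20.0, 3.6)]:
    c = rolling_resistance_coefficient(force, load)
    print(f"load {load:4.1f} kg -> C_rr = {c:.3f}")
```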
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Structural Approach to Measuring Time-varying Risk&#13;
Aversion</title>
<link href="https://hdl.handle.net/1721.1/163345" rel="alternate"/>
<author>
<name>von Turkovich, Nick</name>
</author>
<id>https://hdl.handle.net/1721.1/163345</id>
<updated>2025-10-22T03:34:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Structural Approach to Measuring Time-varying Risk&#13;
Aversion
von Turkovich, Nick
Non-homothetic preferences have the potential to rationalize important asset pricing facts including time-varying risk premia and business cycle movements in asset prices (e.g., Campbell and Cochrane (1999)). This paper offers a structural approach to measuring time-varying risk aversion. Motivated by the literature on consumption commitments (e.g., Flavin and Nakagawa (2008), Chetty and Szeidl (2016), Chetty, Sandor, and Szeidl (2017)), I develop a model in which investors have nonseparable preferences over housing and nonhousing consumption, and investors must consume a minimum amount of housing each period. Non-housing consumption is assumed to be flexibly chosen. The key insight is that the intratemporal optimality condition between the two goods reveals information about the surplus consumption ratio, a key variable driving risk aversion. A cointegrating relationship between relative quantities and prices allows us to identify the elasticity of intratemporal substitution and measure surplus housing consumption. Using aggregate U.S. consumption data from 1959 to the present, the measured surplus consumption ratio demonstrates clear business cycle fluctuations, rising during expansions and falling during recessions. Consistent with the theory, this measure also predicts future excess returns.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonization of Gas Heating in Massachusetts: An Evaluation of Current Trends and Opportunities</title>
<link href="https://hdl.handle.net/1721.1/163344" rel="alternate"/>
<author>
<name>Epstein, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/163344</id>
<updated>2025-10-22T03:34:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decarbonization of Gas Heating in Massachusetts: An Evaluation of Current Trends and Opportunities
Epstein, Andrew
The Commonwealth of Massachusetts has ambitious decarbonization goals enshrined in law and has been establishing the regulations to achieve them. Through its Department of Public Utilities regulatory rulings, the state has required local gas and electric utilities to pursue decarbonization not only by reducing the emissions of their electric supply but also by actively supporting gas load reduction. The residential heating sector dominates this effort, with programs like MassSave incentivizing customer adoption and now MA DPU 20-80-B&#13;
requiring gas utilities to demonstrate that they have sufficiently evaluated the possibility of non-pipeline alternatives, including but not limited to electrifying customers instead of reinvesting in the gas system for all future gas investments.&#13;
&#13;
This paper looks at a single Massachusetts utility, National Grid, and evaluates where its customers are switching to electric heat and which mechanisms are driving current adoption. It further evaluates where, geographically, National Grid could invest in electrification instead of replacing gas investments under the new 20-80-B order. In doing so, it establishes a model for cost-benefit calculations related to prospective non-pipeline alternative (NPA) projects. This paper then examines the degree to which ongoing electrification efforts are aligned with one another. Finally, this paper explores concerns that the process of electrification might be regressive, leaving those who cannot afford to electrify their systems to pay ever-increasing prices as the full gas system is paid for through rates from a shrinking population of consumers. In evaluating such concerns, it determines the geographic correlation between ongoing decarbonization efforts and communities already facing housing burden.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Streamlining Diagnostics of Electrical-Connection-Related Errors in General Assembly Using Augmented Reality Wearables</title>
<link href="https://hdl.handle.net/1721.1/163343" rel="alternate"/>
<author>
<name>Salata, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/163343</id>
<updated>2025-10-22T03:34:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Streamlining Diagnostics of Electrical-Connection-Related Errors in General Assembly Using Augmented Reality Wearables
Salata, Elizabeth
Electrical connection errors arise frequently during manufacturing. It is optimal to repair these errors during General Assembly Trim Line stations when the wiring harnesses are still exposed and easily accessible. However, the time required to locate the cause of the errors often exceeds Trim station cycle times, so most repairs are delayed until after General Assembly. Due to the implications of shutting down the line, this results in significantly higher repair times, scrap costs, and resource use. To overcome these challenges, there is clear evidence supporting the use of Augmented Reality (AR) tools to innovate and streamline manufacturing processes. This master's thesis identified deficiencies in the current standard operating procedure for addressing errors and used a human-centered design approach to develop a novel error diagnostic process using an AR overlay technique to pinpoint where on the vehicle the problem lies. This thesis also conducted an experiment to assess the performance, success rate, and perceived cognitive load of the two processes. The data collected from the experiment provided sufficient evidence that the diagnostic process developed for this thesis reduces the elapsed time to locate the connection error by 75% with a statistically significant reduction in overall perceived cognitive load. The likelihood of widespread adoption of the AR overlay process was assessed from an estimate of further AR hardware development, safety considerations in automotive manufacturing environments, and the level of enthusiasm of all stakeholders who were consulted for this research project.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Techno-Economic Assessment of Hybrid Renewable Energy and Battery Storage Systems for Data Centers</title>
<link href="https://hdl.handle.net/1721.1/163342" rel="alternate"/>
<author>
<name>Sirgo, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/163342</id>
<updated>2025-10-22T03:34:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Techno-Economic Assessment of Hybrid Renewable Energy and Battery Storage Systems for Data Centers
Sirgo, Alex
As the demand for data centers continues to grow, so does their energy consumption, making it increasingly important to develop sustainable and cost-effective strategies for powering them with carbon-free electricity. This thesis explores a techno-economic modeling framework that evaluates combinations of solar, wind, and battery energy storage systems to assess their ability to meet a data center’s electricity demand with on-site renewable generation. The model fills a gap in current literature by focusing on real-time energy matching using co-located infrastructure, rather than traditional off-site procurement methods like power purchase agreements and renewable energy credits.&#13;
&#13;
Using real-world weather and price data, the simulation calculates hourly generation, storage behavior, and grid interactions across a 20-year period. A financial model then calculates the levelized cost of energy (LCOE) for each system configuration. Results show that wind energy generally provides the lowest-cost renewable supply option, while hybrid solar and wind configurations improve renewable penetration. Battery storage plays a key role in shifting excess generation to periods of undersupply, but its economic viability depends on system sizing. Across different system configurations, renewable penetration ranged from 31.3% to 97.8%, while LCOE varied from $27.5/MWh to over $100/MWh, illustrating the trade-offs between cost and grid independence.&#13;
&#13;
By providing a structured analysis of the trade-offs between renewable penetration and cost, this research offers insight into how data centers and other energy-intensive facilities can design dedicated carbon-free energy systems. The findings underscore the importance of balancing resource diversity and storage investment to achieve decarbonization goals while maintaining economic viability.
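A minimal version of the LCOE calculation at the heart of such a financial model might look like the sketch below; the plant parameters are illustrative, not the thesis's inputs.

```python
# Levelized cost of energy: discounted costs / discounted output ($/MWh).
def lcoe(capex, annual_opex, annual_mwh, years=20, discount=0.08):
    costs = capex + sum(annual_opex / (1 + discount) ** y
                        for y in range(1, years + 1))
    energy = sum(annual_mwh / (1 + discount) ** y for y in range(1, years + 1))
    return costs / energy

# Hypothetical 100 MW wind plant at a 40% capacity factor:
print(f"${lcoe(capex=130e6, annual_opex=4e6, annual_mwh=100 * 8760 * 0.40):.1f}/MWh")
```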
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Diagnostics in Additive Manufacturing Using Image-Based Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/163341" rel="alternate"/>
<author>
<name>Varma, Arun Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/163341</id>
<updated>2025-10-22T03:34:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Diagnostics in Additive Manufacturing Using Image-Based Machine Learning
Varma, Arun Alejandro
Additive Manufacturing (AM) is a vital capability in the aerospace industry. Blue Origin manufactures a substantial share of engine parts via metal AM. To meet growing customer demand, the company must dramatically increase engine throughput and, thus, 3D prints. Blue Origin has identified non-destructive testing (NDT) – particularly, Computed Tomography (CT) scanning – as an unsustainable bottleneck to expanding AM capacity. Not only is this process expensive, but, critically, there are not enough aerospace-grade CT machines in the world to support projected throughput. Without process change, meeting customer demand will soon become impossible. Yet, these scans provide important quality control, and any reduction in NDT must be accompanied by assurances of engine part integrity. This thesis introduces a diagnostic system that safely alleviates the bottleneck, and further yields insights that end-stage NDT alone cannot provide. The proposal is a machine learning system that evaluates the manufacturing process itself, examining layer-by-layer photographs captured during printing. It is predicated on two hypotheses: (1) These images, considered together, provide a synthetic 3D illustration of the build process; and (2) Machines can be taught to assess these process signatures dependably. The resulting system provides rich diagnostics. It achieves near-perfect anomaly recognition – 100% when using conservative defect thresholds. Operationally, the system can (at minimum) safely enable a 37-54% reduction in NDT, translating to millions of dollars in annual cost savings. In practice, this reduction will likely be higher. The system further enables early process intervention and a more data-driven approach to manufacturing intelligence. This work turns what began as an unsustainable bottleneck into an opportunity for enhanced quality control, process intelligence, and long-term manufacturing resilience.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Substitution among Social Media Platforms: Evidence from App Tracking Panel Data</title>
<link href="https://hdl.handle.net/1721.1/163340" rel="alternate"/>
<author>
<name>Lagutina, Rina</name>
</author>
<id>https://hdl.handle.net/1721.1/163340</id>
<updated>2025-10-22T03:34:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Substitution among Social Media Platforms: Evidence from App Tracking Panel Data
Lagutina, Rina
This thesis explores a novel approach to competitive intelligence in the social media ecosystem by leveraging external mobile panel data to study substitution dynamics. It focuses on context-specific behavioral patterns to identify which platforms compete for user attention in given situations. Using mobile app session data from April 2023 for approximately 5,000 users, the analysis segments usage into three behavioral contexts – morning, evening, and at-home sessions – and characterizes user-app interactions through descriptive statistics. K-means clustering is applied to identify archetypes of usage behavior across these contexts, revealing distinct patterns such as quick-check habits, deep content consumption, and intensive texting. By comparing app usage profiles across contexts, the study uncovers shifts in how and when platforms are used, highlighting subtle substitution dynamics. To validate the findings, the study analyzes app usage during service outages, testing whether potential substitutes see increased engagement when a competing platform is unavailable. These insights offer a richer, context-aware framework for product managers to uncover indirect competition and tailor platform strategies to specific user behaviors. Limitations include reliance on behavioral data without content-level detail, mobile-only focus, and demographic skew in the panel.
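The clustering step could be sketched as follows, with synthetic features standing in for the panel's derived per-user, per-context session metrics.

```python
# K-means archetypes of app usage behavior (synthetic illustrative data).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Features per user-context: [sessions/day, mean session minutes, at-home share]
X = np.vstack([
    rng.normal([20, 0.5, 0.3], 0.1, (100, 3)),   # quick-check habits
    rng.normal([3, 25.0, 0.8], 0.1, (100, 3)),   # deep content consumption
    rng.normal([40, 2.0, 0.5], 0.1, (100, 3)),   # intensive texting
])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
for k in range(3):
    print(f"archetype {k}: mean features = {X[labels == k].mean(axis=0).round(1)}")
```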
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Harnessing Generative AI in Developing Economies: A Systems Framework for Policy Design in Bangladesh</title>
<link href="https://hdl.handle.net/1721.1/163339" rel="alternate"/>
<author>
<name>Bari, Md Mustabeen Ul</name>
</author>
<id>https://hdl.handle.net/1721.1/163339</id>
<updated>2025-10-22T03:34:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Harnessing Generative AI in Developing Economies: A Systems Framework for Policy Design in Bangladesh
Bari, Md Mustabeen Ul
This thesis develops a systems-based policy framework for Generative Artificial Intelligence (GenAI) implementation in developing economies, with specific application to Bangladesh. While GenAI's potential productivity and labor market impacts are well-studied in developed economies, limited research addresses the challenges faced by developing countries positioned primarily as technology consumers rather than producers. The research employs causal loop diagramming to map interactions between five critical policy domains: human capital development, digital infrastructure, data sovereignty, sectoral stimulus, and governance.&#13;
&#13;
The resulting framework identifies four primary reinforcing mechanisms that can accelerate adoption and three balancing mechanisms related to labor displacement. To validate the framework, the research analyzes contrasting implementation approaches from India and Egypt, demonstrating the importance of cross-domain synergies in effective policy design.&#13;
&#13;
Applied to Bangladesh, the framework yields a dual-entry strategy focusing on healthcare and education sectors as initial implementation domains, leveraging the country's strategic advantages while addressing resource constraints through a consortia-based implementation model that creates institutional resilience. The thesis contributes both a reusable conceptual toolkit for analyzing GenAI policy in resource-constrained settings and an initial context-anchored roadmap for Bangladesh. Future research should refine the framework through longitudinal case studies while developing more detailed, stakeholder-engaged implementation plans for Bangladesh that include concrete budget allocations, institutional responsibilities, and measurable outcomes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Value of Digitizing Manufacturing Environments</title>
<link href="https://hdl.handle.net/1721.1/163338" rel="alternate"/>
<author>
<name>Briggi, Conor S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163338</id>
<updated>2025-10-22T03:34:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Value of Digitizing Manufacturing Environments
Briggi, Conor S.
There is significant variability and dispute around the value of digitally transformed manufacturing environments, and no single methodology for assessing it is broadly accepted. The variability stems from time-dependencies, implementation effectiveness, and the dynamic environments digital solutions are deployed in. However, an accurate accounting of this value is essential to company strategic planning. The research outlines how to approach this variability, cost parameters to consider, primary sources of value generation, and best practices for implementing Smart Factories. A tool that addresses these issues was successfully developed and deployed at Stanley Black &amp; Decker, helping the company assess the performance of its digitization efforts and tailor the delivered solution to optimize manufacturing performance. Results from this tool showed a positive expected return on investment and are provided to contextualize efforts in similar areas.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multimodal Generative AI Chatbot for Root Cause Diagnosis in Predictive Maintenance</title>
<link href="https://hdl.handle.net/1721.1/163337" rel="alternate"/>
<author>
<name>Lorente Anon, Carla</name>
</author>
<id>https://hdl.handle.net/1721.1/163337</id>
<updated>2025-10-22T03:34:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multimodal Generative AI Chatbot for Root Cause Diagnosis in Predictive Maintenance
Lorente Anon, Carla
Predictive maintenance plays a critical role in industrial operations by enabling organizations to detect potential equipment failures before they occur. However, while sensor data can identify anomalies such as excessive vibration or temperature fluctuations, technicians often struggle to efficiently diagnose and resolve the root causes of these alarms. This research presents a generative AI-powered chatbot designed to enhance the root cause diagnosis process in predictive maintenance by leveraging multimodal retrieval-augmented generation (RAG) and advanced AI-driven troubleshooting capabilities.&#13;
&#13;
The chatbot integrates multiple functionalities to support maintenance teams in resolving alarms quickly and accurately. Its time series analysis module processes real-time sensor data, identifying abnormal patterns and guiding users through a structured troubleshooting workflow. The retrieval-augmented generation (RAG) engine allows the chatbot to retrieve and synthesize relevant troubleshooting information from technical manuals, historical maintenance records, and structured knowledge bases, ensuring that technicians receive precise, grounded outputs. Additionally, the chatbot supports multimodal interactions, enabling users to upload images, audio, and video for more comprehensive diagnostics. By analyzing uploaded images of damaged components, transcribing spoken maintenance reports, and processing video footage of equipment malfunctions, the chatbot enhances problem identification and resolution.&#13;
&#13;
Another key feature of the chatbot is its interactive guided conversation system, which enables multi-turn dialogues that refine diagnostics dynamically based on technician input. Instead of providing static troubleshooting steps, the chatbot continuously adapts its responses to ensure that users receive the most relevant recommendations as the diagnostic process unfolds. To maintain safety and reliability, the system incorporates AI guardrails, filtering inappropriate or irrelevant inputs while ensuring that generated responses align with best practices for industrial maintenance.&#13;
&#13;
An evaluation framework is proposed to assess the chatbot’s effectiveness, focusing on retrieval accuracy, response relevance, and diagnostic efficiency. Initial results demonstrate approximately 30% reduction in diagnostic time, highlighting the chatbot’s potential to improve maintenance workflows, reduce downtime, and enhance technician productivity. This research underscores the transformative role of multimodal generative AI in predictive maintenance and lays the foundation for broader industrial applications. As a result of this work, a patent has been filed to protect the novel architecture and methods developed. Future work could focus on expanding retrieval capabilities to include video, integrating intelligent task automation for dynamic work order generation, and refining alarm prioritization using adaptive risk-based assessments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intangible Investments and the Accrual-Cash Flow Relationship</title>
<link href="https://hdl.handle.net/1721.1/163332" rel="alternate"/>
<author>
<name>Soares, Fabio</name>
</author>
<id>https://hdl.handle.net/1721.1/163332</id>
<updated>2025-10-22T03:34:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Intangible Investments and the Accrual-Cash Flow Relationship
Soares, Fabio
This paper investigates whether the weakening negative relationship between accruals and operating cash flows can be attributed to the immediate expensing of intangible investments under current accounting standards. Building on the framework proposed by Green et al. (2022), I examine how the mechanical capitalization of intangible investments affects the accrual-cash flow relationship across firms with varying R&amp;D intensities. I show that the capitalization impacts the relationship in unexpected ways, indicating that the proposed rationale cannot fully explain the observed trend. I further exploit differences in accounting treatments under IFRS and US GAAP to test whether increased capitalization of intangible investments through development costs strengthens the relationship. I find that the relationship is significantly more negative under IFRS than US GAAP, independently of R&amp;D expenditure, suggesting that increased capitalization alone does not explain the differences. Additionally, the positive trend observed for high R&amp;D firms in both standards highlights that increased capitalization is insufficient to reverse the weakening trend. These results challenge the view that current accounting practices are the primary cause of the weakening accrual-cash flow relationship and underscore the need for further exploration of alternative explanations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Winning Over Gen Z: The Evolving Strategies of Sports Leagues&#13;
and Media in Response to Changing Youth Habits</title>
<link href="https://hdl.handle.net/1721.1/163331" rel="alternate"/>
<author>
<name>Zeng, Arnaud</name>
</author>
<id>https://hdl.handle.net/1721.1/163331</id>
<updated>2025-10-22T03:34:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Winning Over Gen Z: The Evolving Strategies of Sports Leagues&#13;
and Media in Response to Changing Youth Habits
Zeng, Arnaud
This thesis examines how sports leagues and media companies are evolving to better connect with Generation Z, a generation whose changing expectations and habits – on-demand and socially driven – are reshaping the landscape of sports consumption. With fewer Gen Z fans watching full games on traditional mediums, the industry is being pushed to rethink its approach, adapting not just how content is delivered, but also what kind of content is created. Through a combination of expert interviews and industry data, this paper looks at the rise of short-form content, the importance of digital-first platforms, and the growing influence of storytelling&#13;
through influencers and behind-the-scenes content. It also explores how new competition formats are reshaping what it now means to be a fan. The goal is to understand how the sports ecosystem is adjusting to remain relevant to its youngest audience.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metal Additive Manufacturing Capabilities for Footwear Prototyping and Product Creation</title>
<link href="https://hdl.handle.net/1721.1/163330" rel="alternate"/>
<author>
<name>Xi, Tiffany</name>
</author>
<id>https://hdl.handle.net/1721.1/163330</id>
<updated>2025-10-22T03:34:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Metal Additive Manufacturing Capabilities for Footwear Prototyping and Product Creation
Xi, Tiffany
In the footwear industry, the speed with which footwear designs reach the market affects a company’s ability to accurately meet the demands of its customers, since the probability of consumer preferences changing increases with time. This research investigates the impact of incorporating metal additive manufacturing capabilities into the product creation process of a major athletic footwear company. The study aims to determine whether and under which applications metal additive manufacturing can increase the speed at which footwear designs reach the market, while maintaining or improving the desired product quality.&#13;
    A case study approach was employed, focusing on the development of rubber outsole molds using metal additive manufacturing technology. The study compared two process flows that excluded and included metal additive manufacturing. The case study evaluated these processes based on the speed of the development process and the quality of the produced footwear samples. The footwear sample quality was measured against production-equivalent samples obtained from the company’s manufacturing partner. The results demonstrated that incorporating metal additive manufacturing capabilities led to a reduction in the time required for mold design and fabrication. This speed advantage was primarily attributed to the ability to directly fabricate detailed textures into the mold, eliminating the need for outsourced etching processes.&#13;
    The visual quality of samples produced did not fully match that of samples created by the company's manufacturing partners but was sufficient for initial sample development. Importantly, the traction properties were comparable to those of the manufacturing partner's samples, indicating that the functional quality of the samples is adequate for product development purposes.&#13;
This research provides valuable insights into the potential of metal additive manufacturing in accelerating footwear product development. Future work recommendations include exploring advanced modeling and design software and examining the impact of machine parameters on build quality. The findings of this study have implications for both the footwear industry and other sectors considering the integration of metal additive manufacturing technologies into their product development processes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Impact Investing through a Systems Thinking Lens:&#13;
Hallmarks of a Transformational Approach</title>
<link href="https://hdl.handle.net/1721.1/163329" rel="alternate"/>
<author>
<name>Zhang, Yu (Sherry)</name>
</author>
<id>https://hdl.handle.net/1721.1/163329</id>
<updated>2025-10-22T03:34:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating Impact Investing through a Systems Thinking Lens:&#13;
Hallmarks of a Transformational Approach
Zhang, Yu (Sherry)
As impact investing increasingly aspires to drive systemic change, the question of how to evaluate such efforts remains underexplored. Traditional evaluation approaches, often grounded in linear causality and program-level outputs, struggle to capture the complexity, interdependence, and emergent nature of systemic transformation. This thesis investigates how systemic investing can be evaluated by integrating systems thinking, evaluation theory, and investing practice. It develops a conceptual framework of thirteen hallmarks that characterize systemic investing evaluation across dimensions such as time horizons, stakeholder engagement, cross-sector collaboration, and capital dynamics. Drawing on 46 real-world cases, the research identifies 112 indicators to make these hallmarks observable and assessable in practice. To support practical application, the thesis also introduces an AI-assisted scoring tool that automates the evaluation of narrative content using the framework. Together, these contributions aim to support more reflective, adaptive, and system-aware evaluation practices in the emerging field of systemic investing.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Objective Optimization of Container Load Plans for Modulating Inventory Flow</title>
<link href="https://hdl.handle.net/1721.1/163328" rel="alternate"/>
<author>
<name>Sen, Shweta</name>
</author>
<id>https://hdl.handle.net/1721.1/163328</id>
<updated>2025-10-22T03:34:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multi-Objective Optimization of Container Load Plans for Modulating Inventory Flow
Sen, Shweta
Conventional strategies for container load planning (CLP) predominantly emphasize maximizing container utilization, which can result in suboptimally-timed inventory arrival, increased inventory holding costs, and downstream operational inefficiencies. Using a real-world case study from a global footwear and apparel retailer, this research formulates a novel multi-objective mixed-integer linear programming (MOMILP) model that jointly considers container utilization, transportation and storage costs, and timing accuracy of inventory delivery. The proposed model utilizes a branch-and-bound algorithm to evaluate numerous load configurations, assessing the impact of different load rules and weighting parameters on transportation performance metrics and inventory flow. Results highlight the importance of prioritizing delivery precision in transportation management decisions, demonstrating that solely maximizing volume utilization can adversely affect overall cost efficiency when downstream inventory storage and operational requirements are considered. This work also provides a process map of load planning activities and identifies targeted operational improvements, such as consolidation bypass and purchase order (PO) partitioning, that can enhance inventory flow smoothness, reduce transportation costs, and support more responsive logistics networks. Collectively, this work extends existing CLP methodologies by incorporating delivery timing and inventory storage considerations into load planning decisions, offering practical enhancements for logistics optimization.
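A toy instance of the load-plan trade-off can be written as a small MILP; the sketch below uses PuLP and its bundled CBC solver as a stand-in for the thesis's branch-and-bound formulation, with entirely illustrative data and weights.

```python
# Toy multi-objective load plan: container count vs. arrival-timing error.
import pulp

volumes = {"poA": 20, "poB": 35, "poC": 25, "poD": 15}   # m^3 per PO line
need_wk = {"poA": 1, "poB": 2, "poC": 2, "poD": 3}       # week inventory is needed
containers = {"c1": (60, 1), "c2": (60, 3)}              # (capacity m^3, arrival wk)

prob = pulp.LpProblem("load_plan", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", (volumes, containers), cat="Binary")
y = pulp.LpVariable.dicts("open", containers, cat="Binary")   # container used?

for i in volumes:                        # each PO line ships in one container
    prob += pulp.lpSum(x[i][c] for c in containers) == 1
for c, (cap, _) in containers.items():   # capacity applies only if opened
    prob += pulp.lpSum(volumes[i] * x[i][c] for i in volumes) <= cap * y[c]

w_cont, w_time = 10.0, 2.0               # weights trading utilization vs timing
timing = pulp.lpSum(abs(containers[c][1] - need_wk[i]) * x[i][c]
                    for i in volumes for c in containers)
prob += w_cont * pulp.lpSum(y[c] for c in containers) + w_time * timing

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in volumes:
    print(i, "->", next(c for c in containers if x[i][c].value() == 1))
```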
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Principles and Practices of Gap-Closing Investing</title>
<link href="https://hdl.handle.net/1721.1/163327" rel="alternate"/>
<author>
<name>Kapor, Mitchell</name>
</author>
<id>https://hdl.handle.net/1721.1/163327</id>
<updated>2025-10-22T03:34:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Principles and Practices of Gap-Closing Investing
Kapor, Mitchell
This thesis examines the principles and practices of gap-closing investing, a distinctive model of early-stage venture capital investing that seeks to close gaps in access, opportunity, and outcomes for low-income communities and communities of color. Developed by Dr. Freada Kapor Klein and Mitchell Kapor through Kapor Capital, gap-closing investing integrates social impact objectives with a performance-driven investment strategy. The thesis combines historical analysis of socially responsible investing and impact investing with case studies of venture-backed startups to situate gap-closing investing within a broader tradition of values-based finance. It traces the ethical roots of impact investing to religious traditions, the emergence of socially responsible investing funds in the 1970s, and the formalization of impact investing terminology in the late 2000s. Gap-closing investing is distinguished by a developmental approach to startup growth, a redefinition of founder selection criteria emphasizing “distance traveled” over pedigree, and a focus on mitigating structural barriers through capital allocation. The thesis critically compares gap-closing investing to Corporate Social Responsibility (CSR) and Environmental, Social, and Governance (ESG) frameworks, arguing that gap-closing uniquely centers systemic impact as a core investment goal rather than a secondary consideration. The findings challenge the perception that impact investing is inherently concessionary, using performance data from Kapor Capital’s portfolio to demonstrate that intentional, equity-focused investing can produce both superior financial returns and measurable social outcomes. Gap-closing investing is presented as both a pragmatic investment strategy and a model for using venture capital to drive systemic change toward a more inclusive economy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive Model for Battery State of Health</title>
<link href="https://hdl.handle.net/1721.1/163326" rel="alternate"/>
<author>
<name>Garza Lozano, Catalina</name>
</author>
<id>https://hdl.handle.net/1721.1/163326</id>
<updated>2025-10-22T03:34:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Predictive Model for Battery State of Health
Garza Lozano, Catalina
As battery energy storage systems (BESS) become critical components of grid infrastructure, accurately assessing their State of Health (SoH) is essential for optimizing performance, reducing costs, and ensuring contractual compliance. This thesis investigates the development of accurate, real-time SoH estimation models for utility-scale battery storage sites operated by NextEra Energy. Current SoH measurements—derived from annual capacity tests and Battery Management System (BMS) data—are often inaccurate or infrequent, leading to either over- or under-augmentation and resulting in financial inefficiencies. &#13;
&#13;
To address this gap, four state estimation models were developed and evaluated: an Unscented Kalman Filter (UKF), a Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN), a multitask RNN, and a Delayed Reinforcement Learning (DRL) model. Each model uses operational data—such as voltage, current, temperature, and State of&#13;
Charge (SoC)—to estimate degradation patterns and predict SoH at the rack, lineup, and site levels. Their outputs were compared against ground-truth capacity test results from a large-scale battery storage site.&#13;
&#13;
The DRL model demonstrated the highest accuracy, achieving a deviation of only 1.6 months compared to capacity test data, significantly outperforming existing BMS readings and the other three models. These findings underscore the value of advanced machine learning techniques in enabling proactive maintenance, optimized augmentation scheduling, and cost-efficient storage site management. This research offers a scalable framework for real-time SoH estimation across large fleets of battery storage assets and contributes to the broader goal of improving grid reliability through smarter energy storage management.
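As a greatly simplified stand-in for these estimators, a scalar Kalman filter can track SoH from noisy per-cycle capacity estimates; all signals below are synthetic.

```python
# Scalar Kalman filter tracking a slowly fading SoH (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
true_soh = 1.0 - 0.0002 * np.arange(1000)           # slow linear fade per cycle
meas = true_soh + 0.02 * rng.standard_normal(1000)  # noisy capacity estimates

soh, P = 1.0, 1.0                                # state estimate and variance
Q, R = 1e-6, 0.02 ** 2                           # process / measurement noise
for z in meas:
    P += Q                                        # predict: SoH drifts slowly
    K = P / (P + R)                               # Kalman gain
    soh += K * (z - soh)                          # update with new measurement
    P *= 1 - K
print(f"true final SoH: {true_soh[-1]:.3f}  estimated: {soh:.3f}")
```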
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Key Performance Indicator Modeling for Robotic Mobile Fulfillment Systems</title>
<link href="https://hdl.handle.net/1721.1/163325" rel="alternate"/>
<author>
<name>Sowards, Steffan</name>
</author>
<id>https://hdl.handle.net/1721.1/163325</id>
<updated>2025-10-22T03:34:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data-Driven Key Performance Indicator Modeling for Robotic Mobile Fulfillment Systems
Sowards, Steffan
This work presents a study on the development and application of data-driven operational efficiency and throughput Key Performance Indicator (KPI) modeling for Robotic Mobile Fulfillment Systems (RMFS). Through rigorous analysis of extensive operational data from an operating RMFS, we demonstrate the efficacy of machine learning approaches in predicting and optimizing the performance of complex warehouse automation systems. The research employs advanced techniques, including gradient boosted bagged tree ensembles and AutoML, to capture complex input interactions and provide parallel predictions across multiple KPIs. Our models achieve a mean R² value of 0.7838 across all templates and KPIs, with particularly strong performance in our top performing metric across templates (mean R² of 0.9660).&#13;
&#13;
The study introduces a novel framework for feature engineering and selection, emphasizing actionable inputs while excluding intermediate variables to enhance model interpretability and practical utility. We validate our approach against novel operating conditions, demonstrating the models’ ability to generalize to unseen scenarios. Interpretability techniques, including SHAP analysis and permutation feature importance, provide valuable insights into system behavior and key performance drivers.&#13;
&#13;
This research establishes a generalizable framework for leveraging data-driven modeling in predicting and optimizing brownfield warehouse automation system behavior. The developed approach offers significant potential for enhancing operational decision-making, system design, and strategic planning in the rapidly evolving field of e-commerce fulfillment.
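A compact sketch of this style of KPI model, with synthetic placeholders for the operational features and permutation importance standing in for the interpretability analysis:

```python
# Gradient-boosted KPI regression with permutation importance (sketch).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
X = rng.uniform(size=(n, 3))     # e.g. [robots active, pick density, order mix]
y = 50 * X[:, 0] + 20 * X[:, 0] * X[:, 1] + rng.normal(0, 2, n)  # throughput KPI

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))

imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, val in zip(["robots_active", "pick_density", "order_mix"],
                     imp.importances_mean):
    print(f"{name}: {val:.3f}")
```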
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Approach to Component Code Optimization for Wound Closure Portfolio</title>
<link href="https://hdl.handle.net/1721.1/163322" rel="alternate"/>
<author>
<name>Dubelier, Madeline</name>
</author>
<id>https://hdl.handle.net/1721.1/163322</id>
<updated>2025-10-22T03:34:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Systems Approach to Component Code Optimization for Wound Closure Portfolio
Dubelier, Madeline
Product portfolio management involves strategically analyzing, optimizing, and expanding a company’s offerings to maximize value and align with business goals. While companies often focus on portfolio expansion to meet evolving customer needs and gain market share, product deletion is frequently overlooked, leading to code proliferation and undermining operational efficiency. Effective variety management often requires input from stakeholders across the supply chain, yet few published methods take this approach. This work presents a systematic supply chain management approach to portfolio optimization using a case study from Johnson &amp; Johnson MedTech. The case study is on pledgets, key components in non-absorbable suture systems. Recent pledget product quality issues exposed the need for a systematic approach to reducing component variety and improving operational efficiency. A current-state analysis addressed multiple dimensions of complexity. The evaluation combined qualitative and quantitative data and led to a five-stage optimization strategy. The proposed future state portfolio reduces component variety by 60%, guided by three constraints: continue to meet customer needs, protect competitiveness, and reduce manufacturing complexity. This method provides a replicable model for rationalizing legacy portfolios in the medical device industry.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Optimization-Based Approach to Efficient Clearance Inventory Allocation</title>
<link href="https://hdl.handle.net/1721.1/163320" rel="alternate"/>
<author>
<name>Perez Munoz, Karla Mayra</name>
</author>
<id>https://hdl.handle.net/1721.1/163320</id>
<updated>2025-10-22T03:34:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Optimization-Based Approach to Efficient Clearance Inventory Allocation
Perez Munoz, Karla Mayra
Allocating clearance inventory effectively remains a critical challenge in retail environments characterized by short decision cycles, fluctuating demand, and operational constraints. Decisions made during the clearance period are particularly impactful, as they determine the final opportunity to recover value from unsold products before they lose relevance or perish. This thesis presents a mathematical optimization model designed to support the redistribution of discounted articles across a network of stores, with the objective of maximizing revenue while satisfying constraints related to stock availability, store capacity, and observed demand at the article-size level. Developed in collaboration with a leading global fashion retail company, the model was built to align with existing business processes and balances analytical rigor with simplicity in implementation. The model incorporates business-defined parameters and is tested using real operational data from selected distribution centers. It demonstrates significant improvements over the current practice of single-item allocation and addresses the computational challenges posed by the high dimensionality of real-world retail problems. By implementing efficient iterative procedures and demand-scaling mechanisms, the model ensures tractability while capturing the complexity of the business environment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gas Network Preparations for Networked Geothermal</title>
<link href="https://hdl.handle.net/1721.1/163319" rel="alternate"/>
<author>
<name>Serbent, M. Patrick</name>
</author>
<id>https://hdl.handle.net/1721.1/163319</id>
<updated>2025-10-22T03:34:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Gas Network Preparations for Networked Geothermal
Serbent, M. Patrick
As Massachusetts pursues its goal of achieving net-zero carbon emissions by 2050, the transition from natural gas to sustainable thermal energy solutions presents both opportunities and challenges for its 1.6 million natural gas customers. This thesis investigates the potential of networked geothermal systems as a viable alternative to traditional natural gas infrastructure, with a focus on leveraging existing gas network replacement programs, such as the Gas System Enhancement Plan (GSEP), to facilitate this shift. A four-phase methodology, encompassing site selection, model development, cost analysis, and business case formulation, evaluates the feasibility of integrating high-density polyethylene (HDPE) piping into leak-prone pipe replacement efforts as a preparatory step for future geothermal or hydrogen applications. Findings suggest that HDPE offers potential material and inventory cost advantages over medium-density polyethylene (MDPE), with added flexibility for low-carbon conversions, though significant upfront costs and regulatory uncertainties remain barriers. An example site already scheduled for main replacement work showed a 6% total increase in cost for the project based on the change in pipe from MDPE to HDPE. This work underscores the potential of aligning infrastructure modernization with climate goals, offering a framework for utilities like National Grid to navigate the energy transition in cold, densely populated regions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning and Biosecurity in the Age of Pandemics: Advancing Biological Research and Safeguarding Synthetic Biology</title>
<link href="https://hdl.handle.net/1721.1/163318" rel="alternate"/>
<author>
<name>Siddiqui, Sameed Muneeb</name>
</author>
<id>https://hdl.handle.net/1721.1/163318</id>
<updated>2025-10-22T03:34:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Machine Learning and Biosecurity in the Age of Pandemics: Advancing Biological Research and Safeguarding Synthetic Biology
Siddiqui, Sameed Muneeb
This thesis explores the dual imperatives of enhancing biosecurity and accelerating outbreak response. The research addresses two key areas. First, the thesis analyzes the implications of a national nucleic acid synthesis screening framework on outbreak response agility. A first-hand perspective is provided, identifying potential bottlenecks stemming from lagging customer verification and sequence screening approaches. Concrete solutions, such as pre-verification of first responders, priority processing channels, pre-approval of standard countermeasure sequences, and optimized computational screening, are proposed to mitigate these challenges and ensure rapid response capabilities without compromising biosecurity. Second, a machine learning architecture for biological sequence modeling, “Lyra,” is presented. Lyra is grounded in the biological principle of epistasis and leverages state space models (SSMs) combined with projected gated convolutions to efficiently capture both local and long-range sequence interactions. We demonstrate new mathematical theory connecting SSMs with the approximation of polynomial functions, which is key to predicting epistatic effects. This subquadratic architecture achieves state-of-the-art performance on diverse biological tasks, including protein fitness landscape prediction, RNA function prediction, and CRISPR guide design, while utilizing substantially fewer parameters and computational resources than existing foundation models like transformers. The thesis concludes by highlighting the synergistic potential of advanced machine learning and thoughtful policy to significantly improve pandemic preparedness.
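For intuition, the core of any SSM layer is a linear state recurrence scanned over the sequence, which costs time linear in sequence length; the sketch below is generic, not Lyra's actual parameterization.

```python
# Minimal discrete state space model recurrence (generic sketch).
import numpy as np

rng = np.random.default_rng(4)
d_state, seq_len = 8, 64
A = 0.9 * np.eye(d_state) + 0.01 * rng.standard_normal((d_state, d_state))
B = rng.standard_normal((d_state, 1))
C = rng.standard_normal((1, d_state))

u = rng.standard_normal(seq_len)   # input sequence (e.g. embedded residues)
x = np.zeros((d_state, 1))
y = np.empty(seq_len)
for t in range(seq_len):           # x_{t+1} = A x_t + B u_t;  y_t = C x_{t+1}
    x = A @ x + B * u[t]
    y[t] = (C @ x).item()
print("first outputs:", y[:4].round(3))
```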
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>In-or-Out: Creators’ Odyssey for Success</title>
<link href="https://hdl.handle.net/1721.1/163317" rel="alternate"/>
<author>
<name>Li, Zelin</name>
</author>
<id>https://hdl.handle.net/1721.1/163317</id>
<updated>2025-10-22T03:34:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">In-or-Out: Creators’ Odyssey for Success
Li, Zelin
The creator economy is flourishing, driven by shifts in advertising budgets and a surge in the supply of content creators. This has introduced a new challenge for firms: identifying which early-stage creators will grow to become stars. By identifying future stars, firms can choose who to invest their scarce resources in. They may also be able to purchase effective influence at a (proportionately) lower price than what they will pay once a creator becomes a star. Past research has shown that predicting which content will become viral is challenging. Instead, we focus on using content to predict which early-stage creators will grow their follower bases. We measure both the positioning of a creator’s early content and how the creator adjusts this positioning. We find that the initial position is not predictive of future success. However, subsequent adjustments in position are predictive, particularly if the creator’s initial follower base has grown consistently, rather than over a short period of rapid (viral) growth. Our insights inform the construction of predictive models that outperform baseline models in out-of-sample predictions of which creators will grow their followers the fastest.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Cooperation in Water Management: A Game-Theoretic Approach to Sustainable Infrastructure in Chilean Mining</title>
<link href="https://hdl.handle.net/1721.1/163316" rel="alternate"/>
<author>
<name>Moscoso Restovic, Rodrigo Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/163316</id>
<updated>2025-10-22T03:34:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Cooperation in Water Management: A Game-Theoretic&#13;
Approach to Sustainable Infrastructure in Chilean Mining
Moscoso Restovic, Rodrigo Y.
Through a game-theoretic methodology this thesis examines collaborative approaches to managing water infrastructure within Chilean mining operations. The research examines cooperative stakeholder interactions to tackle water scarcity and growing demand in Chile's mining industry among mining firms, local residents and regulatory bodies. It utilizes game theory with a focus on cooperative games and bargaining models to develop a structured analytical framework for analyzing stakeholder dynamics including their incentives and cooperative opportunities.&#13;
The thesis centers on creating a mathematical model that shows stakeholders as rational entities who seek to maximize their benefits while facing resource constraints and regulatory limitations. The implementation of cooperative game theory allows for detailed examination of coalition building processes along with resource sharing agreements and benefit allocation practices which helps to define stable cooperative possibilities.&#13;
The primary findings show that mining companies achieve greater efficiency gains through water infrastructure collaboration than through separate individual investments. This thesis presents quantitative evidence that partnerships among mining projects generate significant financial savings and lead to better resource usage and positive environmental and social results.&#13;
Sensitivity analyses identify that cooperative stability depends on several critical factors, including the asymmetries existing in the different mining projects, the sequence in which investment decisions are made, and the transfer price for water selling for those projects that prefer free rides. The final part of the thesis presents concrete suggestions for policymakers and industry leaders to develop cooperative frameworks through specific policy mechanisms and incentive systems that support long-term collaboration.&#13;
The study advances existing academic knowledge by utilizing detailed game-theoretic approaches to address practical problems in sustainable mining practices. The findings reveal that strategic partnerships serve as fundamental tools for managing resources which can effectively tackle the urgent water scarcity challenges Chile faces.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Driving Manufacturing Best Practices Using Multimodal AI</title>
<link href="https://hdl.handle.net/1721.1/163315" rel="alternate"/>
<author>
<name>Zachary, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/163315</id>
<updated>2025-10-22T03:34:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Driving Manufacturing Best Practices Using Multimodal AI
Zachary, Mark
Multimodal artificial intelligence offers promising solutions for enhancing operational excellence in contract manufacturing, where small job shops typically operate with limited standardization and high process variability. This research develops a part similarity tool that integrates geometric, material, and scale information to improve quoting accuracy and engineering efficiency in high-mix, low-volume production environments. After examining the fragmented manufacturing landscape and reviewing current AI applications in manufacturing, the study introduces an approach based on Variational Autoencoders for encoding 3D geometry alongside material properties and dimensional scale information. The technical implementation addresses challenges of multimodal fusion, missing data handling, and computational efficiency, while a qualitative ablation study demonstrates how this comprehensive approach outperforms single-modal methods in manufacturing relevance. Engineers benefit from improved insights for manufacturing planning, while estimators achieve more consistent cost predictions using the multimodal system. Reinforcement learning with human feedback provides a mechanism for continuous refinement, creating a framework that bridges geometric similarity with manufacturing context and reduces subjectivity in critical business processes. The research contributes both theoretical insights into multimodal learning and practical implementation strategies for standardizing operations in contract manufacturing environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Made in Mexico: How Chinese Firms Navigate Nearshoring Amid Global Trade Disruptions</title>
<link href="https://hdl.handle.net/1721.1/163314" rel="alternate"/>
<author>
<name>Zeng, Bob</name>
</author>
<id>https://hdl.handle.net/1721.1/163314</id>
<updated>2025-10-22T03:34:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Made in Mexico: How Chinese Firms Navigate Nearshoring Amid Global Trade Disruptions
Zeng, Bob
This research explores the surge of Chinese manufacturing investments in Mexico as a strategic adaptation to recent global trade disruptions, specifically the U.S.–China trade tensions and the COVID-19 pandemic. By analyzing Chinese firms' motivations and strategies, the study highlights how they leverage Mexico’s strategic geographic proximity, favorable trade conditions under the USMCA, competitive labor market, and established industrial infrastructure to secure continued access to the North American market while minimizing tariff impacts and supply chain risks. Sector-specific analyses of the automotive, electronics, and renewable energy industries reveal distinct operational, regulatory, and cultural challenges encountered by these companies during their transition to Mexican production facilities. In addressing these challenges, Chinese firms have adopted strategies such as supply chain localization, rigorous adherence to North American regulatory frameworks, and effective cross-cultural management practices. Furthermore, the analysis situates this trend within the broader geopolitical context, emphasizing the role of evolving U.S. trade policies and proactive Mexican industrial initiatives in shaping the nearshoring landscape. The findings suggest that while Chinese investment in Mexico presents significant opportunities for industrial upgrading and enhanced bilateral cooperation, the longevity and effectiveness of these ventures depend on firms' strategic flexibility, deeper integration into local economies, and adept management of complex geopolitical and regulatory environments. By evaluating these elements, the research provides valuable insights into the drivers behind the increased Chinese presence in Mexico and the broader implications for global trade patterns, supply chain resilience, and regional economic integration.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Gap: Strategies and Roles of Chinese Fintech Entrepreneurs and Startups in Sub-Saharan African Markets</title>
<link href="https://hdl.handle.net/1721.1/163313" rel="alternate"/>
<author>
<name>Zhu, Yuan</name>
</author>
<id>https://hdl.handle.net/1721.1/163313</id>
<updated>2025-10-22T03:34:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Bridging the Gap: Strategies and Roles of Chinese Fintech Entrepreneurs and Startups in Sub-Sahara African Markets
Zhu, Yuan
This thesis examines the strategies and operational practices of Chinese fintech entrepreneurs in sub-Saharan African markets, with a focus on how they navigate regulatory fragmentation, localize business models, and build trust in low-infrastructure environments. Drawing on fieldwork and semi-structured interviews with founders, executives, and product leads from fifteen China-linked fintech firms across Nigeria, Kenya, and Francophone Africa, the study investigates how these actors engage with underdeveloped financial systems while adapting knowledge and models from China’s digital finance ecosystem. The research identifies several distinct approaches to market entry and adaptation, including platform integration, compliance-focused positioning, and informal ecosystem engagement. Findings suggest that these ventures do not simply export Chinese models but instead reconfigure them in response to local constraints in regulation, consumer trust, and institutional capacity. By analyzing firm-level strategies in diverse regulatory and market settings, this study contributes to broader discussions on transnational entrepreneurship, financial infrastructure development, and the evolving role of private actors in advancing digital inclusion across emerging economies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data-Driven Work Center Assignment and Pricing Strategy for a Job Shop</title>
<link href="https://hdl.handle.net/1721.1/163312" rel="alternate"/>
<author>
<name>Carson, Alix</name>
</author>
<id>https://hdl.handle.net/1721.1/163312</id>
<updated>2025-10-22T03:34:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Data-Driven Work Center Assignment and Pricing Strategy for a Job Shop
Carson, Alix
Job shops with semi-autonomous work centers must understand their capacity utilization and financial state to maximize efficiency and profitability. Machine monitoring software allows managers to see the state of machines at any time and capture real-time capacity utilization. Job shops are positioned to make full use of these work centers and must connect their manufacturing and operations strategy to real-time shop data to maximize efficiency. This research is a case study in how a job shop can create a right-to-win strategy targeting jobs that are compatible and profitable for semi-autonomous machines.

ADDMAN Precision Baltimore (APBAL), a precision machine shop in the aerospace and defense industry, is facing labor constraints and underutilized work centers. This research aims to develop a structured quoting strategy and strategic pricing model to optimize job allocation between APBAL’s two semi-autonomous machining centers: the Makino Machining Complex 2 (MMC) and the Fanuc Robodrill. By integrating qualitative observations, historical job data, and machine utilization metrics, this study identifies inefficiencies in current job assignment practices. Key findings indicate that aligning work center assignments with projected profitability and capacity utilization can improve overall efficiency. A decision-making framework and pricing matrix are proposed to enhance job quoting accuracy, optimize machine usage, and increase APBAL’s competitiveness in securing high-volume contracts. The results offer a scalable framework for APBAL and its parent company, ADDMAN Engineering, to deploy across other machining facilities, ultimately improving operational performance and financial outcomes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Technoeconomic Model for Maritime Applications of Green Power Technologies</title>
<link href="https://hdl.handle.net/1721.1/163311" rel="alternate"/>
<author>
<name>Tuana, Daniel I. S.</name>
</author>
<id>https://hdl.handle.net/1721.1/163311</id>
<updated>2025-10-22T03:34:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Technoeconomic Model for Maritime Applications of Green Power Technologies
Tuana, Daniel I. S.
Growing societal and regulatory pressures are causing industries around the world to consider greener alternatives to conventional fossil fuel power technologies. As a result, power solution suppliers like CAT are facing strategic uncertainties: if, where, and when their core product markets will be disrupted by the adoption of novel alternative technologies. With the intention of helping to inform CAT’s future product and service strategy, in conjunction with previous research related to powering mines and data centers, this thesis outlines the development of a code to estimate and compare the total cost of ownership of battery, hydrogen fuel cell, and nuclear power technologies against incumbent fossil fuel-driven systems in a variety of maritime scenarios, including serving shoreside port electricity demand and on-water power demand across a diverse set of vessel segments.
The code leverages first principles, empirical models, and researched assumptions to model the performance and costs of power systems in response to stochastically generated and deterministic power demand profiles over the useful lifetimes of the assets. For vessel applications, the code also estimates the volumes and masses of the alternative systems as a basis to judge their practicality. Hypothetical power systems for four archetypal ports and six vessel segments (across a range of power nodes) were studied to identify potential opportunities in and adjacent to the marine markets CAT currently serves.
The outcomes of the study align with conventional intuition regarding the application of the technologies considered. Under certain conditions, the results support the technoeconomic case for the implementation of battery technology on short-haul vessels whose operations are predictable and would not be disrupted by shortened refueling/recharging intervals. Similarly, the results show that adoption of small modular nuclear reactors at ports and on large vessels with consistently large baseload power demand can provide economic advantages over incumbent fossil fuel technologies. The results of the simulations are sensitive to several technology-agnostic parameters, including discount rates, fuel and electricity prices, demand growth rates, and other macroeconomic conditions. In the future, with ample case-specific data, the code developed for this thesis may provide convincing justification for the adoption of an alternative technology to serve the power demand of an individual port or vessel.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discrete Event Simulation as a Predictor for Factory Traffic Management</title>
<link href="https://hdl.handle.net/1721.1/163310" rel="alternate"/>
<author>
<name>Ramirez Echavarria, Esteban</name>
</author>
<id>https://hdl.handle.net/1721.1/163310</id>
<updated>2025-10-22T03:34:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Discrete Event Simulation as a Predictor for Factory Traffic Management
Ramirez Echavarria, Esteban
Manufacturing environments increasingly rely on automation and data-driven decision-making to optimize efficiency and production rates. This study explores the application of Discrete Event Simulation (DES) to model material flow and predict AGV (Automated Guided Vehicle), crane, and cart movements within a factory. The goal is to develop a digital twin that enables real-time decision-making, optimizes scheduling, and minimizes bottlenecks.

To achieve this, we utilize SimPy, an open-source Python-based DES library, in conjunction with a custom-built API and React.js front-end interface. The study evaluates available DES software options and justifies the selection of SimPy based on flexibility, integration capabilities, and its suitability for modeling custom business rules. The solution is structured into modular components handling path planning, transporters, flows, stations, hot-cold starts, and utilities, ensuring adaptability to future improvements.

A validation framework was established, utilizing historical data comparison and real-time validation to assess the simulation’s predictive accuracy. Over a 40-day testing period, the simulation achieved 89.6% accuracy and a sensitivity, or true positive rate (TPR), of 80.2%. The simulation provides a reliable first-pass scheduling tool that can be further refined with improved data collection.

The findings indicate that while full automation of AGV deployment is not yet feasible, this study lays the foundation for future integration with the factory’s Vehicle Management System (VMS). Business implications include the potential for automated scheduling, enhanced material flow visibility, and optimization of capacity planning. Future work should focus on improving data accuracy, integrating live factory data streams, and refining algorithms for predictive scheduling.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Strategy to Execution: An Optimization Approach to New Product Placement in the Apparel Industry</title>
<link href="https://hdl.handle.net/1721.1/163309" rel="alternate"/>
<author>
<name>Netteberg, Sofie F.</name>
</author>
<id>https://hdl.handle.net/1721.1/163309</id>
<updated>2025-10-22T03:34:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Strategy to Execution: An Optimization Approach to New Product Placement in the Apparel Industry
Netteberg, Sofie F.
This thesis presents the development and implementation of a new product placement optimization model for a large global apparel and footwear company’s supply chain, aimed at maximizing network-wide profits while aligning with long-term strategic goals amidst demand volatility. The model leverages a mixed-integer linear programming approach, integrating probabilistic demand simulations to optimize the placement of new products within the company’s existing network of third-party partner company factories. Key elements of the model, including decision variables, price and cost coefficients, an objective function, and constraints that reflect operational realities and strategic priorities, are discussed in detail. Through analysis and results validation, this research demonstrates how data-driven optimization can improve network profitability and adherence to a company’s long-term strategic supply chain objectives. The thesis then includes an exploration of historical demand variability at the host company, followed by a recommendation to integrate probabilistic forecasting in network planning to generate production networks more robust to volatility in consumer product demand. The findings contribute to advancing data-driven decision-making in supply chain management and offer actionable insights for future product placement strategies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Policy Approaches and Entrepreneurial Responses in Strategic Industries: Comparing Innovation Ecosystems in China and the United States</title>
<link href="https://hdl.handle.net/1721.1/163308" rel="alternate"/>
<author>
<name>Ni, Mengmeng</name>
</author>
<id>https://hdl.handle.net/1721.1/163308</id>
<updated>2025-10-22T03:34:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Policy Approaches and Entrepreneurial Responses in Strategic Industries: Comparing Innovation Ecosystems in China and the United States
Ni, Mengmeng
This thesis investigates how government policy approaches shape regional entrepreneurial ecosystems and influence entrepreneurial strategy in strategic industries across China and the United States. Through comparative analysis of four region-industry pairs—Shanghai's semiconductor sector, Shenzhen's drone technology sector, Boston's biotechnology cluster, and New York's fintech ecosystem—the study examines the dynamic interplay between institutional design and entrepreneurial behavior. Drawing on Porter's Cluster Theory, Mazzucato's Entrepreneurial State concept, and the MIT REAP framework, the research develops a novel policy categorization encompassing four innovation governance tools: Cluster and Crisis Response Tools, Innovation Ecosystem Tools, Market-Shaping Tools, and Institutional Restructuring Tools. A qualitative case study methodology is employed, with in-depth firm-level analyses of Biren Technology in Shanghai and Moderna in Boston illustrating how entrepreneurs strategically respond to distinct institutional environments. The findings reveal four distinct models of innovation governance: Shanghai’s state-directed coordination, Shenzhen’s regulatory experimentation, Boston’s market-based orchestration, and New York’s regulation-centered oversight. Across contexts, entrepreneurs emerge as interpretive agents who actively leverage, adapt to, and at times reshape institutional conditions. This thesis contributes to the literature by offering comparative insights into the co-evolution of public policy and entrepreneurial strategy. It also provides practical implications for policymakers designing innovation ecosystems and for entrepreneurs navigating increasingly complex regulatory and technological landscapes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Data-Driven Approach to Reducing Excess Inventory in a Multi-Echelon Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/163307" rel="alternate"/>
<author>
<name>Gosen Cappellin, Carlos Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/163307</id>
<updated>2025-10-22T03:34:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Developing a Data-Driven Approach to Reducing Excess Inventory in a Multi-Echelon Supply Chain
Gosen Cappellin, Carlos Daniel
The medical technology company MedTechCo, specifically its Spine division, has deployed millions of implants in hospitals to meet demand. When inventory deployment and allocation are not managed appropriately to ensure that products are in the right place at the right time, excess inventory arises. Currently, MedTechCo Spine holds large amounts of excess inventory that are not utilized effectively.

The objective of this research is to leverage a data-driven approach to define and reduce implant excess inventory at scale for MedTechCo’s Spine business unit in the United States. The research strategy used in this thesis begins with a root cause analysis to understand the causes of excess inventory. A robust data model was then developed to determine appropriate inventory levels by SKU, map all excess field inventory, and prioritize the most valuable excess SKUs. This data model was used to automate the company’s ERP system to repurpose excess inventory, limit unnecessary inventory deployments to the field, and eliminate redundant backorders. Finally, an impact analysis was performed to measure the potential excess inventory reduction in both dollar value and units.

Time constraints limited the implementation of the recommendations during the research period. However, MedTechCo Spine agreed to incorporate the proposed recommendations into its ERP system and operational processes in mid-2025. These recommendations will help reduce implant excess field inventory, unlocking tied-up capital, creating flexibility in the supply chain to meet demand changes, and enabling additional investment in innovation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI and ML in Real Estate Underwriting: Transforming Financial Decision-Making and Operational Efficiency</title>
<link href="https://hdl.handle.net/1721.1/163306" rel="alternate"/>
<author>
<name>Jaklis, Cyril</name>
</author>
<id>https://hdl.handle.net/1721.1/163306</id>
<updated>2025-10-22T03:34:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">AI and ML in Real Estate Underwriting: Transforming Financial Decision-Making and Operational Efficiency
Jaklis, Cyril
Real estate is the world's largest untapped market, at $650 trillion (Statista, 2023), yet technological innovation, particularly in financial underwriting, is underrepresented. Excel spreadsheets, broker-driven data collection, and expensive public database subscriptions are still used by most institutional players and family offices. These outdated approaches result in inefficiencies and higher operational expenses. Firms are now waiting for more innovative tools to improve their workflows and predict their Net Operating Income (NOI). Development and maintenance costs are often underestimated due to optimistic estimates and unplanned escalations in material or labor costs. This paper examines how to increase the accuracy of underwriting by mapping the full underwriting process, identifying operational inefficiencies, and analyzing how new technologies like Artificial Intelligence (AI) and Machine Learning (ML) are currently being utilized to better value properties and reduce error margins. The analysis covers the entire underwriting process, spanning data sourcing, collection, structuring, and analysis. It also reviews the platforms and software tools utilized to connect these phases, from initial appraisal to investment memo and investment committee (IC) decision-making. The objective is to understand practical constraints, recognize opportunities for optimization, and explore where investors can strategically position themselves to leverage these technologies while also providing a forward-looking outlook on the changing function of AI/ML in the sector over the next decade.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation Into Sources of Volatility in Sortation Center Processes to Improve Productivity and On-Time Delivery</title>
<link href="https://hdl.handle.net/1721.1/163305" rel="alternate"/>
<author>
<name>Fenstermacher, Andrew D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163305</id>
<updated>2025-10-22T03:34:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Investigation Into Sources of Volatility in Sortation Center Processes to Improve Productivity and On-Time Delivery
Fenstermacher, Andrew D.
Target Corporation has expanded its Last Mile Delivery (TLMD) capabilities through an omni-channel, "stores-as-hubs" strategy, using stores as fulfillment centers for online orders. Target Sortation Centers were developed to receive packages from stores in the region and to sort, route, and dispatch these packages each day to accomplish faster delivery for online orders. Designed to never hold inventory, these centers aim to have every package received delivered that same day. This presents new operational challenges common for brick-and-mortar retailers that develop an omni-channel strategy. This thesis investigates core processes in Sortation Centers to identify sources of volatility and propose improvements that enhance productivity and on-time delivery while minimizing labor costs and incomplete volume. Many of the current processes in Target’s Sortation Centers are manual and unstandardized. Moreover, improving operations and piloting changes is challenging, especially during peak seasons. To address these challenges, this study employs discrete event simulation (DES) using SimPy, informed by current operational data and in-person observations, to model and analyze current processes. Key findings reveal that pre-sorting TLMD volume from other national carrier volume at the stores, prior to linehaul pickup of same-day packages, decreases the overall completion time for the day’s volume by 5.8% and lowers the probability of incomplete volume by up to 85% under excess-volume scenarios. These process changes enhance site resilience to demand volatility without significant capital investment. The research underscores the value of DES for testing process improvements virtually and highlights the need for network-level optimization across Target’s omni-channel supply chain. Recommendations include piloting floor loading and pre-sorting in select markets, alongside future exploration of performance standards, automation, and standardized processes to further mitigate volatility impacts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computer Vision for Cell Line Development</title>
<link href="https://hdl.handle.net/1721.1/163304" rel="alternate"/>
<author>
<name>Albright, Jackson A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163304</id>
<updated>2025-10-22T03:34:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computer Vision for Cell Line Development
Albright, Jackson A.
Anomalies in Cell Line Development have a significant impact on material and opportunity cost when screening for the Master Cell Bank that is used for all clinical drug development. Cell Line Development scientists spend hundreds of hours collectively identifying anomalies in fluorescent and brightfield imagery to ensure only high-performing cell clones are downselected for testing. The use of computer vision models alleviates this burden on scientists and better standardizes the selection process. Three techniques were tested for classifying anomalous and nominal fluorescent images: an autoencoder, an edge CNN, and an RGB SVM. Examining performance through composite metrics such as F1 Score and MCC, the autoencoder (0.8744 and 0.8619, respectively) outperformed the edge CNN (0.8488 and 0.8257) and RGB SVM (0.8343 and 0.8252) for fluorescent anomaly classification. The high performance of the autoencoder came from training solely on anomalous images and using a percentile-based threshold to classify images by their reconstruction error. Data robustness proved to be an issue, with certain test datasets having worse performance due to inherent variability of images within both nominal and anomalous classes. Gathering and labeling more datasets for training and testing will allow models to learn from this variability and provide higher confidence in model performance for real-time screening applications. Adjusting the structure of the traditional autoencoder to that of a variational autoencoder will also help with learning the variability of images within classes and improve performance on previously unseen data. Overall, the current iteration of the models proves to be beneficial for anomaly detection in Cell Line Development and demonstrates that some modifications to data sourcing and model architecture could yield even better performance. These same techniques could be applied to similar biopharmaceutical applications, provided care is taken to properly source clean and labeled image data and construct appropriate model architectures for the images' inherent features.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensor simulation for autonomous vehicles: Diffusion based image and depth generation for driving scenes</title>
<link href="https://hdl.handle.net/1721.1/163303" rel="alternate"/>
<author>
<name>Bieske, Linn</name>
</author>
<id>https://hdl.handle.net/1721.1/163303</id>
<updated>2025-10-22T03:34:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Sensor simulation for autonomous vehicles: Diffusion based image and depth generation for driving scenes
Bieske, Linn
Background: Autonomous vehicle (AV) testing requires extensive real-world data collection, which is costly and time-consuming. Existing simulation techniques struggle to generate high-fidelity sensor data, particularly for multimodal signals like RGB camera images, LiDAR depth maps, or LiDAR point clouds. Recent advances in generative AI, specifically diffusion models, offer a solution for improving synthetic driving scene simulations.

Objective: This thesis enhances diffusion-based generative models to: 1) Encode LiDAR depth data into a stable diffusion model’s latent space, 2) Simultaneously, consistently, and with high fidelity generate eight RGB camera images, 2D LiDAR depth maps, and 3D LiDAR point clouds covering a full 360-degree range, and 3) Evaluate the realism and consistency of the generated sensor data.

Methods: A multimodal, multi-view latent stable diffusion model was trained to generate complete 360-degree synthetic driving scenes and simulate camera and LiDAR sensor signals for autonomous vehicles. The generated scenes were evaluated for sensor alignment, realism, and depth accuracy.

Results: The diffusion model produced realistic, spatially consistent camera and LiDAR sensor data, reducing reliance on real-world validation miles and lowering AV testing costs. To further improve the quality of the multimodal driving scene generation, it is recommended to retrain the VAE on LiDAR data.

Conclusion: This work advances AV simulation by extending stable diffusion models to multimodal sensor data. Future improvements should focus on real-time generation and expanding to additional sensor types or hardware setups for enhanced simulation fidelity.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predictive Modelling of Customer Membership Purchases to Minimize Marketing Costs</title>
<link href="https://hdl.handle.net/1721.1/163302" rel="alternate"/>
<author>
<name>Liu, Ying</name>
</author>
<id>https://hdl.handle.net/1721.1/163302</id>
<updated>2025-10-22T03:34:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Predictive Modelling of Customer Membership Purchases to Minimize Marketing Costs
Liu, Ying
This thesis develops and evaluates a series of predictive models to improve the efficiency of marketing resource allocation in the context of an outbound campaign for a premium membership product. The central objective is to identify customers most likely to respond positively to a membership offer, thereby minimizing outreach costs and maximizing return on investment. The study leverages a dataset from a large retail superstore that includes customer demographics, transactional behavior, and campaign response history. Data preprocessing involved the creation of engineered features such as age and tenure groupings and the transformation of categorical variables into factor types suitable for classification algorithms. Three modeling approaches were applied: classification with logistic regression, classification and regression trees (CART), and random forest. Logistic regression yielded strong predictive performance with an AUC of 0.851 and identified several statistically significant predictors, including spending on wine and meat products, recent purchase behavior, and tenure length. However, its primary limitation lies in its inability to accommodate cost asymmetries, as it lacks the capacity to incorporate a loss matrix, which assigns different penalties to false positives and false negatives. The CART model addressed this limitation by introducing a customized loss matrix that reflects the asymmetric cost structure of marketing misclassifications, assigning a higher penalty to false negatives than to false positives. While this cost-sensitive structure aligned better with business objectives, the CART model achieved a moderate AUC of 0.767, reflecting limited classification accuracy and robustness. To overcome these limitations, a Random Forest model was implemented, combining the strengths of ensemble learning with cost-sensitive training. It achieved the highest AUC of 0.864 and allowed for the integration of a loss matrix during training. Feature importance analysis revealed that variables such as the number of days since the last purchase, the amount spent on meat products, and a customer's enrollment length with the company were among the most influential predictors of customer response. The model not only improved classification performance but also supported strategic targeting through interpretable outputs. An economic evaluation demonstrated the practical value of the predictive model. Under a loss matrix where the cost of a false positive was set to $2 and a false negative to $10, the Random Forest model reduced total campaign costs by approximately 30% compared to a non-targeted approach. These cost savings translate into a meaningful economic impact, particularly when applied to large-scale campaigns. Overall, the findings support the use of Random Forest with a cost-sensitive design as a superior modeling framework in marketing applications. By aligning machine learning with real-world cost structures, this approach offers both statistical rigor and economic relevance for data-driven decision-making in customer acquisition strategies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data, Analytics, and Optimization for Production Planning</title>
<link href="https://hdl.handle.net/1721.1/163301" rel="alternate"/>
<author>
<name>Malinowski, Maxwell X.</name>
</author>
<id>https://hdl.handle.net/1721.1/163301</id>
<updated>2025-10-22T03:34:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data, Analytics, and Optimization for Production Planning
Malinowski, Maxwell X.
This thesis serves as a case study for the implementation of data analytics and optimization within a high-mix, low-volume electronics production environment in the Aerospace and Defense industry. The case study demonstrates the benefits of data analysis for defining and quantifying operational bottlenecks and explores the implementation of an optimization model to better allocate resources for production planning. Results demonstrate the insights derived from using data and analytics in this environment, and further discussion explores what contributes to an effective implementation of an optimization model in a production setting.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Debt Complexity and Equity Behavior</title>
<link href="https://hdl.handle.net/1721.1/163299" rel="alternate"/>
<author>
<name>Li, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/163299</id>
<updated>2025-10-22T03:33:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Debt Complexity and Equity Behavior
Li, Jack
I examine how the complexity of firm debt affects the incorporation of news into equity prices. As residual claimants to firm cash flows, equity investors must be able to value all outstanding debt contracts, suggesting that complex debt can interfere with their ability to process news effectively. Using a model in which debt complexity causes a subset of investors to initially underweight news precision, I derive three predictions for the equity behavior of debt-complex firms around news events: (1) they exhibit greater post-announcement drift, (2) they show elevated trading volume both on announcement day and in the post-announcement period, and (3) their return volatility decreases on announcement day but increases during the post-announcement period. These predictions are supported by empirical evidence in the context of earnings announcements, suggesting that debt complexity introduces meaningful frictions in how news is incorporated into equity markets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mapping Wildness: Simulating Post-Extraction Wildland Regeneration</title>
<link href="https://hdl.handle.net/1721.1/163298" rel="alternate"/>
<author>
<name>Griggs, Crystal Ling</name>
</author>
<id>https://hdl.handle.net/1721.1/163298</id>
<updated>2025-10-22T03:34:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mapping Wildness: Simulating Post-Extraction Wildland Regeneration
Griggs, Crystal Ling
This thesis introduces a novel approach to wildlife habitat classification for ecological regeneration. It is motivated by the extreme environmental degradation of mountaintop removal (MTR) in the Appalachian Mountains, a violent coal extraction process that has significantly altered the landscape of this ecologically sensitive region. By integrating remote sensing and Geographic Information Systems (GIS) with machine learning, this research aims to develop a method that transcends traditional human egocentric landscape assessments, advocating for a model that foregrounds the habitats and needs of critically endangered species by simulating landscape regeneration and assessing topographical alterations in terms of how design decisions impact wildlife. Central to this study is the concept of Umwelt, the subjective experiences of nonhuman species, including how their spatial perception and sensory spectrum are used to discern details within their environment. Umwelt broadens traditional spatial understanding by emphasizing that each species experiences the world through its sensory filters, which shape its interactions within its habitat. This understanding guides the research’s approach to approximating the Umwelt of the Cerulean Warbler (Setophaga cerulea), a surrogate species in this work, which has faced steep declines due to habitat loss in Appalachia. Through the development of a habitat suitability model that utilizes advanced computational tools and multispectral imagery, the thesis endeavors to offer a new perspective on environmental planning and conservation efforts: a computational approach to near-approximations of Umwelt. The methodological framework seeks not only to classify post-extraction landscapes for their potential in supporting wildlife but also to inform design and land use decisions that are sensitive to the temporal and complex processes of natural habitat regeneration. By challenging the prevailing paradigms of landscape restoration, which often lack consideration for the intricacies of wildland dynamics, such as the multitude of species interactions and interdependencies, this research proposes a new methodology that empowers wildlife to guide the ecological recovery process. The findings underscore the potential of applied GIS and machine learning in environmental advocacy, setting a precedent for future research and practice aimed at the regeneration of ecosystems that considers the ecological realities of all species involved.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrogen Adoption Dynamics: A Flexible Modeling Framework for U.S. Industrial Applications</title>
<link href="https://hdl.handle.net/1721.1/163297" rel="alternate"/>
<author>
<name>Ray, Jennifer</name>
</author>
<id>https://hdl.handle.net/1721.1/163297</id>
<updated>2025-10-22T03:33:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hydrogen Adoption Dynamics: A Flexible Modeling Framework for U.S. Industrial Applications
Ray, Jennifer
As climate change concerns drive the need for decarbonization, hydrogen stands as a potential tool to help reduce emissions across the United States industrial and energy sectors. This thesis develops a flexible modeling framework for hydrogen adoption across multiple industrial applications, designed specifically to support strategic investment decision-making in an evolving market. The tool analyzes six major industries – steel, chemicals, energy storage, biofuels, vehicles, and natural gas – through two metrics: potential hydrogen consumption and threshold prices for economic viability. The framework applies scenario analysis to examine how government policy and technological advancement influence potential market trajectories.

Analysis reveals significant sensitivity to input assumptions. Even small variations in the assumed initial hydrogen production cost can result in significantly different adoption timelines. In scenarios where initial hydrogen production costs are $5/kg, widespread adoption requires maximum policy support and technological progress. However, reducing the initial cost by just $1, to $4/kg, makes broader adoption feasible with less reliance on government intervention. The light-duty fuel cell electric vehicle penetration rate and the steel industry growth rate emerge as the most sensitive parameters affecting overall hydrogen demand, followed by the biofuel blending rate and the percentage of hydrogen injected into natural gas infrastructure.
The vehicles industry is identified as a first mover in widespread hydrogen adoption, followed by steelmaking and methanol production. Hydrogen adoption for natural gas blending, methanol for export, and methanol-to-gasoline applications occurs later due to their lower threshold prices for economic viability. Under optimal conditions with strong government support and significant technological advancements, total hydrogen demand could reach 48.8 million metric tons by 2050, approximately a sevenfold increase from scenarios with minimal support.
The tool’s value lies not in projecting a definitive, single-point forecast, but in providing a flexible framework that helps stakeholders navigate market uncertainties as the decarbonization landscape evolves. Future research should integrate supply-side dynamics, infrastructure requirements, and geographic variability to enhance projection accuracy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Green Aluminum</title>
<link href="https://hdl.handle.net/1721.1/163296" rel="alternate"/>
<author>
<name>Schurr, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/163296</id>
<updated>2025-10-22T03:34:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Green Aluminum
Schurr, Kevin
Aluminum is an important metal for facilitating the energy transition. Its high strength-to-weight ratio and easy recyclability make it a useful material in many industries, from automobiles to food packaging. However, the aluminum smelting process accounts for 2% of all global greenhouse gas emissions, due both to the high amount of power needed to drive the electrolysis reaction and to the consumption of carbon anodes in the process. As regulatory changes in Europe raise the monetary cost of emitting carbon, smelters are investigating new technologies to integrate into their operations to cut Scope 1 and 2 emissions. Two such technologies are carbon capture systems to abate process emissions and small modular nuclear reactors to reduce emissions incurred during electric power generation. This work explores the technical and economic feasibility of leveraging these systems at Aluminum of Europe, a primary aluminum smelter subject to these changing European regulations. Results suggest that while these technologies have not yet been specifically adapted for aluminum production, they can play an important role in reducing the overall emissions of the smelting process under specific economic conditions. However, the analysis indicates that, at present, significant subsidies are required for such projects to be financially viable.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fully Connected Digital Ecosystems within Hospitals – AI/ML Solutions for Improved Patient Care</title>
<link href="https://hdl.handle.net/1721.1/163294" rel="alternate"/>
<author>
<name>Dugan, Andrew D.</name>
</author>
<id>https://hdl.handle.net/1721.1/163294</id>
<updated>2025-10-22T03:33:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fully Connected Digital Ecosystems within Hospitals – AI/ML Solutions for Improved Patient Care
Dugan, Andrew D.
Cardiogenic shock (CS) in the context of acute myocardial infarction (AMI) remains a significant challenge in critical care, with high mortality rates despite the availability of advanced mechanical circulatory support (MCS) devices like the Impella pump. However, adoption of these devices in clinical practice remains limited. This thesis explores two complementary strategies to address these challenges: developing machine learning (ML) models to predict shock severity and assessing the feasibility of integrating hospital Electronic Medical Record (EMR) data into Abiomed’s digital ecosystem to support standardized shock care.
In the first phase, ML models were trained on multiple clinical datasets to predict Society for Cardiovascular Angiography and Interventions (SCAI) shock stages based on patient data. While these models demonstrated strong predictive performance, feature analysis revealed that SCAI stages often reflect physician treatment decisions rather than purely patient physiology. This raises concerns about their utility as real-time clinical decision tools and suggests that ML applications may be better suited to prompting early data collection and intervention before severe shock develops.
The second phase evaluated the feasibility of EMR integration to support the broader adoption of standardized shock protocols. After considering regulatory, operational, and technical factors, third-party data aggregation emerged as the most practical path forward. Integrating EMR data could improve outcome tracking, support protocol adoption, and strengthen partnerships between Abiomed and hospitals, creating a foundation for more consistent and proactive shock management.
Together, these findings highlight the need for predictive tools that guide early clinical action and infrastructure that supports seamless data integration. By advancing both, Abiomed can expand its role in cardiogenic shock care, improve patient outcomes, and lead the evolution of data-driven, standardized treatment strategies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Recommendations for Legacy Automakers in the Evolving Automotive Landscape</title>
<link href="https://hdl.handle.net/1721.1/163293" rel="alternate"/>
<author>
<name>Tike, Gauri</name>
</author>
<id>https://hdl.handle.net/1721.1/163293</id>
<updated>2025-10-22T03:34:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Recommendations for Legacy Automakers in the Evolving Automotive Landscape
Tike, Gauri
The automotive industry is undergoing a transformative shift driven by technological advancements in areas such as electric cars, autonomous vehicles, software-defined vehicles, and the decarbonization of mobility. Alternate means of transportation are also becoming available, sometimes at a cost lower than owning a car. In some cities, the best way to get from point A to point B might not be the car; it might involve heterogeneous modes of public transportation, using a bike, using a ride-hailing service, or using a car for different portions of the route. Despite concerns about the environment, global car ownership continues to trend upward. These changing times pose challenges to legacy automakers: while they are experts in traditional car manufacturing, modern cars require not only traditional mechanical and electrical skills but also deep expertise in developing software for these cars. With growing EV adoption, Chinese EV automakers are capturing market share quickly. What is the future of mobility with all these developments? What do traditional automakers need to do in this era to remain successful? In this report we examine key trends in mobility: global electric vehicle (EV) adoption, software-defined vehicles (SDVs), autonomous vehicles (AVs), and environmental implications. Based on this research, we propose strategic recommendations for traditional automakers to continue their success over the next decade and beyond.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating Fintech Innovations: Strategic Insights from the United States and India</title>
<link href="https://hdl.handle.net/1721.1/163292" rel="alternate"/>
<author>
<name>Shanbhag, Rishabh Ganesh</name>
</author>
<id>https://hdl.handle.net/1721.1/163292</id>
<updated>2025-10-22T03:34:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Navigating Fintech Innovations: Strategic Insights from the United States and India
Shanbhag, Rishabh Ganesh
This thesis examines how fintech ventures are reshaping financial services through new technologies and strategic choices tailored to different markets. It first looks at key innovations: digital payments, digital wealth management, and open banking, and how they have transformed everyday financial activities. The research then compares how fintech companies operate in the US and India by analyzing how market conditions, government initiatives, regulations, and consumer behaviors shape adoption. Finally, through case studies of Robinhood (US), Revolut (Global), and Paytm (India), the thesis examines how fintech firms navigate the choice between competing with traditional players and collaborating with them to scale under different market scenarios. Together, these insights aim to help entrepreneurs, investors and policymakers understand how strategy and technology come together in the fintech industry.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forming the Future: A Digital Approach to Simulating Thermoplastic Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/163291" rel="alternate"/>
<author>
<name>Harkavy, Rachael</name>
</author>
<id>https://hdl.handle.net/1721.1/163291</id>
<updated>2025-10-22T03:34:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Forming the Future: A Digital Approach to Simulating Thermoplastic Manufacturing
Harkavy, Rachael
This thesis develops a digital framework for simulating and validating thermoplastic composite manufacturing processes, focusing on reducing the time associated with new product development. Using Finite Element Analysis (FEA) software (SimSof) and high-precision 3D scanning tools (ScanSof), the research introduces a geometric similarity metric to quantify deviations between simulated and real-world parts. By aligning simulations with production data, the study aims to replace costly physical trials with reliable digital models, accelerating customer onboarding and improving manufacturing efficiency.

Key contributions include establishing a systematic pipeline for integrating simulation tools into Oribi Composites’ workflow, defining critical parameters such as laminate width, material card accuracy, and mesh size, and validating their impact on simulation accuracy. Results demonstrate that accurate material modeling and parameter selection significantly enhance digital twin accuracy, while mesh size has minimal influence, allowing for computational cost savings. The research also highlights challenges in replicating real-world conditions digitally, including inconsistent material cards and limited control over pressure profiles. Despite these limitations, the study proves that simulations can reliably predict manufacturable designs within customer tolerances, reducing reliance on physical iterations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward a Sustainable and Scalable Ecosystem: Breaking the Cycle of Intergenerational Poverty for Single Mothers in Japan with Private Sector Engagement</title>
<link href="https://hdl.handle.net/1721.1/163290" rel="alternate"/>
<author>
<name>Imaeda, Hiroko</name>
</author>
<id>https://hdl.handle.net/1721.1/163290</id>
<updated>2025-10-22T03:34:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Toward a Sustainable and Scalable Ecosystem: Breaking the Cycle of Intergenerational Poverty for Single Mothers in Japan with Private Sector Engagement
Imaeda, Hiroko
Despite Japan’s reputation as an economically advanced nation, it faces one of the highest relative poverty rates among OECD countries, with nearly half of all single-mother households living below the poverty line. This thesis examines why poverty among single mothers persists despite a formal support ecosystem and proposes a systemic redesign grounded in life-stage-aligned, user-centered principles. Drawing on historical-institutional analysis, organizational theory, fieldwork interviews, and auto-ethnographic insights, the study identifies deeply embedded barriers that reinforce fragmented, crisis-oriented support systems misaligned with real-life trajectories. In response, it introduces the "Single Mother Journey" framework, reframing single mothers not as a static category but as a dynamic population with distinct, evolving needs. Through this lens, the thesis exposes critical gaps in preventive support, labor market misalignment, and information accessibility. Building on these findings, it proposes a future-ready support ecosystem, positioning corporations, local municipalities, NPOs, and education institutions as collaborative actors. It presents mumtec, a conceptual digital platform designed to consolidate fragmented services, personalize interventions by life stage, predict crisis points, and generate adaptive policy feedback. The thesis moves beyond surface-level critique by connecting institutional analysis with practical system design to offer a scalable framework for inclusive innovation. Listening to the silent voices of single mothers navigating precarity is an ethical imperative and a strategic necessity for sustainable, resilient societies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing automotive production scheduling to reduce finished vehicle inventory</title>
<link href="https://hdl.handle.net/1721.1/163289" rel="alternate"/>
<author>
<name>Johnson, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/163289</id>
<updated>2025-10-22T03:33:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing automotive production scheduling to reduce finished vehicle inventory
Johnson, Christopher
This thesis addresses inefficiencies in automotive finished vehicle inventory management arising from misalignment between production scheduling and outbound logistics. Traditional production planning prioritizes manufacturing efficiency, causing significant inventory accumulation as vehicles await completion of full shipment loads. This research proposes an Integrated Production and Outbound Distribution Scheduling approach, introducing an optimization step within existing production scheduling workflows to align production sequences for expedited load formation. Back-testing on two automotive assembly lines over 82 weeks reveals a mean inventory reduction potential of 63–65%, with variability influenced by production volumes and vehicle configurations. A proof-of-concept implementation confirms the practical feasibility of optimized schedules, reducing inventory holding times by 33% without disrupting manufacturing operations. Computational performance analysis demonstrates good scalability for instances with fewer than 600 vehicles, though larger scenarios still yield meaningful inventory reductions. This work highlights substantial opportunities for automotive original equipment manufacturers to enhance efficiency by integrating outbound logistics into production scheduling.
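For intuition only, a minimal sketch of the load-formation idea, using a plain greedy grouping heuristic with hypothetical names rather than the thesis's actual optimization step:

from collections import defaultdict

def resequence_for_loads(vehicles, load_size):
    # vehicles: list of (vehicle_id, destination) in the original build order.
    # Greedy heuristic: schedule complete outbound loads first so finished
    # vehicles wait less time for their shipment to fill.
    by_dest = defaultdict(list)
    for vid, dest in vehicles:
        by_dest[dest].append(vid)
    sequence, leftovers = [], []
    for dest in sorted(by_dest, key=lambda d: len(by_dest[d]), reverse=True):
        queue = by_dest[dest]
        n_full = len(queue) // load_size          # complete loads available
        sequence.extend(queue[: n_full * load_size])
        leftovers.extend(queue[n_full * load_size :])
    return sequence + leftovers                   # partial loads built last

A real formulation would also respect manufacturing constraints (paint blocks, option mix), which is what the thesis's integrated optimization step handles.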
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transforming Unstructured Data into Actionable Insights: A Use Case of Generative AI in Operational Technology Problem Management</title>
<link href="https://hdl.handle.net/1721.1/163288" rel="alternate"/>
<author>
<name>Gallardo Moncayo, Gabriel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163288</id>
<updated>2025-10-22T03:34:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transforming Unstructured Data into Actionable Insights: A Use Case of Generative AI in Operational Technology Problem Management
Gallardo Moncayo, Gabriel A.
The increasing availability and reduced cost of Generative AI applications for the general public have motivated organizations across all industries to implement AI-based solutions in their daily operations. Still, they struggle to determine the capabilities and limitations of this technology when implementing it in their specific context. This thesis addresses these challenges through a practical case study: deploying a text-based Generative AI system (using Large Language Models - LLMs) for automated downtime event characterization within a global industrial operational technology (OT) setting by transforming unstructured&#13;
problem management reports into structured, actionable business insights. The developed software system contains a data pre-processing stage, followed by four LLM-based tasks (LLM-extraction, LLM-autoclassification, multi-aspect multi-level LLM-classification, and LLM-accuracy). We wrap everything in a well-structured and easy-to-understand evaluation framework that ensures the system’s output is format-reliable, accurate, and consistent. Through simple prompt engineering techniques and continuous failure-mode analysis, we achieve high accuracy (&gt;89%) and consistency (&gt;79%) for downtime event characterization at 1% of the current cost. In the end, we show that it is possible to implement an AI-based solution within current operational processes while properly communicating its capabilities and limitations and adapting its usage to where it adds the most value.
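As a rough sketch of what a consistency check of this kind can look like (hypothetical helper names, not the system's actual evaluation framework):

from collections import Counter

def consistency_score(classify, report, runs=5):
    # Re-run the same LLM classification on one unstructured report and
    # score how often the modal label recurs; 1.0 means fully consistent.
    labels = [classify(report) for _ in range(runs)]
    _, modal_count = Counter(labels).most_common(1)[0]
    return modal_count / runs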
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Raw Wire Inventory Management: A Data-Driven Approach to Demand Forecasting and Supply Chain Decision Support</title>
<link href="https://hdl.handle.net/1721.1/163287" rel="alternate"/>
<author>
<name>Gebner, Adam R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163287</id>
<updated>2025-10-22T03:33:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Raw Wire Inventory Management: A Data-Driven Approach to Demand Forecasting and Supply Chain Decision Support
Gebner, Adam R.
This thesis investigates methods to improve demand forecasting and inventory management for raw wire. Challenges such as supply chain disruptions from the COVID-19 pandemic, operational variability, and loss of expertise exposed vulnerabilities in the existing manufacturing system, leading to shortages and inefficiencies. Leveraging extensive production data, this research develops and evaluates tools to predict future wire requirements, optimize inventory, and mitigate these issues while aiming for a 100% service rate.&#13;
Key contributions include:&#13;
1. A data-driven demand simulation model, reducing forecast error and surpassing&#13;
baseline methods&#13;
2. Quantification of waste distributions and variability in wire consumption&#13;
3. An inventory simulation framework for policy evaluation and shortage mitigation&#13;
4. Clustering analysis to classify demand patterns and identify key wire categories&#13;
5. A decision support tool supporting real-time visibility into inventory levels and risks&#13;
The models and tools developed through this project provide enhanced capabilities to predict future wire requirements and manage inventory more effectively, and they can be strengthened through continued development. Though the initial results indicate potential business value, areas for future work include incorporating additional data sources, exploring advanced machine learning techniques, and conducting longer-term pilot studies to quantify business impact. This project demonstrates the value of leveraging data analytics and simulation modeling to enhance supply chain decision-making in complex manufacturing environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Economies of Space: Developing a Lean Manufacturing Framework for Work Center Floorspace Reduction</title>
<link href="https://hdl.handle.net/1721.1/163286" rel="alternate"/>
<author>
<name>Gerbino, Jacob</name>
</author>
<id>https://hdl.handle.net/1721.1/163286</id>
<updated>2025-10-22T03:34:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Economies of Space: Developing a Lean Manufacturing Framework for Work Center Floorspace Reduction
Gerbino, Jacob
This thesis aims to develop a lean manufacturing framework with the goal of optimizing the use of floorspace in Boeing's Interiors Responsibility Center South Carolina (IRCSC). The primary goal is to eliminate wasted floorspace while increasing production capacity and efficiency. The motivation behind this project stems from the need to address the fully allocated production floorspace at IRCSC and the pressing requirement to add new product lines without expanding the facility's physical footprint. Additionally, the project seeks to prepare IRCSC for possible increases in production rates for the 787 Dreamliner Program, necessitating a redesign of work centers to support higher output levels while enhancing efficiency and reducing costs.&#13;
&#13;
The project employs the DMAIC (Define, Measure, Analyze, Improve, Control) methodology and lean tools such as spaghetti diagramming and value stream mapping to treat "Misused Space" as an additional form of waste, alongside the traditional forms of lean waste. The framework was applied to a sample interior product work center to test its effectiveness. The study involved mapping the current layout, observing technician travel, conducting time studies, and analyzing value stream maps. The methodology facilitated the creation of a new floorplan and scheduling system that consolidates cure times and balances workloads between work cells. Discrete event simulation was used to validate the proposed changes, ensuring they would achieve the desired improvements.&#13;
&#13;
The results of the study revealed inefficiencies in the current layout and scheduling practices of the work center. The proposed changes demonstrated a potential 25% reduction in floorspace and a 55% decrease in product throughput time. The new scheduling and work allocation strategy reduced product throughput time from nine days to four, and the new layout reduced worker travel distances by as much as 50% in some work cells. The lean manufacturing principles and scheduling optimizations discussed in this thesis should be applied to other work centers within IRCSC. Future research should explore advanced methodologies and tools to handle the complexities of more interconnected work centers.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of AI Integration in Healthcare: Exploring Regulatory,&#13;
Cultural, and Strategic Barriers</title>
<link href="https://hdl.handle.net/1721.1/163285" rel="alternate"/>
<author>
<name>Venkatanarayanan, Sriya</name>
</author>
<id>https://hdl.handle.net/1721.1/163285</id>
<updated>2025-10-22T03:33:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Impact of AI Integration in Healthcare: Exploring Regulatory,&#13;
Cultural, and Strategic Barriers
Venkatanarayanan, Sriya
This thesis investigates the barriers and enablers to predictive AI adoption in healthcare through a thematic synthesis of 13 academic articles and real-world case studies published over the last five years. Barriers were categorized into three domains: regulatory, cultural, and strategic. These included challenges such as fragmented regulation, clinician skepticism, data quality limitations, and poor alignment with clinical workflows. Cross-cutting patterns, stakeholder tensions, and recurring meta-themes revealed that these barriers are deeply interconnected. Drawing from over 200 individual findings, an actionable visual framework was developed to guide responsible and sustainable predictive AI integration. The proposed model, consisting of an internal “Pyramid” of enablers and an external “Circular Loop” of ecosystem conditions, provides a practical structure for aligning governance, engagement, and workflow with ongoing commitments to equity, collaboration, safety, and transparency.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative AI in Private Equity for Accumulative Advantage</title>
<link href="https://hdl.handle.net/1721.1/163284" rel="alternate"/>
<author>
<name>Mahajan, Bonny</name>
</author>
<id>https://hdl.handle.net/1721.1/163284</id>
<updated>2025-10-22T03:33:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Generative AI in Private Equity for Accumulative Advantage
Mahajan, Bonny
This research explores the use of Generative AI (Gen AI) for achieving accumulative gains across various business and technical functions within commercial enterprises under private equity firms. While based on applied experiments in a private equity-owned, resource-constrained portfolio company, many of the findings presented here may apply in other types of organizations. Through this study, we conduct case studies across key departments such as customer service, purchasing, engineering, employee management, and marketing. For each use case, we delve into the utilization of custom-built or publicly available Gen AI-based tools, aiming to understand the unique considerations and challenges that may arise when implementing Gen AI solutions in industries like manufacturing, which have traditionally been underserved by the tech sector. Through this research, we identify the critical role of humans in the loop, emphasizing the importance of UI/UX design, domain expertise, and local culture in the successful adoption and acceptance of Gen AI tools designed to enhance workforce efficiency in portfolio companies. This study also aims to illustrate how investing in Gen AI technologies is ultimately an investment in a company’s most valuable resource—its employees. By equipping employees with innovative tools, the organization not only improves productivity and job satisfaction but also fosters a culture of continuous improvement and adaptability. This research highlights the transformative potential of Gen AI in reshaping traditional business processes and driving sustainable growth in different functions of organizations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Standard Work for High Mix Low Volume Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/163283" rel="alternate"/>
<author>
<name>McNulty, Will</name>
</author>
<id>https://hdl.handle.net/1721.1/163283</id>
<updated>2025-10-22T03:33:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Standard Work for High Mix Low Volume Manufacturing
McNulty, Will
This thesis examines the challenges of developing standard work at scale in a high-mix low-volume (HMLV) manufacturing environment. The research is conducted at Re:Build Composite Resources, a thermoset composites (TSC) manufacturer. In the context of the company, impending growth demands more skilled laminators, and the manual, complex nature of TSC lamination exposes the need for improved and documented standard procedures. By documenting existing processes through operator shadowing, time studies, and quality data analysis, a “best-known” standard was created for the production steps of a subset of parts. Two pilot parts—one focused on cutting scrap rates, the other on boosting throughput—demonstrated how standard work instructions and a standard work schedule designed for one-piece flow significantly reduced errors and production variability. The thesis also explores the effectiveness and limitations of using computer vision as a tool to automate work instruction and time study data set generation. Beyond the immediate improvements in quality, efficiency, and new operator onboarding, the project’s scalable framework lays out a roadmap for broader adoption&#13;
of standard work in fast-growing HMLV operations. By focusing first on parts that yield the most significant gains — either due to high volume or high unit cost — organizations can maximize returns on continuous-improvement efforts while not overburdening their engineering staff with excess analysis and documentation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating The Feasibility of Electrified Process Heating&#13;
for Drug Substance Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/163282" rel="alternate"/>
<author>
<name>Bhirgoo, Priya Darshini</name>
</author>
<id>https://hdl.handle.net/1721.1/163282</id>
<updated>2025-10-22T03:34:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating The Feasibility of Electrified Process Heating&#13;
for Drug Substance Manufacturing
Bhirgoo, Priya Darshini
The pharmaceutical industry relies on high-temperature fluids such as pure steam to support critical operations including equipment cleaning and sterilization and on hot Water-For-Injection (WFI) as a key ingredient for drug substance manufacturing. These high-temperature process-driven heat demands are fulfilled through fossil fuel-based heating which contributes significantly to Scope 1 carbon emissions. Recognizing the link between environmental stressors and human health, Amgen has committed to achieving carbon neutrality by 2027. This thesis explores the feasibility and implications of transitioning from fossil fuel-based process heating to a fully electric system at one of Amgen’s drug substance manufacturing sites. Amgen’s existing fossil fuel-based steam system was analyzed through site visits, engineering reviews, and stakeholder engagements to quantify capital and operating costs, energy usage, and carbon emissions. A fully electric alternative was designed by researching commercial technologies and collaborating with suppliers as well as internal stakeholders. The analysis found that while the capital investment required for electrification is comparable to that of traditional steam systems, the operating costs for an electric system are significantly higher, driven by the higher price of electricity relative to natural gas. From a sustainability perspective, electrification eliminates on-site Scope 1 carbon emissions but shifts emissions to Scope 2, making the environmental benefit dependent on the carbon intensity of the local electricity grid. As grids transition to renewable energy sources, the potential for long-term emissions reductions strengthens. Future work should focus on evaluating the costs of necessary electrical infrastructure upgrades and identifying regions with lower-carbon, lower-cost electricity grids where electrified systems could be more readily implemented.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Partnerships as Retention Levers: A Study of Credit Card–Entertainment Collaborations</title>
<link href="https://hdl.handle.net/1721.1/163281" rel="alternate"/>
<author>
<name>Tchelikidi, Cloe</name>
</author>
<id>https://hdl.handle.net/1721.1/163281</id>
<updated>2025-10-22T03:33:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Partnerships as Retention Levers: A Study of Credit Card–Entertainment Collaborations
Tchelikidi, Cloe
In mature, competitive sectors such as financial services and media and entertainment, customer loyalty is increasingly difficult to sustain. This thesis explores the emergence of cross-industry partnerships, specifically between credit card issuers and digital entertainment platforms, as a strategic response to rising churn and declining differentiation. Through a case study of the American Express Digital Entertainment Credit, the research examines how lifestyle-aligned benefits can foster deeper behavioral engagement, reduce switching, and enhance customer lifetime value. The thesis situates these partnerships within the broader evolution of loyalty strategies, marked by hyper-personalization, subscription fatigue, and platform convergence. Findings suggest that flexible, recurring rewards embedded in consumers’ daily routines offer a path to durable retention, especially among younger, digital-native cohorts. The study concludes that such partnerships are not peripheral marketing tools but increasingly core to competitive strategy in commoditized markets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Dynamics of Regulatory Compliance, Cost Management, and Competition in the Pharmaceutical Industry</title>
<link href="https://hdl.handle.net/1721.1/163280" rel="alternate"/>
<author>
<name>Wu, Lanchen</name>
</author>
<id>https://hdl.handle.net/1721.1/163280</id>
<updated>2025-10-22T03:33:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Exploring the Dynamics of Regulatory Compliance, Cost Management, and Competition in the Pharmaceutical Industry
Wu, Lanchen
This paper explores how financial pressures, regulatory enforcement, and market dynamics interact to shape pharmaceutical manufacturing quality and drug supply stability. Using a causal loop diagram (CLD), it examines how cost-cutting behavior affects control and validation capabilities, interacts with regulatory agency oversight, and contributes to recurring drug shortages. The analysis highlights how competition drives companies to operate at or near the minimum regulatory requirements, gradually eroding quality systems. Because of the nature of medical products, the quality of a drug cannot be directly assessed by individual users, distributors, or payers, making it necessary for government agencies like the FDA to rely on internal manufacturing data to ensure all drugs meet a minimum standard of quality. Regulatory oversight serves as a safeguard rather than a tool for guiding business decisions. However, its effectiveness is constrained by the frequency of inspections, the capacity of auditors, and limited resources—especially when government budgets are stretched and other priorities take precedence. The paper also discusses how manufacturers may avoid detection by strategically presenting information during inspections, making it harder for auditors to spot issues and allowing weakened controls to persist. Over time, these dynamics reinforce one another, creating a self-sustaining cycle in which cost pressures lead to minimal compliance, quality issues, and regulatory responses that increase costs further. &#13;
As the number of manufacturers shrinks due to market consolidation, supply disruptions become more severe when failures occur. Regulatory discretion—intended to avoid immediate shortages—can unintentionally reduce incentives for long-term quality investment, further weakening the system’s resilience. &#13;
To address these issues, the paper proposes structural changes, including financial accountability for payers during shortages, tighter regulatory focus on process reliability, and linking regulatory flexibility to quality improvement obligations. These approaches aim to create balancing mechanisms that reduce cost-driven deterioration of quality and promote a more stable pharmaceutical supply chain.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ant Group’s Transformative Impact on China’s Financial Industry</title>
<link href="https://hdl.handle.net/1721.1/163279" rel="alternate"/>
<author>
<name>Pan, Kathryn</name>
</author>
<id>https://hdl.handle.net/1721.1/163279</id>
<updated>2025-10-22T03:33:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ant Group’s Transformative Impact on China’s Financial Industry
Pan, Kathryn
Ant Group, China’s leading digital finance company, has fundamentally transformed the nation’s financial industry through groundbreaking innovations in digital payments, micro-lending, wealth management, and investment advisory. This paper explores the company’s role in reshaping China’s financial ecosystem, analyzing its impact on traditional banking institutions, regulatory policies, and consumer behavior. Utilizing analytical frameworks such as Porter’s Five Forces, PEST analysis, and SWOT analysis, this study provides a comprehensive assessment of the external and internal factors influencing Ant Group’s development and competitive positioning.&#13;
This research highlights Ant Group’s key financial innovations, including its online transaction platform, offline payment services, online credit solutions, digital fund distribution channels, and AI-driven investment advisory. By leveraging advanced technologies such as artificial intelligence, blockchain, and big data analytics, Ant Group has enhanced service efficiency, expanded accessibility, and strengthened risk management capabilities. These innovations have significantly advanced financial inclusion, extending financial services to previously underserved populations. However, Ant Group’s rapid growth has also intensified regulatory scrutiny, prompting major restructuring efforts and adjustments to its business model.&#13;
This paper employs three major analytical frameworks: PEST analysis, Porter’s Five Forces, and SWOT analysis. The PEST analysis examines the political, economic, social, and technological factors shaping Ant Group’s trajectory, highlighting the impact of evolving government policies and macroeconomic conditions on its operations. Meanwhile, Porter’s Five Forces framework assesses the competitive dynamics within China’s financial sector, identifying key market pressures such as rising competition and regulatory constraints. Finally, the SWOT analysis evaluates Ant Group’s internal strengths and weaknesses, as well as external opportunities and threats, offering a comprehensive perspective on the company’s strategic positioning.&#13;
Drawing from these analyses, the paper offers strategic recommendations to ensure Ant Group’s sustained growth and resilience in an increasingly complex financial environment. These recommendations include strengthening regulatory compliance, fostering strategic alliances with both domestic and international partners, and further leveraging technological advancements to expand its service offerings. Additionally, the study explores potential global expansion strategies, considering how Ant Group can adapt its innovative financial solutions to international markets while navigating diverse regulatory landscapes.&#13;
By examining Ant Group’s evolution and the broader implications of its digital finance model, this study contributes to a deeper understanding of fintech’s disruptive power in China’s financial sector. The findings provide valuable insights for industry leaders, policymakers, and scholars interested in the intersection of financial technology, regulation, and strategic business management. As digital finance continues to evolve, Ant Group’s trajectory serves as a critical case study in balancing innovation, regulation, and market competition within a rapidly shifting financial landscape.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Process Optimization and Proactive Quality Control to Increase Investment Casting Throughput</title>
<link href="https://hdl.handle.net/1721.1/163278" rel="alternate"/>
<author>
<name>Sircar, Julia Sarita</name>
</author>
<id>https://hdl.handle.net/1721.1/163278</id>
<updated>2025-10-22T03:33:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Process Optimization and Proactive Quality Control to Increase Investment Casting Throughput
Sircar, Julia Sarita
Blue Origin is an aerospace company with ambitious throughput goals in response to increased commercial space exploration. Pressure to increase throughput is especially apparent within its BE-4 engine business, as the engines support Blue Origin and its customers. Blue Castings is one of the primary in-house manufacturing plants that supports BE-4 production; the plant manufactures rocket engine components through a process called investment casting. Investment casting, by nature, is a complex process involving long rework times, high incidence of defects, and significant process variability. These characteristics contribute to the discrepancies between Blue Origin’s target BE-4 production rate, the production rate feasible at Blue Castings, and its actual delivery rate. This thesis explores how defect management and prevention techniques can improve throughput at Blue Castings and reduce the number of Blue Origin’s schedule delays attributable to Blue Castings. The research began with a baseline investigation and analysis of Blue Castings’ actual and best-case throughput rates compared to its goal. Two gaps were identified: 1) a gap between actual and feasible throughput, and 2) a gap between feasible and target throughput. The analyses highlight the need for better process and quality management to close both gaps. Through a mixed-method approach, the researcher explored and piloted process and data improvements to understand their impact on throughput. This included qualitative and quantitative data collection through on-site interviews, random sampling of defect data, and queries from the manufacturing execution system. With this data, the researcher investigated how machine learning can predict rework severity and support defect prevention. A case study on a selected part number demonstrated the potential to improve throughput by reducing unnecessary rework. By aligning stock-on surface criteria to downstream machining requirements, average rework loops were reduced from thrice the industry benchmark to below the benchmark. This increased capacity at the rework work center and improved the overall delivery of this part. The research also demonstrated how a cross-functional collaboration to formalize producibility lessons reduces the creation of defects, promotes systematic knowledge-sharing, and accelerates improvements similar to the stock-on surface case study. In parallel, this research evaluated how Blue Castings could improve defect documentation and tracking without causing significant additional effort for operators. The researcher’s findings highlight the limitations that handwritten weld maps and inconsistent data capture practices impose on effective defect prevention. Digitization of defect tracking is recommended to enable consistent defect data collection and improved root cause and trend analyses. As data quality improves, applying classification ML models for predictive analytics can scale throughput. This work provides recommendations for Blue Castings to implement mechanisms that reduce rework, improve producibility, and increase throughput to align with Blue Origin’s goals.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Searching with Intuition: Exploring (the bounds of) LLM-guided Search with Unknown Objectives</title>
<link href="https://hdl.handle.net/1721.1/163276" rel="alternate"/>
<author>
<name>Kaashoek, Justin H.</name>
</author>
<id>https://hdl.handle.net/1721.1/163276</id>
<updated>2025-10-22T03:33:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Searching with Intuition: Exploring (the bounds of) LLM-guided Search with Unknown Objectives
Kaashoek, Justin H.
Large language models (LLMs) can perform a wide range of search and optimization tasks over discrete spaces. This work seeks to explore the limits of LLM-guided search. We construct a set of text optimization tasks with different levels of "intuitiveness" and evaluate whether LLMs can effectively optimize objectives. We show that the LLM's performance depends not only on its intuition for the objective, but also on the alignment between the objective and its priors. We also find that the LLM can successfully optimize an objective even without an explicit description of the objective. Our results largely focus on greedy search strategies; we develop a theoretical characterization of conditions under which greedy search is optimal, so that the LLM's failures result from a fundamental inability to take gradient-like steps rather than from a suboptimal search strategy.
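A minimal sketch of such a greedy LLM-guided loop, assuming placeholder propose/score callables rather than the paper's actual setup:

def greedy_llm_search(text, propose, score, max_steps=20):
    # propose(text) stands in for an LLM suggesting candidate rewrites;
    # score(text) is the (possibly undisclosed) objective being optimized.
    best, best_score = text, score(text)
    for _ in range(max_steps):
        top = max(propose(best), key=score)    # LLM "intuition" step
        if score(top) &lt;= best_score:           # no strict improvement: stop
            break
        best, best_score = top, score(top)
    return best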
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Throttleable Attitude Control Scheme for Electrospray Propulsion Systems</title>
<link href="https://hdl.handle.net/1721.1/163273" rel="alternate"/>
<author>
<name>Harjono, Hanna-Lee</name>
</author>
<id>https://hdl.handle.net/1721.1/163273</id>
<updated>2025-10-22T03:33:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Development of a Throttleable Attitude Control Scheme for Electrospray Propulsion Systems
Harjono, Hanna-Lee
Electrospray thrusters have emerged as highly promising propulsion options for small satellites due to their compact size, low weight, and low power requirements. These thrusters offer precise, efficient, and scalable attitude control, making them ideal for missions requiring fine adjustments and advanced capabilities such as formation flying and docking maneuvers. However, to fully exploit the potential of electrospray thrusters, control strategies specific to them must be developed. In this work, a parameterized, PID gain-scheduled attitude controller that leverages the unique throttleability of electrospray thrusters is developed and validated. The developed controller is adaptable across operating conditions, as well as electrospray thrust coefficient values. Extensive modeling efforts are undertaken to incorporate the throttleability and operational constraints of electrospray thrusters, ensuring accurate performance predictions. The control system is simulated under various operating conditions to assess and verify its functionality and robustness against disturbance torques. Validation experiments in a magnetic levitation CubeSat testbed are proposed.
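For reference, the single-axis PID core that a gain-scheduled scheme would wrap (a generic sketch with assumed names, not the controller developed in the thesis):

def pid_torque(state, setpoint, gains, dt):
    # gains = (kp, ki, kd), selected by the scheduler for the current
    # operating condition and thrust coefficient value.
    kp, ki, kd = gains
    error = setpoint - state["angle"]
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    # Commanded torque, later mapped to throttleable thruster setpoints.
    return kp * error + ki * state["integral"] + kd * derivative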
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulation Modeling of Drug Substance Tech Transfer Timelines at Amgen</title>
<link href="https://hdl.handle.net/1721.1/163271" rel="alternate"/>
<author>
<name>Goel, Viraat Yogi</name>
</author>
<id>https://hdl.handle.net/1721.1/163271</id>
<updated>2025-10-22T03:33:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Simulation Modeling of Drug Substance Tech Transfer Timelines at Amgen
Goel, Viraat Yogi
Technology transfer (TT), or the process by which a product's manufacturing is moved and scaled, is a complex business process with countless deliverables and stakeholders. This is especially true in biomanufacturing, where drug commercialization timelines are measured in years, manufacturing facilities are specially designed, and regulations must be stringently met. This systems-level complexity can create inefficiencies in the TT process, lengthening timelines and wasting resources. In this project, we use simulation modeling techniques to digitally model Amgen's Commercial Tech Transfer (CTT) process for biologic drugs. We use virtual experimentation to identify key bottlenecks in the TT workflow, quantify how workstream alterations impact project timelines, and identify process changes likely to shorten timelines. We also extend this analysis to Amgen's New Product Introduction (NPI) process, identifying how coordination between upstream and downstream processes may accelerate NPI timelines. Finally, we link this project to the ongoing development of TT data visualization dashboards.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safety Stock Modeling for a Medical Devices Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/163270" rel="alternate"/>
<author>
<name>Chong, Julie</name>
</author>
<id>https://hdl.handle.net/1721.1/163270</id>
<updated>2025-10-22T03:33:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Safety Stock Modeling for a Medical Devices Supply Chain
Chong, Julie
This thesis examines the current inventory management practices at a leading manufacturer of medical devices, and identifies areas for significant improvement. The analysis reveals inefficiencies in safety stock management, with finished goods inventories being excessively high and raw material stocks being underestimated. The study applies single-echelon and multi-echelon inventory modeling to demonstrate potential cost savings through optimized safety stock levels. Additionally, it highlights the importance of reevaluating high service level targets and improving forecasting accuracy to reduce reliance on costly countermeasures. The thesis also emphasizes the need for effective management of component lead times and enhanced data visibility. Recommendations include transitioning to data-driven safety stock calculations, adopting multi-echelon inventory optimization, reassessing service level targets, enhancing forecasting accuracy, and improving component lead time management. By implementing these strategies, the company can enhance operational efficiency, reduce costs, and build greater resilience in its supply chain.
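For context, the textbook single-echelon safety stock calculation under normally distributed demand and lead time (a generic sketch, not the company's model):

from math import sqrt
from statistics import NormalDist

def safety_stock(mu_d, sigma_d, mu_lt, sigma_lt, service_level=0.98):
    # z-score for the target cycle service level
    z = NormalDist().inv_cdf(service_level)
    # Combine demand variability over lead time with lead-time variability.
    return z * sqrt(mu_lt * sigma_d**2 + mu_d**2 * sigma_lt**2)

# e.g. safety_stock(mu_d=100, sigma_d=30, mu_lt=2.0, sigma_lt=0.5) per period

Because the service-level target enters only through z, relaxing very high targets directly shrinks the buffer, which is one reason the thesis recommends reevaluating them.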
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expanding home broadband coverage through existing Low Earth Orbit megaconstellations</title>
<link href="https://hdl.handle.net/1721.1/163269" rel="alternate"/>
<author>
<name>Gonzalez Martinez, Gretel</name>
</author>
<id>https://hdl.handle.net/1721.1/163269</id>
<updated>2025-10-22T03:33:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Expanding home broadband coverage through existing Low Earth Orbit megaconstellations
Gonzalez Martinez, Gretel
Expanding broadband access to underserved areas continues to be a significant challenge for Internet Service Providers (ISPs). While their services perform well in high-density regions, they face scalability limitations in sparsely populated areas where infrastructure costs must be spread across a smaller customer base. This study explores the potential of Low Earth Orbit (LEO) satellite megaconstellations as a scalable solution for extending broadband coverage in the United States. By analyzing the technical capabilities, deployment timelines, and economic feasibility of partnering with LEO satellite providers, this research offers a strategic framework for integrating satellite broadband into ISPs’ service portfolios.&#13;
&#13;
A customer demand model identifies approximately 17 million unserved households within the addressable market of one of the largest U.S. telecommunications companies. The business case assessment evaluates broadband profitability by optimizing customer base size relative to proximity to existing infrastructure. While fiber optics remains the most profitable solution in high-density areas and fixed wireless access effectively utilizes excess 5G capacity, both require substantial infrastructure investment, limiting their feasibility for rural broadband expansion. In contrast, a satellite broadband partnership emerges as the most cost-effective solution for at least 1 million households, surpassing the profitability of currently existing offerings. With minimal capital investment, satellite technology enables rapid customer acquisition and scalable nationwide expansion. The analysis highlights the critical role wholesale agreements play in profitability and the need to secure a minimum revenue share of 16.5% to reach the break-even point.&#13;
&#13;
Performance modeling and curve approximation techniques estimate that if Kuiper meets Federal Communications Commission (FCC) deployment milestones, it could serve 8.5 million customers by 2026, with full nationwide coverage projected by 2029. Under a 200x oversubscription model, Kuiper’s total subscriber capacity could scale to 32.8 million, demonstrating its ability to complement current broadband offerings. While LEO broadband networks can achieve capacities in the tens of Tbps, they remain far below fiber networks, which operate in the thousands of Tbps. Rather than competing directly, satellite broadband is positioned as a complementary solution, addressing connectivity gaps in rural and underserved&#13;
regions.&#13;
&#13;
To capitalize on these findings, this study recommends leveraging existing LEO megaconstellations to expand broadband coverage nationwide. A phased rollout should begin with a beta program in California, the state with the highest number of unserved households, to validate network performance and optimize deployment for broader expansion. Partnering with an&#13;
existing LEO megaconstellation could effectively bridge the digital divide in rural areas, expand service offerings, and enable a stronger position in the growing satellite broadband market.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Economic Determinants of Increased Use of Performance-Vesting Provisions in CEO Incentives</title>
<link href="https://hdl.handle.net/1721.1/163268" rel="alternate"/>
<author>
<name>Kim, Jason Gwanhee</name>
</author>
<id>https://hdl.handle.net/1721.1/163268</id>
<updated>2025-10-22T03:33:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Economic Determinants of Increased Use of Performance-Vesting Provisions in CEO Incentives
Kim, Jason Gwanhee
This study examines the determinants of firms adopting performance-vesting long-term incentive (PLI) awards, a rapidly growing form of executive compensation. Using data provided by Equilar on Russell 3000 firms, I investigate how a firm's contracting environment and inter-firm networks influence the adoption and design of PLI awards. I find that stock liquidity and analyst coverage significantly increase the likelihood of adoption by enhancing the informativeness of performance measures. The findings suggest that firms adopt PLI awards to better align managerial incentives with shareholder interests, focusing on the measures that are both reliable and strategically aligned. I also show that board interlocks, particularly those involving compensation committee members, and shared compensation consultants play a significant role in facilitating the diffusion of PLI awards across firms.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Business Value of Enterprise Digital Architecture</title>
<link href="https://hdl.handle.net/1721.1/163263" rel="alternate"/>
<author>
<name>Venkata Aditya, Saraswatula (Adi SV)</name>
</author>
<id>https://hdl.handle.net/1721.1/163263</id>
<updated>2025-10-22T03:33:56Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Business Value of Enterprise Digital Architecture
Venkata Aditya, Saraswatula (Adi SV)
Digital technologies are fundamentally reshaping markets and organizations globally. This thesis is exploratory research that seeks to explain how large multi-regional and global enterprises determine, prioritize, measure, and manage business value outcomes of digital investments over time. I examine the value construct of digital initiatives in firms from different industries by interviewing various stakeholders. Insights surfaced from this primary research are analyzed in conjunction with the concepts from current literature. Qualitative findings are proposed, and a list of value metrics is presented that can serve as a future reference for firms. A causal loop diagram is proposed to visualize firm capabilities and value dynamics.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Inventory Rebalancing: Strategies for Managing Excess Inventory in a Dynamic Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/163261" rel="alternate"/>
<author>
<name>Oludipe, Lanre</name>
</author>
<id>https://hdl.handle.net/1721.1/163261</id>
<updated>2025-10-22T03:33:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Inventory Rebalancing: Strategies for Managing Excess Inventory in a Dynamic Supply Chain
Oludipe, Lanre
The increasing demand for faster consumer delivery has led retailers to establish smaller regional distribution centers alongside traditional main distribution centers (MDCs). However, the limited capacity of some of these regional centers heightens the need for precise inventory forecasting and deployment to minimize excess inventory, particularly when few viable outlets exist for excess inventory. This research examines strategies to mitigate excess inventory at regional centers through inventory rebalancing, the integration of additional outlets, and modifications to existing inventory policies. A Monte Carlo simulation was conducted to compare the performance of the current system with a modified system incorporating these enhancements. The results showed that the modified system improved capacity utilization and reduced inventory deployment from the MDC without affecting margin. These improvements can enable more agile operations at smaller regional centers, reduce inventory buildup, and reduce the pressure of precise inventory deployment.
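A toy Monte Carlo of the push-to-regional-DC dynamic (the structure and parameters below are illustrative assumptions, not the thesis's model):

import random

def simulate_rdc(deploy_qty, capacity, demand_mu, demand_sigma,
                 days=364, trials=2000, seed=7):
    # A main DC pushes deploy_qty units to a capacity-limited regional DC
    # daily; track unmet demand and units that overflow and need rebalancing.
    rng = random.Random(seed)
    lost = overflow = 0
    for _ in range(trials):
        on_hand = 0
        for _ in range(days):
            on_hand += deploy_qty
            overflow += max(0, on_hand - capacity)
            on_hand = min(on_hand, capacity)
            demand = max(0, round(rng.gauss(demand_mu, demand_sigma)))
            lost += max(0, demand - on_hand)
            on_hand -= min(demand, on_hand)
    n = trials * days
    return lost / n, overflow / n   # avg units lost / overflowed per day

Sweeping deploy_qty in a model like this exposes the trade-off between lost sales and rebalancing volume that the proposed policy changes aim to soften.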
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electric Vehicle Fleet Charging: A Simulation-Based Comparison of Charging Strategies and Cost Implications</title>
<link href="https://hdl.handle.net/1721.1/163260" rel="alternate"/>
<author>
<name>Knapp, Rachael</name>
</author>
<id>https://hdl.handle.net/1721.1/163260</id>
<updated>2025-10-22T03:33:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Electric Vehicle Fleet Charging: A Simulation-Based Comparison of Charging Strategies and Cost Implications
Knapp, Rachael
The global shift to electric vehicles (EVs) is progressing rapidly, driven by the need to reduce greenhouse gas (GHG) emissions and global reliance on fossil fuels. However, fleet electrification presents unique challenges, particularly in rolling out the necessary charging infrastructure and maintaining operational efficiency. This study examines how various depot-based fleet charging strategies impact up-front capital and long-term operational expenditures. The operational feasibility of each method is evaluated through the use of a discrete event simulation. The study incorporates fleet data to assess the time required to charge the fleet, the number of chargers needed, and the number of associates needed to operate manual strategies. The analyzed charging methods include dedicated level 2 charging, vehicle swapping, level 2 cable swapping, level 3 cable swapping, and sequential and simultaneous charging. Key findings indicate that while a 1:1 vehicle-to-charger ratio ensures charging reliability within the designated time, it incurs the highest capital costs. Alternative strategies, such as cable swapping and simultaneous charging, significantly reduce costs while successfully charging the fleet within the charging window.
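A skeletal discrete event simulation of the dedicated-charger baseline, sketched with simpy under assumed fleet parameters (the thesis's model also covers the swap strategies):

import random
import simpy

def charge(env, chargers, hours):
    with chargers.request() as slot:   # queue for a free charger
        yield slot
        yield env.timeout(hours)       # occupy it for the full charge

def depot_makespan(n_vehicles=40, n_chargers=10, mean_hours=6.0, seed=1):
    rng = random.Random(seed)
    env = simpy.Environment()
    chargers = simpy.Resource(env, capacity=n_chargers)
    for _ in range(n_vehicles):
        env.process(charge(env, chargers, rng.uniform(0.5, 1.5) * mean_hours))
    env.run()
    return env.now   # hours until the whole fleet is charged

# A strategy is feasible only if depot_makespan(...) fits the charging window.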
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Integrated Optimization Model for Large-Scale EV Fleet Deployment: Balancing Emissions Reduction and Operational Costs</title>
<link href="https://hdl.handle.net/1721.1/163259" rel="alternate"/>
<author>
<name>Kasliwal, Mohit</name>
</author>
<id>https://hdl.handle.net/1721.1/163259</id>
<updated>2025-10-22T03:33:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Integrated Optimization Model for Large-Scale EV Fleet Deployment: Balancing Emissions Reduction and Operational Costs
Kasliwal, Mohit
This thesis presents an integrated optimization framework designed for the large-scale deployment of electric vehicles (EVs) within commercial fleets, specifically focusing on balancing emissions reduction and operational cost efficiencies. Utilizing Verizon’s extensive fleet of over 10,000 light-duty vehicles across 1,000 sites as a case study, the research addresses the challenges and complexities in effective site selections for such a large and dispersed fleet. &#13;
The research involved developing and testing several optimization models under varying scenarios, including scenarios prioritizing maximum operational savings, maximum emissions reduction, and a hybrid model employing an internal cost of carbon (ICC) to balance both operational and environmental objectives. The model essentially develops a ranking system for sites – suggesting which sites to electrify in which year and order, with how many EV conversions (from existing ICE vehicles) at each site.&#13;
The results highlight the importance of tailoring EV deployment strategies to site-specific conditions, such as unique vehicle usage patterns, grid emissions profiles, regional operational costs, and available incentives. Particularly, smaller sites were found to offer greater relative benefits in terms of both cost savings and emissions reductions per unit of capital invested due to their high average mileage, making them strategic priorities for early electrification.&#13;
Operational feasibility was also thoroughly examined, recommending practical constraints such as limiting the number of sites electrified annually to ensure project manageability and effectiveness. &#13;
Sensitivity analyses addressed critical uncertainties such as battery degradation over the vehicle lifespan and the impact of extreme weather on EV performance. These analyses underscore the necessity of conservative battery range buffers ("safe ranges"). Robust load management strategies can be deployed to significantly reduce demand charges and optimize charging schedules based on time-of-use rates where available.&#13;
Recommendations from the study advocate for implementing a hybrid optimization strategy incorporating an ICC based on corporate goals, continuous adaptive management informed by ongoing data collection, and strategic infrastructure investments to future-proof EV deployments. Policy alignment is also critical to enhance economic viability via incentives and ensure regulatory compliance.&#13;
Finally, the thesis proposes future research directions, including investigation of advanced load management and integration with renewable energy sources, exploring bi-directional charging to add revenue streams, incorporating marginal operating emissions rate (MOER) data to further reduce grid emissions and exploring the resilience of EV fleets to power outages. These initiatives aim to further enhance strategic decision-making and ensure the long-term sustainability and efficiency of fleet electrification programs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Breaking the Chain: Building Resilience in the Insurance Value Chain</title>
<link href="https://hdl.handle.net/1721.1/163258" rel="alternate"/>
<author>
<name>Chuah, Chung Jin</name>
</author>
<id>https://hdl.handle.net/1721.1/163258</id>
<updated>2025-10-22T03:33:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Breaking the Chain: Building Resilience in the Insurance Value Chain
Chuah, Chung Jin
This thesis examines how strategic transformation approaches reshape the resilience of the Property &amp; Casualty (P&amp;C) insurance industry in light of ongoing technological disruption, climate change, and regulatory pressures. Through empirical analysis of 9 insurers, the study reveals that while all transformation types improve performance, phased 'test-refine-execute' strategies achieve superior outcomes by combining operational focus with strategic agility. The research identifies four implementation levers: (i) digital modernization, (ii) phased transformation execution, (iii) resource-allocation agility, and (iv) aligned leadership, which together explain why some transformations succeed where others fail.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Domain Adaptation of VLM for Soccer Video Understanding</title>
<link href="https://hdl.handle.net/1721.1/163257" rel="alternate"/>
<author>
<name>Jiang, Tiancheng(Tony)</name>
</author>
<id>https://hdl.handle.net/1721.1/163257</id>
<updated>2025-10-22T03:33:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Domain Adaptation of VLM for Soccer Video Understanding
Jiang, Tiancheng(Tony)
Vision Language Models (VLMs) have demonstrated strong performance in multi-modal tasks by effectively aligning visual and textual representations. However, most video understanding VLM research has been domain-agnostic, leaving their transfer learning capability to specialized domains underexplored. In this work, we address this by exploring the adaptability of open-source VLMs to specific domains, focusing on soccer as an initial case study. Our approach uses large-scale soccer datasets and an LLM to create instruction-following data, then uses them to iteratively fine-tune the general-domain VLM in a curriculum learning fashion (first teaching the model key soccer concepts, then question-answering tasks). The final adapted model, trained using a curated dataset of 20k video clips, exhibits significant improvement in soccer-specific tasks compared to the base model, with a 37.5% relative improvement for the visual question-answering task and an accuracy improvement from 11.8% to 63.5% for the downstream soccer action classification task.
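The curriculum ordering itself reduces to something like the following sketch (hypothetical wrapper names; the actual datasets, trainer, and pipeline are the thesis's own):

def curriculum_adapt(vlm, concept_set, qa_set, finetune):
    # Easy-to-hard ordering: instill domain concepts first, then train on
    # the harder instruction-following question-answering data.
    for stage in (concept_set, qa_set):
        vlm = finetune(vlm, stage)   # one instruction-tuning pass per stage
    return vlm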
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimizing Cost of Intra-Yard Finished Vehicle Logistics Through Automation and Optimization</title>
<link href="https://hdl.handle.net/1721.1/163256" rel="alternate"/>
<author>
<name>Garber, Jeremy</name>
</author>
<id>https://hdl.handle.net/1721.1/163256</id>
<updated>2025-10-22T03:33:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Minimizing Cost of Intra-Yard Finished Vehicle Logistics Through Automation and Optimization
Garber, Jeremy
This thesis analyzes and validates autonomous Finished Vehicle Logistics (FVLa) operations at the plant of an automotive Original Equipment Manufacturer (OEM) through the development of a Vehicle-Plug-In (VPI) system with Level 4 autonomous driving capabilities. The research combines process flow analysis with FlexSim simulation modeling to optimize operational parameters and assess safety performance. Results demonstrate FVLa operational feasibility with a recommended VPI inventory of 750 units and a 6-hour replenishment cycle. The study's key contributions include a validated operational model using Economic Order Quantity calculations and a safety framework utilizing Bayesian Networks, establishing foundations for the planned 2028 implementation while maintaining required throughput rates and safety standards.
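For orientation, the classic Economic Order Quantity calculation that underlies replenishment sizing of this kind (a textbook sketch, not the study's calibrated inputs):

from math import sqrt

def eoq(annual_demand, order_cost, unit_holding_cost):
    # Quantity that balances fixed ordering cost against holding cost.
    return sqrt(2 * annual_demand * order_cost / unit_holding_cost)

# e.g. eoq(annual_demand=30000, order_cost=150.0, unit_holding_cost=4.0)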
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonized Cement Manufacturing via Advanced Production Technologies</title>
<link href="https://hdl.handle.net/1721.1/163255" rel="alternate"/>
<author>
<name>Norwalk, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/163255</id>
<updated>2025-10-22T03:33:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decarbonized Cement Manufacturing via Advanced Production Technologies
Norwalk, Michael
Cement production is the second-largest source of industrial carbon dioxide emissions worldwide. Due to the chemical reactions inherent in its production and the temperatures required to drive those reactions, cement is considered a “hard-to-decarbonize” industry. In this study, three emerging technologies to reduce the carbon intensity of industrial processes, namely, direct high-temperature electric process heat, electric process heat utilizing thermal storage, and liquid amine-based carbon capture, are assessed in the context of a greenfield cement production facility relative to a new-build conventional cement plant fueled with natural gas. Cement plants utilizing this set of technologies were modeled in five U.S. geographies to determine the relative economic returns. The economics were assessed, inclusive of available economic incentives, both for the scenario in which the cement produced is sold in the U.S. market and for the scenario in which the cement produced is exported to the European Union (E.U.) market to assess potential benefits from the E.U. carbon pricing system. The analysis indicates that at current technology prices, the economic returns of the assessed technologies, while in some cases profitable, continue to lag those of conventional production technology for the domestic U.S. market. As costs come down with deployment, the economics of carbon capture solutions have the potential to become competitive with conventional technology. The E.U. carbon emissions penalties are effective in altering the economics in such a way that implementing carbon capture systems becomes the most attractive economic option, demonstrating the power of carbon emissions markets. With increased technology deployment as well as the adoption of targeted incentives in the U.S. market, the adoption of low carbon cement production technologies can be accelerated.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single-Polarity Ion Electrospray Propulsion</title>
<link href="https://hdl.handle.net/1721.1/163253" rel="alternate"/>
<author>
<name>Shaik, Saba Zareen</name>
</author>
<id>https://hdl.handle.net/1721.1/163253</id>
<updated>2025-10-22T03:33:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Single-Polarity Ion Electrospray Propulsion
Shaik, Saba Zareen
Electrospray thrusters are highly efficient spacecraft propulsion devices that accelerate ions sourced from ionic liquid propellants to produce thrust. Typically, electrosprays are fired in a dual-polarity configuration in which the polarity of the ion beam is periodically reversed. This strategy is difficult to implement and imposes limitations on system size and performance. We instead propose a single-polarity design where negative ions are emitted continuously from the thruster, enabling extreme miniaturization, faster startup, better emission stability, and simpler power processing. This thesis investigates two challenges associated with the single-polarity design. First, system lifetime is of principal importance for electrospray propulsion systems in general and must be verified for a single-polarity implementation. Long-duration electrospray tests are performed, demonstrating that single-polarity thrusters achieve comparable lifetimes and performance to state-of-the-art systems with high mass utilization and minimal hardware degradation. An additional challenge is propellant electrochemistry, triggered when positive counterions accumulate in the ionic liquid. A suite of experiments is conducted to identify and characterize electrochemical processes, including electrical double-layer potential evolution and gas-phase product formation, in electrospray thrusters over long firing durations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparative Analysis of Semiconductor Investment Environments in the U.S. and China</title>
<link href="https://hdl.handle.net/1721.1/163252" rel="alternate"/>
<author>
<name>Zhang, Hanxue</name>
</author>
<id>https://hdl.handle.net/1721.1/163252</id>
<updated>2025-10-22T03:33:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Comparative Analysis of Semiconductor Investment Environments&#13;
in the U.S. and China
Zhang, Hanxue
Semiconductors are fundamental to Artificial Intelligence (AI) and central to global technological competition. Against this backdrop, this thesis compares semiconductor primary investment environments in the United States and China, examining their implications for industry development and innovation. The study employs a mixed-methods approach, combining expert interviews, data analysis, and natural language processing (NLP). It draws on primary market investment, M&amp;A deal, and government grant data to examine capital structures, investment stages, sectoral focus, and exit efficiency. Furthermore, it analyzes nearly 3,000 semiconductor industry reports (2020–2025) to identify evolving strategic priorities and thematic trends shaping these environments. Findings reveal that China’s state-led, vertically integrated model prioritizes upstream capacity building and supply chain autonomy, supported by government guidance funds, private capital, and policy-driven mechanisms. However, a significant gap remains in leading-edge chips, necessitating precise investments and patient capital to bridge this divide. The U.S. ecosystem, by contrast, shaped by major technology firms and federal support, focuses on design innovation and cutting-edge technologies. However, structural constraints such as limited exit pathways, fragmented fabrication capacity, and insufficient industrial policies may hinder the U.S. in nurturing innovation-driven small and medium-sized enterprises (SMEs) in the semiconductor industry. This thesis highlights the structural divergence between the U.S. and Chinese semiconductor ecosystems by examining policy, primary market capital, and investment dynamics. It offers policymakers and investors a strategic overview of how these forces shape innovation and resilience, while identifying emerging investment priorities and future development paths.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forecasting Automotive Production Volume Using Regression and Time Series Modelling</title>
<link href="https://hdl.handle.net/1721.1/163251" rel="alternate"/>
<author>
<name>Gong, Yutao</name>
</author>
<id>https://hdl.handle.net/1721.1/163251</id>
<updated>2025-10-22T03:33:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Forecasting Automotive Production Volume Using Regression and Time Series Modelling
Gong, Yutao
Accurate forecasting of automotive production volumes is a critical capability for suppliers navigating an increasingly volatile industry. Overly optimistic forecasts, particularly from Original Equipment Manufacturers (OEMs), lead to misallocated capacity and lost opportunities across the supply chain. This thesis investigates whether advanced statistical models can improve upon benchmark industry forecasts and provide automotive suppliers with more reliable, practical tools for demand planning. Several forecasting methodologies are evaluated, including ARIMA, standard linear regression, Lasso regression, the Theta model, and a hybrid Boosted Theta model. Models are tested across North America, Europe, and Greater China using 2000–2024 vehicle production and macroeconomic data. Results show that the Theta model outperforms industry forecasts across both 1-year and 5-year horizons in North America and Europe. Its simplicity, low data requirements, and robustness to market volatility make it suitable for industrial use. The model was successfully implemented at Commonwealth Rolled Products, an aluminum rolling mill in Kentucky and a portfolio company of American Industrial Partners (AIP), where it was adopted for 2025 planning and drove a shift toward data-centric forecasting practices. This research presents a real-world example of applying academic techniques to actual business problems, serving as a valuable reference for suppliers seeking to improve forecast accuracy and operational planning in the evolving automotive landscape.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The role of university venture funds in supporting early-stage Japanese startups</title>
<link href="https://hdl.handle.net/1721.1/163250" rel="alternate"/>
<author>
<name>Brillaud, Nami</name>
</author>
<id>https://hdl.handle.net/1721.1/163250</id>
<updated>2025-10-22T03:33:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The role of university venture funds in supporting early-stage Japanese startups
Brillaud, Nami
This thesis explores how university venture funds in Japan are uniquely positioned to turn the country’s innovation capacity into entrepreneurial capacity by supporting early-stage startups. While Japan consistently ranks high in research output, much of this potential is not being translated into successful entrepreneurship. Risk capital is scarce compared to other ecosystems, particularly for deep tech, and support systems for early-stage startups are still limited. University venture funds – which inherently connect universities, entrepreneurs, and risk capital – are well positioned to bridge this gap. Yet despite their growing relevance, their evolving role in supporting Japanese early-stage startups is understudied.&#13;
&#13;
This study compares university venture funds with different profiles – ranging from leading and longstanding funds like UTEC, to public-private venture funds established through government initiatives, to recent funds with diversified structures – analyzing how they are structured, how they invest, and what results they have seen so far. It then builds on startup examples and interviews with university venture funds to identify how these funds can better support early-stage startups through improved fund operations, stronger pre-seed support, as well as a strategic approach to growth and exits. Ultimately, this thesis advocates for actionable solutions informed by global practices but adapted to Japan’s unique startup ecosystem.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Procurement Data for Cost Saving Application</title>
<link href="https://hdl.handle.net/1721.1/163249" rel="alternate"/>
<author>
<name>Pan, Haoting</name>
</author>
<id>https://hdl.handle.net/1721.1/163249</id>
<updated>2025-10-22T03:33:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analyzing Procurement Data for Cost Saving Application
Pan, Haoting
In an increasingly data-driven business environment, procurement analytics plays a critical role in optimizing costs and improving supply chain efficiency. This thesis examines the development and implementation of the Lifecycle Cost Management (LCM) tool at Caterpillar Inc., a global leader in heavy equipment manufacturing. Given Caterpillar's decentralized procurement structure, managing cost-saving initiatives across its 150 facilities (Caterpillar | Caterpillar Frequently Asked Questions (FAQs), n.d.) and 28,000 suppliers (Caterpillar | Caterpillar at a Glance, n.d.) poses a significant challenge. The LCM tool leverages machine learning models to identify overpriced purchase orders (POs) and generate actionable cost-saving opportunities.&#13;
This study explores the methodology used to enhance LCM's predictive capabilities, including data sourcing and cleaning, feature engineering, model selection, and validation. Various regression models, clustering techniques, and machine learning algorithms, such as Random Forest and XGBoost, are tested to identify cost outliers. A validation process is implemented to ensure that flagged outliers are cost-saving opportunities appropriate for execution.&#13;
Beyond technical development, the thesis addresses the processes of digital tool adoption within Caterpillar’s procurement teams. A change management approach is employed, incorporating buyer interviews, stakeholder engagement, and iterative user experience (UX) improvements. Through case studies, the study highlights the machine learning model performance and tangible financial impact of LCM. &#13;
The LCM tool has identified more than $100M in potential data-driven savings, of which the team hopes to realize 20%. Because Caterpillar’s procurement contracts are often long-term, these savings can be considered perpetual. &#13;
Findings indicate that while machine learning models effectively identify cost outliers, their success is contingent on robust data governance, stakeholder buy-in, and integration into procurement workflows. The study underscores the importance of data management, organizational alignment, and continuous refinement of digital procurement tools. Recommendations for future work include enhancing data infrastructure, integrating AI-driven contract management and analysis, and refining cost estimation methodologies. The insights gained contribute to the broader application of procurement analytics and digital transformation in manufacturing enterprises.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact Evaluation and Prioritization Framework for Manufacturing Inspection Technology Investment</title>
<link href="https://hdl.handle.net/1721.1/163248" rel="alternate"/>
<author>
<name>DiDio, Isabella</name>
</author>
<id>https://hdl.handle.net/1721.1/163248</id>
<updated>2025-10-22T03:33:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Impact Evaluation and Prioritization Framework for Manufacturing Inspection Technology Investment
DiDio, Isabella
Advancements in visual inspection technologies and machine learning algorithms present Johnson &amp; Johnson Vision with an opportunity to enhance quality control for Acuvue contact lenses, addressing inefficiencies such as unnecessary scrap, customer complaints, and lead time variability. With over 5 billion lenses produced annually across 100 manufacturing lines, the proposed implementation of advanced camera optics and machine learning for inspection aims to improve defect detection accuracy, minimize manual inspection, and reduce customer complaints.&#13;
An impact evaluation and prioritization framework was developed to strategically implement these upgrades across 100 manufacturing lines, integrating historical data analysis, financial modeling, and engineering risk assessments. Key findings highlight that complaint reduction, scrap savings, and labor cost reductions are the primary drivers of cost savings, with inventory savings offering incremental benefits over time.&#13;
In conclusion, this research demonstrates the process of integrating advanced technologies into manufacturing operations. By aligning engineering solutions with strategic business objectives, the findings provide actionable insights for managing large-scale technological upgrades across global networks.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Technical Discourse of Miyake Design Studio: Episode in the Interpretation of Cloth, 1995-2007</title>
<link href="https://hdl.handle.net/1721.1/163246" rel="alternate"/>
<author>
<name>Tan, Yi-Ern Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/163246</id>
<updated>2025-10-22T03:33:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Technical Discourse of Miyake Design Studio: Episode in the Interpretation of Cloth, 1995-2007
Tan, Yi-Ern Samuel
In the late 1980s, Miyake Design Studio began to register patents concerning the Studio’s development of novel techniques to process pleated clothing. Their first patent, filed in 1989, was registered in designer Issey Miyake’s name, detailing the use of an industrial machine to pleat an entire garment after sewing, reversing the order of the conventional approach to creating pleated garments. In the years that followed, this entry into what I term “technical discourse” would proliferate with the Studio’s establishment of the PLEATS PLEASE brand specializing in pleated garments and the A-POC (“a piece of cloth”) project with designer and textile engineer Fujiwara Dai. Each of these projects produced numerous patents, including during a period between 1997 and 2008 that I call the “Miyake Patent Explosion,” when the Studio filed twenty patents with the Japan Patent Office and its international counterparts.&#13;
&#13;
In contrast to aesthetic discourses proposing the value of a work on its artistic merits and intellectual content, technical discourse points to the profusion of texts produced and circulated by the Studio—in this thesis, patents and legal claims—to uphold the utility of their products and their protection as intellectual property. By engaging with technical discourse, Miyake Design Studio was not only creating legal safeguards around the ideas it considered proprietary. Rather, its extensive production of technical discourse positioned Miyake as a figure who exceeded the boundaries of fashion, approaching its adjacent categories of unhyphenated design, architecture, and art, within whose circles his objects circulate as currency.&#13;
&#13;
Exploring these texts as they are deployed in the defense of intellectual property, I argue that technical discourse can be treated as a form of historical archive that allows us to historicize claims to technological inheritance that bear upon the discussion of Miyake’s work. Specifically, I look to patents as a citational practice, or as Alain Pottage and Brad Sherman write, a “chain of reference” through which patent lawyers and engineers make deliberate connections between one technology and another to acknowledge, distinguish, and legitimize. Examining three episodes where technical discourse opens the way for historical narrative—a lawsuit over imitation goods, a case of mistaken identity in design criticism, and a moment of technological dissolution—I argue that we cannot divorce Miyake and his work from the technical complex that surrounds the Studio’s production of objects. Turning to these technical discourses that exist in the public record, I suspend the promise of monographic history that peers into the mind of the individual and probe instead the possibilities of seeing agencies beyond those attributed to the authorial figure of Miyake—his corporate apparatus, his allies, his admirers, his critics, his opponents, the receptive public.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of consolidation and plastic resistance on clays</title>
<link href="https://hdl.handle.net/1721.1/163105" rel="alternate"/>
<author>
<name>Marsal, Raúl J.</name>
</author>
<id>https://hdl.handle.net/1721.1/163105</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1944-01-01T00:00:00Z</published>
<summary type="text">Investigation of consolidation and plastic resistance on clays
Marsal, Raúl J.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1944; Vita. Appendix contains numerous pamphlets.
</summary>
<dc:date>1944-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An optical instrument for the synthesis of sound</title>
<link href="https://hdl.handle.net/1721.1/163102" rel="alternate"/>
<author>
<name>Brown, Sherwood Fiske.</name>
</author>
<id>https://hdl.handle.net/1721.1/163102</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1930-01-01T00:00:00Z</published>
<summary type="text">An optical instrument for the synthesis of sound
Brown, Sherwood Fiske.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1930
</summary>
<dc:date>1930-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A spiritual and cultural synagogue center for modern American Jewry : an architectural and sociological study</title>
<link href="https://hdl.handle.net/1721.1/163099" rel="alternate"/>
<author>
<name>Goody, Marvin E.</name>
</author>
<id>https://hdl.handle.net/1721.1/163099</id>
<updated>2025-10-10T03:04:41Z</updated>
<published>1951-01-01T00:00:00Z</published>
<summary type="text">A spiritual and cultural synagogue center for modern American Jewry : an architectural and sociological study
Goody, Marvin E.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1951; "A thesis submitted in partial fulfillment of the requirements for the degree of Master in Architecture, Massachusetts Institute of Technology, August 22, 1951."; Includes bibliographical references (leaves 93-95).
</summary>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Operational Value Stream Analysis for Developmental Excellence</title>
<link href="https://hdl.handle.net/1721.1/163055" rel="alternate"/>
<author>
<name>Shaw, Eric T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163055</id>
<updated>2025-10-07T04:14:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Operational Value Stream Analysis for Developmental Excellence
Shaw, Eric T.
The aerospace and defense industry faces increasing challenges in new product development, where financial constraints and risk aversion hinder innovation. Using a multidisciplinary approach that integrates contract theory, computational fluid dynamics (CFD), and machine learning, this research explores the impacts of engineering requirements, financial alignment among stakeholders, and improved efficiencies in predictive modeling techniques for two separate air vehicle programs: A and B. A Monte Carlo analysis using SEER-H estimation software quantifies the financial and schedule impacts of engineering requirements, revealing a 10–30% cost increase due to volatility in air vehicle development design parameters. Moreover, a game-theoretic contract negotiation simulation illustrates the importance and opportunity of financial incentive alignment among key stakeholders. Additionally, predictive analytics leveraging machine learning models better capture the relevant flow mechanics, improving the circumferential distortion estimations in nacelle aerodynamics by over 10% compared to traditional heuristics. Finally, a CFD-based actuator disk source modeling approach demonstrates a 60% reduction in steady-state distortion at some portions of the flight envelope, due to the impact of the fan’s upstream influence on inlet flow distortion, suggesting increased operational capability for air vehicle program B. This research provides actionable recommendations to enhance the operational value stream of new air vehicle program development, emphasizing the need for pre-RFP requirements validation, advanced machine learning applications for predictive engineering, and refined CFD modeling to identify technical risks earlier in the design process.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Computational Framework for Simulating Entanglement-Based Drone Countermeasures with Flexible Filaments Immersed in Viscous Flow</title>
<link href="https://hdl.handle.net/1721.1/163054" rel="alternate"/>
<author>
<name>Sonandres, Jake T.</name>
</author>
<id>https://hdl.handle.net/1721.1/163054</id>
<updated>2025-10-07T04:15:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Computational Framework for Simulating Entanglement-Based Drone Countermeasures with Flexible Filaments Immersed in Viscous Flow
Sonandres, Jake T.
In this work, we present a computational framework for modeling the coupled dynamic interactions of highly flexible slender filaments immersed in a viscous flow and their entanglement with themselves and moving structures. This work is motivated by a novel drone countermeasure that entangles propellers with flexible filament clouds, inducing a loss of thrust and control authority in the drone. However, the framework is relevant to a wider range of applications, including actin filaments in cell biology, carbon nanotubes in composite materials, and rope-like structures in industrial settings. Each filament is modeled with the three-dimensional geometrically exact Kirchhoff-Love torsion-free finite element beam formulation. The fluid flow resulting from filament aerodynamic interaction is described through a Boundary Integral (BI) formulation of the incompressible Stokes equations based on the Stokeslet discretization. The heavy computational load of the resulting dense system is addressed through the use of fast GPU-based dense linear solvers. The BI formulation is coupled to the filament solid mechanics by enforcing momentum balance at the dynamically evolving filament-fluid interface. Additionally, the solid contact interactions between filaments are modeled with a point-to-point frictional contact algorithm that applies discrete contact and frictional forces at the closest point between the beam elements. We address the difficulties associated with contact between elements represented with third-order Hermitian polynomial shape functions and the strategies adopted to overcome these challenges. To capture propeller fouling for drone countermeasures, we incorporate a propeller and motor model whose thrust and torque responses are affected by contact interactions during entanglement. We verify our framework against simple analytical solutions and demonstrate its capabilities with numerical examples that attempt to capture large-scale filament entanglement behavior. In particular, we apply our methodology to demonstrate the process by which filament entanglement can restrict motion and reduce the efficacy of propellers. The results show that the framework can be used to understand the connection between filament entanglement, key system properties, and the resulting thrust generated by the propeller.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Global sustainable aviation fuel production potential from current agricultural production: a holistic data analytics and systems analysis approach</title>
<link href="https://hdl.handle.net/1721.1/163053" rel="alternate"/>
<author>
<name>Martin, Estelle Claude Aline</name>
</author>
<id>https://hdl.handle.net/1721.1/163053</id>
<updated>2025-10-07T04:14:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Global sustainable aviation fuel production potential from&#13;
current agricultural production: a holistic data analytics&#13;
and systems analysis approach
Martin, Estelle Claude Aline
Aviation contributes significantly to global greenhouse gas emissions, driven primarily by its dependency on fossil-based jet fuel. Sustainable Aviation Fuel (SAF) offers a short-term option to mitigate these emissions. However, its current scalability remains limited, constrained by access to sustainable biomass. Realizing SAF’s potential in the near term, using the agricultural and industrial systems already in place, requires a detailed understanding of biomass availability, resource competition, and the scalability of SAF production. This thesis presents a comprehensive system analysis framework and a data-driven methodology for evaluating SAF production potential based on current agricultural output, without assuming land expansion or major yield improvements and while preserving food utilization. It evaluates the SAF production potential from increasing biomass availability by redirecting biomass currently used for some non-food purposes and by utilizing processing and agricultural residues. In-depth analysis of four high-potential case studies, one for each main biomass family (starchy, sugary, oily, and fats and greases), was used to construct a detailed model of the supply chain. This structure was then applied globally across all countries and relevant feedstocks to estimate SAF production potential and associated system requirements.&#13;
&#13;
Findings from the case studies show that these four high-potential opportunities could collectively meet only up to 13.1% of global jet fuel demand in 2023, assuming 100% neat SAF. The global analysis estimates that the SAF production potential from the considered streams of increased biomass availability could meet up to about two-thirds of global jet fuel demand, with 28.7% derived from agricultural residues, 25.9% from redirected main products, and 12.5% from processing residues. These contributions hence remain insufficient to fully displace fossil jet fuel. This work provides an estimate of what could be achieved using the existing agricultural and industrial systems, what resources would be required, and how those requirements compare to global resource availability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Causal inference for complex systems and applications to turbulent flows</title>
<link href="https://hdl.handle.net/1721.1/163052" rel="alternate"/>
<author>
<name>Sánchez, Álvaro Martínez</name>
</author>
<id>https://hdl.handle.net/1721.1/163052</id>
<updated>2025-10-07T04:14:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Causal inference for complex systems and applications to turbulent flows
Sánchez, Álvaro Martínez
Causality lies at the heart of scientific inquiry, serving as the fundamental basis for understanding interactions among variables in physical systems. Despite its central role, current methods for causal inference face significant challenges due to nonlinear dependencies, stochastic interactions, self-causation, collider effects, and influences from exogenous factors, among others. While existing methods can effectively address some of these challenges, no single approach has successfully integrated all these aspects. Here, we address these challenges with SURD: Synergistic-Unique-Redundant Decomposition of causality (Nat. Commun., vol. 15, 2024, p. 9296). SURD quantifies causality as the increments of redundant, unique, and synergistic information gained about future events from past observations. The formulation is non-intrusive and applicable to both computational and experimental investigations, even when samples are scarce. We benchmark SURD in scenarios that pose significant challenges for causal inference and demonstrate that it offers a more reliable quantification of causality compared to previous methods. We further illustrate the applicability of our approach in two turbulent-flow scenarios: the energy transfer across scales in the turbulent energy cascade and the interaction between motions across scales in a turbulent boundary layer. Our results show that, without accounting for redundant and synergistic effects, traditional approaches to causal inference may lead to incomplete or misleading conclusions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Theoretic Process Analysis of Sociotechnical Systems</title>
<link href="https://hdl.handle.net/1721.1/163051" rel="alternate"/>
<author>
<name>Harrington, Polly</name>
</author>
<id>https://hdl.handle.net/1721.1/163051</id>
<updated>2025-10-07T04:14:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Systems Theoretic Process Analysis of Sociotechnical Systems
Harrington, Polly
The safety and success of complex modern systems, such as hospitals, aircraft, or software, depend on their ability to integrate people and technical components. For example, doctors must be able to use their computerized surgical tools to treat their patients successfully, airplane pilots must be able to operate the required controls for takeoff and landing, and regulators must be able to interpret the data they receive to make critical decisions. However, designing systems that facilitate safe interactions between humans and technology is not a simple task. System designers must consider not only the constraints of the technical components but also human requirements throughout the entire system. Accidents in modern systems continue to prove that more work is needed to identify and prevent unsafe interactions between humans and technology. Systems Theoretic Process Analysis (STPA) is a hazard analysis methodology based on systems theory that has been used to improve system safety in various industries, including healthcare, aviation, nuclear power, and automotive design. However, if hazard analysts using STPA lack significant expertise in human factors engineering (HFE), they may be unable to thoroughly and rigorously identify critical unsafe interactions. This thesis presents a process for utilizing HFE to improve the results of STPA analyses on sociotechnical systems. In particular, the process focuses on the thorough identification of causal scenarios in sociotechnical systems by incorporating relevant human factors concepts. The process allows analysts without significant training in HFE to improve their ability to identify useful scenarios for humans in their system. The effectiveness of the improved process is demonstrated using a healthcare case study on over-the-counter clinical laboratory tests in the United States. By establishing a process for non-HFE experts to use when conducting STPA analyses, more systems can be developed that enhance human performance rather than increase conflict between humans and the engineered system.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Algorithms for Quantitative Analysis of Long Electrical Arcs in Crossflows</title>
<link href="https://hdl.handle.net/1721.1/163050" rel="alternate"/>
<author>
<name>Lin, Fayleon</name>
</author>
<id>https://hdl.handle.net/1721.1/163050</id>
<updated>2025-10-07T04:14:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of Algorithms for Quantitative Analysis of&#13;
Long Electrical Arcs in Crossflows
Lin, Fayleon
A single lightning strike can deliver a steady current of hundreds of amps during its attachment to an aircraft. Therefore, it is imperative that an aircraft have an adequate lightning protection system to minimize the probability of catastrophic accidents. Current guidelines for lightning protection systems are based on prior service experience and historical data, which might prove insufficient for future-generation aircraft; these often adopt novel and unconventional designs that deviate significantly from current ones. Therefore, efforts are underway to update these guidelines with novel methods, such as designs aided by numerical simulation that can accurately model the behavior of lightning attachment and the subsequent swept-stroke phase. To aid in the development of these numerical methods, ample data on not only the electrical arcs but also their interactions with the surrounding flow are necessary for validation. However, most studies on long electrical arcs lack a detailed investigation of the coupling between the electrical arcs and the surrounding flow field. For that purpose, teams from the Massachusetts Institute of Technology (MIT), ONERA, and Universitat Politècnica de Catalunya (UPC) conducted an extensive experimental campaign in April 2024 that investigates this coupling in detail for the first time. Data gathered from this experiment include electrical properties of the arc, high-speed video of the arc column, and the velocity field of the surrounding flow. Approximately 200 cases were conducted with various geometrical and electrical configurations. To meaningfully analyze all the data, a set of algorithms was developed to automatically process, analyze, and visualize these data. Detailed analysis of the root and column behavior was performed; electrical properties were verified to be consistent with literature values; and coupling between the velocities of the arc column and the flow field was determined by simultaneous visualization of both data forms.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance and Analysis of a Deployable Diffractive Optical Element for Small Satellite Missions</title>
<link href="https://hdl.handle.net/1721.1/163049" rel="alternate"/>
<author>
<name>Bahlous-Boldi, Adam A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163049</id>
<updated>2026-01-13T19:42:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Performance and Analysis of a Deployable Diffractive Optical Element for Small Satellite Missions
Bahlous-Boldi, Adam A.
As space missions push toward smaller, lighter, and more deployable instrumentation, diffractive optical elements (DOEs) offer a compelling alternative to traditional optics. Their ability to focus light through engineered phase profiles rather than curved surfaces allows for large-aperture, flat optics that are far lighter and easier to package for launch. However, this benefit comes with trade-offs: DOEs are sensitive to wavelength mismatch, manufacturing errors, and environmental deformations—especially thermal gradients and membrane tensioning in space. This thesis develops a comprehensive framework for understanding and simulating the performance of DOEs under realistic operating conditions. Beginning from first principles, the work contrasts geometric and wave-optical models for Fresnel zone plates and multilevel diffractive lenses, leading to quantitative predictions of diffraction efficiency and PSF quality under non-idealities. A key contribution is the analytical and numerical analysis of how uniform thickness errors, wavelength mismatches, and thermal expansions degrade optical performance, both in efficiency and wavefront fidelity. To evaluate these effects in detail, a flexible simulation tool was developed in MATLAB, enabling both Fourier and integral-based propagation through arbitrarily deformed DOEs. These models are applied to a conceptual space-based LIDAR system—SPECIES—that uses a deployable DOE optic to demonstrate the feasibility and limitations of this approach. The results show that DOEs can tolerate some global deformations; for example, a 1 mm deformation results in a 38% performance loss in an F3 LiDAR system with a 1 mm detector diameter. However, they remain highly sensitive to fine-scale shape errors, posing significant challenges for high-precision applications like fiber coupling or imaging. The findings provide new insight into the tolerances, benefits, and trade-offs of DOE-based systems in space, and lay the groundwork for future missions seeking to leverage lightweight diffractive optics for remote sensing and optical communication.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Partial Gravity Load Simulation Using Mechanical Off-Loading and Lower Body Negative Pressure</title>
<link href="https://hdl.handle.net/1721.1/163048" rel="alternate"/>
<author>
<name>Davalos, Daniela L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163048</id>
<updated>2025-10-07T04:14:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Partial Gravity Load Simulation Using Mechanical Off-Loading and Lower Body Negative Pressure
Davalos, Daniela L.
Prolonged exposure to reduced gravity environments can lead to significant deconditioning of the cardiovascular, musculoskeletal, and ocular systems. These effects increase the risk of orthostatic intolerance, bone loss, and conditions such as Spaceflight Associated Neuro-ocular Syndrome (SANS). As spaceflight missions grow longer and more frequent, especially with increased extravehicular activity (EVA) on the Moon or Mars, it is critical to develop effective countermeasures and Earth-based analogs to simulate these gravitational environments and evaluate physiological impacts. This thesis addresses these challenges through two complementary approaches. First, it presents the design and development of the MIT Moonwalker IV, a passive mechanical offloading system that simulates partial gravity by applying vertical support via a spring-cable mechanism. In a treadmill-based pilot study, one participant showed at least a 50% reduction in metabolic demand while running under simulated Martian gravity. These findings validate the Moonwalker IV as a metabolic analog for EVA task simulation. Second, this thesis evaluates a collapsible lower body negative pressure (LBNP) suit as a wearable countermeasure for micro and partial gravity environments. By applying negative pressure to the lower body, the suit helps restore the mechanical loading and hydrostatic fluid gradients typically provided by Earth’s gravity. The suit was tested in both simulated reduced gravity via a head-down/head-up tilt paradigm and true reduced gravity via parabolic flight. Each condition was evaluated both with and without –20 mmHg of LBNP. Results demonstrated that the collapsible LBNP suit produced cardiovascular responses comparable to those observed in traditional rigid LBNP chambers. It also induced lower body fluid shifts as measured by segmental leg bioimpedance, reduced intraocular pressure, and generated ground reaction forces similar to standing in 1G. These findings support the complementary use of Earth-based analog systems to simulate partial gravity and wearable devices to simulate Earth gravity in reduced gravity environments. They offer valuable tools for preparing astronauts and preserving physiological health during long-duration space missions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unforgettable Generalization in Language Models</title>
<link href="https://hdl.handle.net/1721.1/163047" rel="alternate"/>
<author>
<name>Zhang, Eric</name>
</author>
<id>https://hdl.handle.net/1721.1/163047</id>
<updated>2025-10-07T04:14:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Unforgettable Generalization in Language Models
Zhang, Eric
When language models (LMs) are trained to forget (or “unlearn”) a skill, how precisely does their behavior change? We study the behavior of transformer LMs in which tasks have been forgotten via fine-tuning on randomized labels. Such LMs learn to generate near-random predictions for individual examples in the “training” set used for forgetting. Across tasks, however, LMs exhibit extreme variability in whether LM predictions change on examples outside the training set. In some tasks (like entailment classification), forgetting generalizes robustly, and causes models to produce uninformative predictions on new task instances; in other tasks (like physical commonsense reasoning and scientific question answering) forgetting affects only the training examples, and models continue to perform the “forgotten” task accurately even for examples very similar to those that appeared in the training set. Dataset difficulty is not predictive of whether a behavior can be forgotten; instead, generalization in forgetting is (weakly) predicted by the confidence of LMs’ initial task predictions and the variability of LM representations of training data, with low confidence and low variability both associated with greater generalization. Perhaps most surprisingly, random-label forgetting appears to be somewhat insensitive to the contents of the training set: for example, models trained on science questions with random labels continue to answer other science questions accurately, but begin to produce random labels on entailment classification tasks. Finally, we show that even generalizable forgetting is shallow: linear probes trained on LMs’ representations can still perform tasks reliably after forgetting. Our results highlight the difficulty and unpredictability of performing targeted skill removal from models via fine-tuning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guessing Random Additive Noise Decoding in Coded Multiple-Input Multiple-Output Systems</title>
<link href="https://hdl.handle.net/1721.1/163045" rel="alternate"/>
<author>
<name>Wu, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/163045</id>
<updated>2025-10-07T04:14:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Guessing Random Additive Noise Decoding in Coded&#13;
Multiple-Input Multiple-Output Systems
Wu, Benjamin
Multiple-Input Multiple-Output (MIMO) wireless communication systems incorporate forward error correction (FEC) to achieve high reliability under fading and interference. In this thesis, we explore the emerging FEC paradigm of Guessing Random Additive Noise Decoding (GRAND) in a point-to-point MIMO system. &#13;
Treating GRAND as an FEC decoder disjoint from the MIMO detector, we compare the soft-decision Ordered Reliability Bits GRAND (ORBGRAND) to CRC-Assisted Successive Cancellation List (CA-SCL) decoding of the CRC-Assisted Polar (CA-Polar) [105, 128] code found in the 5G New Radio standard. For this code, we find that ORBGRAND outperforms CA-SCL (list size 16) by 1 dB in E_b/N₀ at a block error rate of 10⁻³, under 16-QAM and Linear Minimum Mean Square Error detection, with two transmit antennas and four receive antennas. We also show that ORBGRAND, when paired with other moderate-redundancy linear codes, can yield substantial savings in the range of 0.5–2 dB in E_b/N₀ over CA-SCL decoding (list size 16) of CA-Polar codes with the same code parameters, for a block error rate of 10⁻³. We provide extensive benchmarks comparing ORBGRAND to CA-SCL and other soft-decision GRAND variants. We also integrate a GRAND decoder producing soft output into a MIMO iterative detection and decoding (IDD) receiver. Specifically, we apply an established technique which utilizes soft-output GRAND as the component decoder for the block turbo decoding of product codes. This block turbo decoder is evaluated as a soft-output decoder within a MIMO IDD receiver. We demonstrate competitive or superior performance relative to Belief Propagation (BP) decoding of 5G Low-Density Parity Check (LDPC) codes. This approach also marks a use of GRAND for low-rate, high-redundancy FEC in a MIMO system. As GRAND in MIMO is still an emerging area of research, this work is an exploratory evaluation of GRAND for FEC in MIMO and highlights GRAND’s potential as a versatile and performant decoder in different MIMO receiver architectures.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Accuracy Predictions of Companion Classifiers for LLM Routing</title>
<link href="https://hdl.handle.net/1721.1/163044" rel="alternate"/>
<author>
<name>Wu, Jessica L.</name>
</author>
<id>https://hdl.handle.net/1721.1/163044</id>
<updated>2025-10-07T04:14:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Improving Accuracy Predictions of Companion Classifiers&#13;
for LLM Routing
Wu, Jessica L.
The increasing versatility of Large Language Models (LLMs) calls for developing effective routing systems to match tasks with the most suitable models, balancing accuracy and computational cost. This research introduces a novel meta-cascade routing framework that combines meta-routing, where a predictive model selects the appropriate LLM for a task, and cascading, where models are queried in sequence to optimize cost and performance. A critical component of this framework is the companion classifier, defined as a fine-tuned model trained to predict whether a particular LLM will generate an accurate response. We investigate whether incorporating features such as model responses into these classifiers can improve routing accuracy. Our preliminary experiments, using the Routerbench dataset, focus on training companion models that provide more stable and accurate routing decisions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Formal Verification of Relational Algebra Transformations in Fiat2 Using Coq</title>
<link href="https://hdl.handle.net/1721.1/163042" rel="alternate"/>
<author>
<name>Teshome, Christian</name>
</author>
<id>https://hdl.handle.net/1721.1/163042</id>
<updated>2025-10-07T04:14:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Formal Verification of Relational Algebra Transformations&#13;
in Fiat2 Using Coq
Teshome, Christian
Data-intensive applications often involve operations over structured datasets, such as filtering, joining, and projecting records. Relational database systems generally use query planners to optimize high-level SQL queries into efficient execution plans. While these systems apply well-established query transformations, they typically assume the correctness of these transformations rather than formally proving them. The absence of formal guarantees can be a significant limitation for systems with strict correctness requirements. This thesis contributes to Fiat2, a Python-like high-level programming language for data-intensive workloads that integrates formal verification via the Coq proof assistant. We focus on proving the correctness of several rewrite-based query optimizations commonly used in database engines. Specifically, we formalize and prove the correctness of algebraic rewrites involving combinations of filters, joins, and projections, as well as join-reordering rewrites. All rewrites are proven in Coq to preserve the semantics of the original program under list semantics, meaning that the output lists are fully equivalent (or permutations, in the case of join reordering). These verified rewrites serve as a foundation for future optimization in Fiat2, enabling significant optimizations while preserving the semantics of the original queries with correctness guarantees. The results demonstrate the feasibility of integrating formally verified query optimizations into a practical high-level programming language.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Converting PyTorch Models to StreamIt Pipelines</title>
<link href="https://hdl.handle.net/1721.1/163041" rel="alternate"/>
<author>
<name>Rajvee, Muhender Raj</name>
</author>
<id>https://hdl.handle.net/1721.1/163041</id>
<updated>2025-10-07T04:14:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Converting PyTorch Models to StreamIt Pipelines
Rajvee, Muhender Raj
With the rise of large language models, there have been efforts to optimize machine learning inference to support a large volume of queries. Currently, the two main ways to do this are running optimized kernels for computing the forward inference pass and distributing computation across multiple GPUs or different cores in a GPU. Machine learning libraries such as PyTorch produce dynamic computation graphs to represent the forward pass of the model. PyTorch allows conversion of these dynamic graphs into static ones through just-in-time (JIT) compilation. These graphs can then be optimized further by the compiler. We propose an alternate way of optimizing these dynamic graphs. We convert the dynamic computation graph of PyTorch to pipelines in StreamIt, a domain-specific language (DSL) for streaming applications, and use the multi-stage compilation property of BuildIt to compile this pipeline in stages to inference code. We found that, while the inference latencies of models compiled in this way are slightly higher, they are still comparable to those of PyTorch models and are open to future optimizations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Interactive Visual Paradigm for Knowledge Graph Question-Answering</title>
<link href="https://hdl.handle.net/1721.1/163040" rel="alternate"/>
<author>
<name>Ramkumar, Vayd Sai</name>
</author>
<id>https://hdl.handle.net/1721.1/163040</id>
<updated>2025-10-07T04:14:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Interactive Visual Paradigm for Knowledge Graph&#13;
Question-Answering
Ramkumar, Vayd Sai
In an era of information overload, verifying data reliability and provenance is critical, yet knowledge graphs (KGs) often remain complex for non-expert users. This thesis introduces TRACE, a Reasoning and Answer-path Comprehension Engine: a visualization tool enhancing transparency in KG question answering (KGQA). By abstracting intricate KGs into intuitive meta-nodes, TRACE simplifies exploration of large, multi-topic datasets. Its interactive interface allows users to navigate semantic communities and trace reasoning paths, fostering trust through clear answer derivation. Unlike cluttered traditional graph visualizations, TRACE’s meta-node approach provides a scalable, user-friendly solution, concealing technical complexities while enabling robust query validation. Large language models support natural language query parsing and community summarization, making KGs accessible to diverse audiences. TRACE positions itself as a vital widget for information platforms, empowering users to counter misinformation confidently. A user study and pipeline evaluation confirmed that TRACE’s intuitive interface excels for complex queries, though multi-hop paths pose challenges, while processing tests demonstrated its scalable paradigm for large datasets. By prioritizing transparency and usability, TRACE redefines KGs as reliable tools for knowledge discovery, laying a foundation for future systems to deliver trustworthy, accessible information in a digital landscape fraught with uncertainty.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spectral Analysis of Local Atomic Environments</title>
<link href="https://hdl.handle.net/1721.1/163039" rel="alternate"/>
<author>
<name>Phung, Tuong</name>
</author>
<id>https://hdl.handle.net/1721.1/163039</id>
<updated>2025-10-07T04:14:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Spectral Analysis of Local Atomic Environments
Phung, Tuong
The representation of local environments is a cornerstone challenge in computational materials science, with profound implications for property prediction and materials discovery. This thesis presents a comprehensive investigation of spectral descriptors constructed from spherical harmonic expansions to represent the geometries of local atomic environments. Systematic computational experiments evaluate the robustness of these descriptors to geometric perturbations and their capacity to differentiate structurally similar configurations. The findings reveal a clear performance hierarchy, with higher-order descriptors offering increased geometric expressivity and reconstruction accuracy in resolving challenging structural cases. This research further examines methods for inverting spectral representations back to atomic coordinates, demonstrating that directly optimizing three-dimensional positions through gradient-based techniques yields markedly better reconstruction accuracy than approaches operating in Fourier space. Dimensionality reduction via latent space embeddings is also explored, showing that essential geometric features can be preserved in significantly compressed representations. Through methodical analysis of descriptor limitations, performance boundaries, and sensitivity to hyperparameters, this work establishes practical benchmarks and implementation guidelines for spectral descriptors. These contributions strengthen the foundation for reliable machine learning models in computational materials science, advancing both the accuracy and efficiency of atomic-scale modeling for materials design and discovery.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Optimization of Shipping Container for Package-Less Units</title>
<link href="https://hdl.handle.net/1721.1/163038" rel="alternate"/>
<author>
<name>Minja, Baraka</name>
</author>
<id>https://hdl.handle.net/1721.1/163038</id>
<updated>2025-10-07T04:14:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Optimization of Shipping Container for Package-Less Units
Minja, Baraka
Package-less shipping aims to deliver units without company X’s added packaging. This requires fulfillment systems and processes with gentler handling. Part of this change involves the design and implementation of a container that will carry units from a distribution center to a delivery facility. This thesis presents the container analysis that was completed to determine the optimal container features and container type for package-less shipping. &#13;
Collapsible bags provide the best solution for package-less shipping in comparison to nestable and collapsible totes. Since ergonomic weight is the limiting constraint, the lower weight of the collapsible bag will allow for one or two more units per container. In addition, it benefits from 1) lower process cost for returning to dock (a 3.7% cost reduction compared to a nestable tote), 2) better ergonomics (the collapsible tote has undesirable pinch points), and 3) improved cycle time (an estimated 2 s to open/collapse compared to 4 s for the collapsible tote).&#13;
Additional considerations that require more analysis relate to units per container and relocation. Based on company X’s past orders and unit types for the package-less shipping process, it is estimated that ~210 units per container (17.08 cu. ft.) is the maximum achievable for North America before the container reaches the ergonomic weight cap. However, company X expects the package-less shipping distribution center process to be constrained to ~105–133 units. Analysis of container relocation from delivery facilities to distribution centers indicates it is worthwhile to investigate alternative relocation strategies in lieu of dedicated 53-foot container trailers to achieve lower relocation costs. &#13;
The collapsible bag is the best option assuming it has an expected lifetime of at least 2 years, which is when its NPV exceeds that of the two alternatives. These results are sensitive to the assumptions made, and it will be necessary to fine-tune this analysis once the end-to-end package-less shipping process has been fully mapped out.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transformation Tolerance of Facial Recognition Technology and Informative Evaluation Metrics</title>
<link href="https://hdl.handle.net/1721.1/163037" rel="alternate"/>
<author>
<name>Nakamura, Haley Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/163037</id>
<updated>2025-10-07T04:14:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transformation Tolerance of Facial Recognition&#13;
Technology and Informative Evaluation Metrics
Nakamura, Haley Marie
Over the last decade, machine learning-based facial recognition (FR) systems have continued to increase in popularity while spreading to unique deployment settings. Despite the large variance among FR input distributions, popular facial recognition benchmarks continue to characterize system performance using one aggregate score over a single dataset. In many cases, the limitations of this score are unclear to downstream users: assuming benchmark accuracy is high, how is it expected to change for an image sampled from a distinct distribution? Which transformations can the model handle robustly, and which cause failure? Meanwhile, there is a large body of human facial perception research that aims to understand the underlying mechanisms of human recognition. This field offers methodological inspiration for more informative evaluation techniques, including the characterization of recognition performance as a function of a quantifiable input transformation. This work performs such an analysis. The performance scores of five state-of-the-art FR models are characterized as a function of Gaussian blur strength, intersecting with color variation. The performance-blur relationship is modeled as an s-curve, creating a highly interpretable format for discussion. Blur strength was consistently statistically significant to performance, but color variation did not significantly impact any model. Results are then compared to prior human recognition experiments. The best models outperform humans in low-blur regimes, while humans outperform all models in high-blur regimes. These results motivate the need for modern benchmarks that capture a range of input distributions. The analysis presented can lead to a deeper understanding of FR systems and provide a clearer interpretation of how model performance changes under quantified distribution shifts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Transfer as a Lever for Accelerated Medical Device Innovation: A Case-Based Mapping Approach</title>
<link href="https://hdl.handle.net/1721.1/163036" rel="alternate"/>
<author>
<name>Magzoub, Amna Ahmed Eltayeb</name>
</author>
<id>https://hdl.handle.net/1721.1/163036</id>
<updated>2025-10-07T04:13:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design Transfer as a Lever for Accelerated Medical Device Innovation: A Case-Based Mapping Approach
Magzoub, Amna Ahmed Eltayeb
In highly regulated industries such as medical devices, accelerating New Product Development (NPD) without compromising quality or compliance is a persistent challenge. This thesis investigates the design transfer process, a critical yet under-examined phase of NPD, as a strategic lever to reduce time-to-market. The project uses swimlane flowcharts and Design Structure Matrices (DSM) to map real-world processes, identify breakpoints, and classify rework (both planned and unplanned) in four case studies from Stryker Corporation. Key patterns emerged across case types: insufficient early-stage validation, misaligned cross-functional communication, and inadequate integration with suppliers were recurrent drivers of inefficiency. Comparative analysis revealed that concurrent engineering practices and knowledge sharing significantly reduce unplanned rework cycles and improve development speed. The study proposes actionable recommendations for optimizing design transfer, including: leveraging corporate know-how through intentional knowledge transfer meetings during process benchmarking, increased risk-taking during the development process by embracing concurrent engineering approaches, and investing in early-stage co-development by adopting regular collaboration activities with suppliers. These findings can inform broader process improvements in the development of medical devices, and serve as a blueprint for other complex, cross-functional environments.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effect of the Solar Cycle on Satellite Orbital Lifetime</title>
<link href="https://hdl.handle.net/1721.1/163035" rel="alternate"/>
<author>
<name>Lisy, Celvi A.</name>
</author>
<id>https://hdl.handle.net/1721.1/163035</id>
<updated>2025-10-07T04:15:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Effect of the Solar Cycle on Satellite Orbital Lifetime
Lisy, Celvi A.
The lifetime of a satellite in Low Earth Orbit (LEO) is affected by the 11-year solar cycle. At a fixed altitude, increasing solar activity increases atmospheric density, which leads to an increase in drag and a decrease in mission lifetime without using propulsion to recover altitude. Satellites may have longer orbital lifetimes if more of their mission is operational during a solar minimum due to lower solar activity and lower atmospheric drag. Satellites with larger area-to-mass ratios generally have shorter orbital lifetimes than satellites with small area-to-mass ratios. Missions that get delayed and have more of their operations during solar maximum than originally planned may have too short a mission lifetime or, conversely, may be at risk of increasing their orbital lifetime past regulatory limits (five years for satellites in LEO according to the FCC) if they launch closer to solar minimum. For example, a satellite with an area-to-mass ratio of 0.014 m²/kg – such as a 1U CubeSat – and a one-year mission that is launched in 2021 without onboard propulsion would have an orbital lifetime of 1.051 years. However, if that mission were delayed a year, a common occurrence in the industry, it would no longer be able to achieve its mission, as its orbital lifetime with a deployment in 2022 is 0.44 years. Conversely, if the same 1U CubeSat is launched during solar maximum in January 2025, it would have an orbital lifetime of 2.2 years and would re-enter in February of 2027. However, if that mission were delayed a year, the satellite would launch in January 2026 and instead be in orbit for 6.4 years before re-entering. The operator could then be fined for violating the FCC deorbit limit of five years. This thesis quantifies the effect of launch or processing delays on satellite orbital lifetime based on orbit altitude and vehicle parameters such as mass, cross-sectional area, and bus size. In general, it is found that four-year and six-year delays have the greatest effect on a satellite’s orbital lifetime because the satellite will be deorbiting almost half a solar cycle (5.5 years) from its intended deployment year. However, two-year delays can still affect satellite operators, as they can increase the orbital lifetime, even by up to 1.5 years for low area-to-mass ratio satellites in 400 km orbits and almost five years for satellites in orbits higher than 500 km. Two-year delays can also decrease the orbital lifetime of a satellite by up to 1.7 years for low area-to-mass ratio satellites in 400 km orbits and almost two years at altitudes higher than 500 km.
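A toy illustration of the mechanism, assuming a crude exponential atmosphere and the standard circular-orbit decay rate da/dt = −√(μa)·ρ·B with ballistic coefficient B = C_d·A/m; the density constants are placeholders (a real analysis would use a solar-cycle-dependent model such as NRLMSISE-00), so the printed lifetime is indicative only.

import math

MU = 3.986e14   # Earth gravitational parameter, m^3/s^2
R_E = 6.371e6   # Earth radius, m

def density(h_m, rho0, scale_m):
    # exponential atmosphere referenced to 400 km; solar activity would
    # enter through rho0 and the scale height
    return rho0 * math.exp(-(h_m - 400e3) / scale_m)

def lifetime_years(h0_km, area_to_mass, cd=2.2, rho0=3e-12, scale_km=60.0):
    B = cd * area_to_mass
    a = R_E + h0_km * 1e3
    t, dt = 0.0, 86400.0                       # one-day Euler steps
    while a - R_E > 150e3 and 3.15e9 > t:      # stop near re-entry or after ~100 yr
        rho = density(a - R_E, rho0, scale_km * 1e3)
        a -= math.sqrt(MU * a) * rho * B * dt
        t += dt
    return t / 3.156e7

print(lifetime_years(400.0, 0.014))   # area-to-mass ratio of the 1U CubeSat above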
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Electrical Diagnostics for Nanosecond Pulsed Discharge Reactors</title>
<link href="https://hdl.handle.net/1721.1/163034" rel="alternate"/>
<author>
<name>Rao, Sankarsh R.</name>
</author>
<id>https://hdl.handle.net/1721.1/163034</id>
<updated>2025-11-24T15:39:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Electrical Diagnostics for Nanosecond Pulsed Discharge Reactors
Rao, Sankarsh R.
This thesis provides an introduction to transmission line theory (telegrapher’s equations) as the mathematical background needed to correctly perform and interpret electrical measurements in nanosecond pulsed discharge reactors. The mathematical framework is implemented in a numerical tool called VI-View, which is made available to the community to aid with the interpretation of electrical measurements and help explain discrepancies between different experimental arrangements and probe configurations. A brief manual on how to use the tool is provided, followed by a series of six case studies relevant to experimental setups/situations encountered in practice. The analysis of these case studies summarizes best practices when performing electrical and energy measurements in nanosecond pulsed discharge reactors. Case Studies 1 and 2 cover in-situ and remote measurements for reactors using one voltage and one current probe. Case Study 3 covers how two current probes, one on the high-voltage end and one on the low-voltage end, can achieve the same energy measurements as Case Studies 1 and 2. Case Studies 4 and 5 show how cables with varying lengths and dissimilar properties — as can sometimes be encountered in practice — affect the electrical signals. Case Study 6 shows how a variable resistance — a step drop from 50 MΩ to 10 Ω — within a load can be a first approximation to a plasma reactor with a discharge. Finally, an outlook on how these case studies connect to real, experimental waveforms is presented along with the limitations of the tool.
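For reference, the telegrapher’s equations underlying the tool take their standard per-unit-length form (R, L, G, C are series resistance, series inductance, shunt conductance, and shunt capacitance per unit length); VI-View’s internal discretization is not shown here:

\frac{\partial v}{\partial x} = -R\,i - L\,\frac{\partial i}{\partial t},
\qquad
\frac{\partial i}{\partial x} = -G\,v - C\,\frac{\partial v}{\partial t},
\qquad
Z_0 = \sqrt{\frac{R + j\omega L}{G + j\omega C}}

Reflections at a load Z_L follow the usual coefficient \Gamma = (Z_L - Z_0)/(Z_L + Z_0), which is why probe placement along mismatched cables changes the measured waveforms.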
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stochastic Methods for Setting Effective Aviation NOₓ&#13;
Policies</title>
<link href="https://hdl.handle.net/1721.1/163033" rel="alternate"/>
<author>
<name>Reider, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/163033</id>
<updated>2025-10-07T04:15:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Stochastic Methods for Setting Effective Aviation NOₓ&#13;
Policies
Reider, Sarah
Nitrogen Oxides (NOₓ) from aviation emissions are well known to have detrimental effects on air quality and the climate. Presently, they are regulated to preserve local air quality around airports. As part of the regulation process, aircraft engines are placed on a test stand with NOₓ levels measured at different thrust settings meant to mimic the aircraft’s emissions during landing and take-off. These are then constrained as a function of the engine’s overall pressure ratio (OPR) and rated thrust, with the allowed NOₓ emissions increasing with OPR. Despite increases in the stringency of this regulation, recent research suggests it is insufficient to protect surface air quality from degradation due to NOₓ emissions at cruise. Moreover, at high OPRs, NOₓ emissions increase substantially for relatively small reductions in fuel burn. In light of this, a new metric representative of cruise emissions is being investigated. This work considers effective methods to define this new regulation given a wide range of uncertainties in the tradeoff between NOₓ and CO₂ emissions at high OPRs. First, the combined climate and air quality cost of NOₓ from aviation cruise emissions is estimated at ∼$95,000/tonne using a 2019 flight inventory. Then, cruise limits are proposed, informed by the combined impact of NOₓ and CO₂ at cruise and with a similar slope to the current LTO standard. Finally, a Monte Carlo simulation is run, sampling NOₓ and CO₂ social costs for a series of hypothetical aircraft designed using the open-source Transportation Aircraft System OPTimization (TASOPT) model. This work takes a worst-case scenario approach, where the only response engine manufacturers can make to stricter standards is to reduce OPR and sacrifice fuel efficiency. Each aircraft’s emissions are evaluated during cruise to determine the probability of increasing environmental harm under different policy scenarios given these uncertainties. The combined cost of NOₓ and CO₂ is compared to the baseline engines that meet current regulations for each scenario. Results show that defining a cruise metric informed by the weighted combined cost of CO₂ and NOₓ could reduce total environmental cost at cruise by 15 – 43% while carrying a 6 – 7.4% risk of increasing total environmental cost for wide-body aircraft engines in the most stringent scenario. Less stringent scenarios showed similar risks of increasing harm for smaller potential environmental savings. In all cases, the risks associated with the proposed limits are driven by low-likelihood extremes in the uncertainty distributions of NOₓ and CO₂, further suggesting the benefit of an environmentally conscious standard.
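A minimal sketch of the Monte Carlo step described above; the social-cost distributions and the per-aircraft emission deltas below are invented placeholders, not the study’s inputs.

import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# hypothetical lognormal social-cost distributions, $/tonne
cost_nox = rng.lognormal(mean=np.log(95_000), sigma=0.8, size=N)
cost_co2 = rng.lognormal(mean=np.log(200), sigma=0.5, size=N)

# hypothetical per-aircraft changes for a redesigned engine vs. baseline
d_nox = -0.8   # tonnes NOx avoided at cruise
d_co2 = 50.0   # tonnes CO2 added by the reduced-OPR fuel-burn penalty

delta_cost = d_nox * cost_nox + d_co2 * cost_co2
print("mean change in environmental cost:", round(delta_cost.mean()))
print("P(policy increases total harm):", (delta_cost > 0).mean())

The last line is the quantity reported above as a 6 – 7.4% risk: the tail probability that sampled social costs make the NOₓ-for-CO₂ trade net harmful.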
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Domain-Independent Mode Estimation for Human-Robot&#13;
Collaboration</title>
<link href="https://hdl.handle.net/1721.1/163032" rel="alternate"/>
<author>
<name>Gomez, Annabel Reyna</name>
</author>
<id>https://hdl.handle.net/1721.1/163032</id>
<updated>2025-10-07T04:15:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Domain-Independent Mode Estimation for Human-Robot&#13;
Collaboration
Gomez, Annabel Reyna
To collaborate safely and intelligently with humans, robots must infer high-level semantic states, such as intentions or interaction modes, from uncertain sensor input. While dynamic, probabilistic mode estimation is commonly used in fault diagnosis, this thesis extends the problem to activity recognition, where the goal is to estimate qualitative, symbolic human-object interaction states in real time. Robust human activity recognition is essential for collaborative and assistive robotics, particularly in dynamic or safety-critical environments. The core solution presented in this thesis is a mode estimator and its efficient implementation using the A* with bounding conflicts (A*BC) algorithm. The estimator performs best-first enumeration over symbolic activity states while integrating recursive Bayesian filtering to maintain belief under noisy observations. Unlike low-level trajectory tracking or deep-learned classifiers, qualitative spatial filtering operates at the right level of abstraction to recognize symbolic actions. It can also generalize across domains with minimal retraining and support efficient, probabilistically grounded reasoning about uncertainty in both perception and symbolic mode transitions. The proposed system fuses RGB-D perception, object segmentation, qualitative spatial reasoning (QSR), and probabilistic inference into a real-time pipeline capable of tracking and inferring symbolic human-object interaction states. Evaluated in a human-robot rehabilitation setting, this domain-independent system successfully infers latent human and object activity states from noisy RGB-D data. It resolves ambiguity using Vision-Language Model (VLM)-guided semantic arbitration and demonstrates robustness and adaptability in unstructured environments. This work establishes qualitative spatial filtering with A*BC as a generalizable and efficient solution for semantic activity recognition, laying the foundation for future perception-driven collaborative systems.
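The recursive Bayesian filtering core can be sketched as a discrete Bayes filter over symbolic modes; the mode labels, transition matrix, and observation likelihood below are illustrative assumptions, and the A*BC best-first enumeration over candidate mode sequences is not shown.

import numpy as np

modes = ["idle", "reach", "grasp", "place"]   # hypothetical interaction modes

# T[i, j] = P(next mode j | current mode i), an assumed transition model
T = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.05, 0.80, 0.15, 0.00],
              [0.00, 0.05, 0.85, 0.10],
              [0.10, 0.00, 0.00, 0.90]])

def bayes_step(belief, likelihood):
    predicted = belief @ T               # predict through mode transitions
    posterior = predicted * likelihood   # weight by P(observation | mode)
    return posterior / posterior.sum()   # renormalize

belief = np.full(len(modes), 0.25)                   # uniform prior
qsr_likelihood = np.array([0.05, 0.60, 0.30, 0.05])  # noisy QSR evidence
belief = bayes_step(belief, qsr_likelihood)
print(dict(zip(modes, belief.round(3))))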
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Energy Dynamics Control for Stable Power&#13;
Electronic-Enabled Electric Power Systems</title>
<link href="https://hdl.handle.net/1721.1/163031" rel="alternate"/>
<author>
<name>Gada, Hiya Akhil</name>
</author>
<id>https://hdl.handle.net/1721.1/163031</id>
<updated>2025-10-07T04:15:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Distributed Energy Dynamics Control for Stable Power&#13;
Electronic-Enabled Electric Power Systems
Gada, Hiya Akhil
The increasing penetration of renewable and inverter-based resources is transforming modern power systems into fast, nonlinear, and heterogeneous networks. These converter-dominated systems operate on timescales much faster than traditional synchronous machines, making conventional modeling and control approaches, rooted in quasi-static phasor analysis and centralized architectures, inadequate for ensuring stability and scalability. This thesis adopts an energy space modeling approach grounded in first principles of energy conservation and system interconnection. It extends the previously introduced second-order energy dynamics model by relaxing the assumption that energy in tangent space can be treated as an independent disturbance. The resulting contribution is a third-order model that treats stored energy in tangent space as a dynamic state, enabling more expressive and accurate modeling of fast-timescale system behavior. Leveraging this extended energy space model, the thesis develops a multilayered distributed control architecture in which the nonlinear physical dynamics of each component are lifted to the higher-level linear energy space, capturing internal energy dynamics and real/reactive power flows, and integrated with the lower-level physical dynamics with well-defined mappings. Distributed controllers are designed in this energy space using only local states and minimal neighbor interaction, assuming a system-level coordination mechanism provides consistent references. Two control strategies, energy-based feedback linearizing control and sliding mode control, are developed and shown to achieve asymptotic convergence to reference outputs. The framework is validated on two systems: an inverter-controlled RLC circuit and a synchronous generator under load. Finally, the energy space framework is extended to structurally model inter-area oscillations (IAOs). An inter-area variable is defined as the difference between power incident on a tie-line from Area I and power reflected into the tie-line from Area II. Simulations on a 3-bus, 2-area system confirm consistency with eigenmode analysis and show how tie-line strength and generator inertia affect IAO dynamics. A novel resonance phenomenon is also identified: instability arising from interaction between a system’s natural IAO frequency and time-varying disturbances from intermittent DERs. This previously unmodeled behavior is captured explicitly within the energy dynamics framework and may help explain recent blackout events in the Iberian Peninsula.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pushing the Limits of Active Data Selection with Gradient Matching</title>
<link href="https://hdl.handle.net/1721.1/163030" rel="alternate"/>
<author>
<name>Zhang, Chris</name>
</author>
<id>https://hdl.handle.net/1721.1/163030</id>
<updated>2026-01-23T15:40:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Pushing the Limits of Active Data Selection with Gradient Matching
Zhang, Chris
As modern machine learning systems grow in scale, the inefficiencies of training on large, noisy, and imbalanced datasets have become increasingly pronounced—particularly in computer vision, where real-world data often contain labeling errors, occlusions, and redundancy. While large models can partially compensate by training exhaustively on massive datasets, this indiscriminate approach is computationally expensive and often inefficient. Active data selection offers a more efficient alternative by prioritizing examples that contribute most to model improvement. However, existing selection strategies (such as Rho Loss) still fall short of the optimal achievable performance. In this work, we propose the Gradient Informed Selection Technique (GIST), an active data selection method that prioritizes examples based on their gradient alignment with a small, fixed holdout set. At each training step, GIST computes per-example gradients and selects those that are most aligned with the holdout gradient, thereby guiding model updates toward better generalization. We evaluate GIST on noisy (Clothing1M) and clean (ImageNet) datasets and show that it consistently outperforms baselines across a range of selection ratios—that is, the proportion of a batch of data that the model selects to update weights on. To address the computational overhead of gradient-based selection, we introduce efficient variants using restricted-layer gradients, low-rank approximations, and gradient quantization. We also analyze GIST’s selection behavior, showing that it implicitly balances classes and repeatedly selects high-utility examples—two factors that enhance both robustness and learning efficiency. Our findings suggest that a more effective data curriculum is both discoverable and practical, and that GIST is a step toward achieving it.
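A minimal sketch of the selection rule, assuming per-example gradients have already been flattened into vectors; the function name and shapes are illustrative, not the implementation evaluated above.

import numpy as np

def gist_select(per_example_grads, holdout_grad, ratio):
    # normalize, then score each example by cosine alignment with the
    # holdout gradient and keep the top fraction given by the selection ratio
    g = per_example_grads / np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    h = holdout_grad / np.linalg.norm(holdout_grad)
    alignment = g @ h
    k = max(1, int(ratio * len(alignment)))
    return np.argsort(-alignment)[:k]

rng = np.random.default_rng(1)
grads = rng.normal(size=(8, 100))   # stand-in per-example gradients
holdout = rng.normal(size=100)      # stand-in holdout-set gradient
print(gist_select(grads, holdout, ratio=0.25))

The restricted-layer, low-rank, and quantized variants mentioned above all reduce the cost of forming per_example_grads before this same scoring step.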
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Phase Transition for Recovering a Random&#13;
Hypergraph from its Edge Data</title>
<link href="https://hdl.handle.net/1721.1/163029" rel="alternate"/>
<author>
<name>Yao, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/163029</id>
<updated>2025-10-07T04:14:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Phase Transition for Recovering a Random&#13;
Hypergraph from its Edge Data
Yao, Andrew
The weighted projection of a hypergraph is the weighted undirected graph with the same vertex set and edge weight equal to the number of hyperedges that contain the edge; the projection is the unweighted graph with the same vertex set and edge set consisting of edges with weight at least one. For d ≥ 3, after observing the unweighted and weighted projection of a random d-uniform hypergraph that is sampled using a generalization of the Erdős–Rényi random model, we study the recovery of a fraction of the hyperedges and the entire hypergraph. For both cases, we show that there is a sharp phase transition in the feasibility of recovery based on the density of the hypergraph, with recovery possible only when the hypergraph is sufficiently sparse. In particular, we resolve several conjectures from [5]. Furthermore, we present an efficient algorithm that is optimal for both exact and partial recovery. We also analyze the phase transition for exact recovery by exhibiting a regime of probabilities that is below the exact recovery threshold by a polylogarithmic factor for which exact recovery is possible.
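The forward map in the first sentence is easy to compute; the sketch below builds both projections of a small 3-uniform hypergraph, whereas the thesis studies the far harder inverse problem of recovering the hyperedges from these observations.

from collections import Counter
from itertools import combinations

def weighted_projection(hyperedges):
    # edge weight = number of hyperedges containing that pair of vertices
    w = Counter()
    for he in hyperedges:
        for u, v in combinations(sorted(he), 2):
            w[(u, v)] += 1
    return w

H = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]   # a 3-uniform hypergraph
weights = weighted_projection(H)
print(dict(weights))     # weighted projection
print(sorted(weights))   # unweighted projection (edges of weight at least one)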
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empowering Mobile-Only App Generation — Offline AI&#13;
Code Generation with App Inventor</title>
<link href="https://hdl.handle.net/1721.1/163028" rel="alternate"/>
<author>
<name>Yuan, Joyce</name>
</author>
<id>https://hdl.handle.net/1721.1/163028</id>
<updated>2025-10-07T04:14:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Empowering Mobile-Only App Generation — Offline AI&#13;
Code Generation with App Inventor
Yuan, Joyce
As digital tools become more accessible, creating software is becoming a powerful way for anyone to make real-world impact. Computational action—the idea that learners can build computing artifacts with authentic relevance to their lives and communities—reframes computing as a tool for empowerment. Low-code platforms like MIT App Inventor support this vision by fostering digital agency through purposeful creation. Recent advances in large language models (LLMs) expand these possibilities further by enabling code generation from natural language, offering a timely opportunity to lower the barrier to app creation. MIT App Inventor has long championed accessibility, allowing even young learners in underserved regions to build meaningful mobile apps. Its natural language tool, Aptly, enables users to describe app ideas and generate functional code. However, Aptly’s reliance on cloud-based LLMs limits access for users without stable internet—often those who could benefit most. This thesis addresses that challenge by enabling AI-powered app creation to run entirely offline on mobile devices. We fine-tune and quantize LLaMA 3B using QLoRA and deploy it on iOS with MLC LLM, enabling on-device inference without internet. We also introduce a custom evaluation framework tailored to Aptly’s grammar, combining a Tree-sitter parser and a modified CodeBLEU metric to assess both semantic and syntactic quality. Using curated evaluation datasets, we benchmark out-of-the-box and fine-tuned models across prompting strategies. In our evaluations, fine-tuned GPT-4.1 achieved the highest normalized CodeBLEU score (0.36 ± 0.12) and parsed over 81% of completions, outperforming its baseline by more than 5%. QLoRA-finetuned LLaMA improved parseability by 11.7% over its base model, showing progress in adapting smaller models to the Aptly domain, though semantic fidelity remains a challenge. Our results show that offline natural language–to–app generation is feasible, and that smaller models can be adapted to the Aptly domain. By lowering the technical and infrastructural barriers to app creation, this work lays the foundation to empower AI-assisted programming that is accessible, offline, and on the phone.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AutoDiff: A Scalable Framework for Automated Model&#13;
Comparison</title>
<link href="https://hdl.handle.net/1721.1/163027" rel="alternate"/>
<author>
<name>Woo, Andrew Kyoungwan</name>
</author>
<id>https://hdl.handle.net/1721.1/163027</id>
<updated>2025-10-07T04:15:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">AutoDiff: A Scalable Framework for Automated Model&#13;
Comparison
Woo, Andrew Kyoungwan
Post-training adaptations such as supervised fine-tuning, quantization, and reinforcement learning can cause large language models (LLMs) with identical architectures to exhibit divergent behaviors. However, the mechanisms driving these behavioral shifts remain largely opaque, limiting the reliability and interpretability of adapted models. AutoDiff is a scalable, automated framework for tracing model divergence on a per-neuron basis. It exhaustively profiles every feed-forward (MLP) unit across a pair of models, identifies the neurons with the largest activation gaps, and links these differences to downstream behavioral changes. The pipeline identifies exemplars that maximize between-model activation divergence and clusters the highest-gap neurons into an interpretable, queryable difference report. Proof-of-concept experiments on GPT-2 small validate AutoDiff’s ability to rediscover synthetic perturbations without manual supervision. A larger case study on Llama3.1–8B contrasts the base model with several adapted variants, surfacing neurons whose behavioral shifts align with observed topic-level gains and losses. By uncovering these mechanistic divergences, AutoDiff transforms black-box model updates into actionable insights, enabling safer deployment, principled debugging, and interpretable model evaluation.
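A minimal sketch of the per-neuron gap ranking at the heart of the pipeline, assuming activations of the same layer have already been collected from both models on a shared set of exemplars; the shapes and the synthetic perturbation are illustrative.

import numpy as np

def top_divergent_neurons(acts_a, acts_b, k=5):
    # acts_a, acts_b: (n_exemplars, n_neurons) activations from the two models
    gap = np.abs(acts_a - acts_b).mean(axis=0)   # mean per-neuron divergence
    return np.argsort(-gap)[:k]

rng = np.random.default_rng(2)
base = rng.normal(size=(64, 512))   # stand-in activations, base model
adapted = base.copy()
adapted[:, 42] += 3.0               # synthetic perturbation on unit 42
print(top_divergent_neurons(base, adapted))   # unit 42 should rank first

Clustering these high-gap units and searching for exemplars that maximize the divergence, as described above, then turns the ranking into a queryable difference report.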
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Schrödinger’s Carbon: Until Measured, Operational Emissions Remain Uncertain</title>
<link href="https://hdl.handle.net/1721.1/163026" rel="alternate"/>
<author>
<name>Xia, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/163026</id>
<updated>2025-10-07T04:15:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Schrödinger’s Carbon: Until Measured, Operational Emissions Remain Uncertain
Xia, Julia
Rapidly improving generative artificial intelligence has led to significant investments in datacenter infrastructure, driving power demand and raising environmental concerns. This has led to a growing body of research towards modeling embodied and operational carbon of datacenter servers across a variety of paradigms. However, most existing models take in deterministic inputs and output a singular average value that does not capture the inherent variability in estimating embodied and operational carbon emissions. Further, these average outputs obscure the impact of interacting factors, such as those related to deployment or software characteristics, each of which has its own underlying uncertainty distribution. This means that, in most cases, these averages do not accurately represent a particular server’s context. This thesis explicitly parameterizes and quantifies the full probabilistic distribution of operational carbon in AI inference tasks. It explores several factors of variability — deployment, spatiotemporal, and computational profile — and quantifies their impact on the overall carbon footprint through statistical and sensitivity analysis. While this work focuses on operational carbon, uncertainty propagation and understanding of variability should be used across a datacenter server’s entire life cycle. When this methodology is used alongside the existing uncertainty-aware embodied carbon measurements, it enables a holistic assessment from cradle to grave. This facilitates informed decision-making in server replacement, workload scheduling, hardware procurement, capacity planning, and more scenarios.
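A minimal sketch of propagating input uncertainty into a distribution of operational carbon rather than a single average; every distribution below is an invented placeholder, not a measured one.

import numpy as np

rng = np.random.default_rng(3)
N = 50_000

power_kw = rng.normal(0.7, 0.15, N).clip(min=0.1)        # server draw during inference
utilization = rng.beta(2, 5, N)                          # fraction of time serving
grid_g_per_kwh = rng.normal(400, 120, N).clip(min=20)    # carbon intensity by region/hour

hours = 24 * 365
kg_co2 = power_kw * utilization * hours * grid_g_per_kwh / 1000.0

for p in (5, 50, 95):
    print(f"p{p}: {np.percentile(kg_co2, p):,.0f} kg CO2/yr")

Reporting the p5/p95 spread instead of only the mean is the shift in practice the thesis argues for.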
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ideator Explorer: Enhancing AI-Assisted Ideation through&#13;
Interactive Visualization</title>
<link href="https://hdl.handle.net/1721.1/163025" rel="alternate"/>
<author>
<name>Wen, Haoran</name>
</author>
<id>https://hdl.handle.net/1721.1/163025</id>
<updated>2025-10-07T04:15:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ideator Explorer: Enhancing AI-Assisted Ideation through&#13;
Interactive Visualization
Wen, Haoran
Current AI-assisted ideation systems, often based on linear chat interfaces, struggle to help users effectively manage the complexity of creative exploration, hindering both divergent thinking across multiple paths and the convergent synthesis of ideas. This thesis introduces and evaluates Ideator Explorer, a human-AI ideation system built upon an interactive graph visualization interface designed to overcome these limitations. The core of the system is its spatial, tree-like representation of branching idea sequences. Formative user studies indicate that this visualization approach is preferred over chat interfaces for its organizational benefits and its effectiveness in helping users track parallel lines of thought during exploration. The spatial layout inherently supports both the exploration of diverse idea branches (divergence) and the identification of potential connections (convergence). This research focuses on the design and evaluation of this interactive graph interface, examining how its specific visualization and interaction techniques impact the user’s ability to navigate, organize, and develop ideas within complex ideation processes. The primary contribution is a novel, visually driven interface paradigm for human-AI collaboration that enhances the management and exploration of the creative solution space.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Type Checker for Annotated Assembly Programs</title>
<link href="https://hdl.handle.net/1721.1/163024" rel="alternate"/>
<author>
<name>Zanders, Julian</name>
</author>
<id>https://hdl.handle.net/1721.1/163024</id>
<updated>2025-10-07T04:14:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Type Checker for Annotated Assembly Programs
Zanders, Julian
The rise of speculative-execution attacks, such as Spectre, has presented a security challenge to developers. Speculation on secret data can expose it, but running without speculation is suboptimal for runtime. To fix this, researchers have been evaluating “smart” speculation schemes, which determine when to speculate and when not to in order to balance runtime with security. Our lab proposes Octal, a solution that utilizes software and hardware in tandem. Data values are marked as secret or public using type inference, and the veracity of inference is checked using a type checker. Then, hardware can separate the secret and public values. My contributions were to the type checker, as well as some scripting to evaluate results.
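The flavor of such a check can be sketched with a two-point lattice (public below secret) and a join over instruction inputs; the labels and instruction format below are illustrative, not Octal’s actual annotation language.

PUBLIC, SECRET = 0, 1

def join(a, b):
    return max(a, b)   # SECRET joined with anything is SECRET

def check(program, env):
    # program: list of (dest, src1, src2); env: register-to-label map
    for dest, s1, s2 in program:
        inferred = join(env[s1], env[s2])
        declared = env.get(dest, inferred)
        if inferred > declared:   # secret data flowing into a public register
            raise TypeError(dest + " declared public but receives secret data")
        env[dest] = declared
    return env

env = {"r1": SECRET, "r2": PUBLIC}
print(check([("r3", "r1", "r2")], env))   # r3 is correctly inferred SECRET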
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Real-Time Non-Line-of-Sight Imaging Using&#13;
Single-Photon LiDAR</title>
<link href="https://hdl.handle.net/1721.1/163023" rel="alternate"/>
<author>
<name>Tsao, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/163023</id>
<updated>2025-10-07T04:15:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Real-Time Non-Line-of-Sight Imaging Using&#13;
Single-Photon LiDAR
Tsao, Nicholas
Robust real-time imaging systems have allowed for many advances in robotics and autonomous navigation, though limited visibility in many real-world settings remains a significant challenge. Non-Line-of-Sight (NLOS) sensing allows imaging systems to “see around corners”, expanding their range of perception and providing access to information for real-time decision-making. A promising approach to NLOS sensing is through single-photon LiDAR, which is commonly used for range-finding in many imaging systems. In addition to range-finding, single-photon LiDAR systems can provide a deeply rich data source in the form of photon count histograms after reflecting off scene geometry, capturing detailed information from multiple bounces. NLOS imaging can be achieved by parsing third-bounce light from such single-photon LiDAR sensors, which can be used for a variety of detection and localization tasks, and recent work has demonstrated capabilities in a wide range of applications. This work aims to further develop the NLOS imaging system by demonstrating a fully functional NLOS system using low-cost, consumer-grade SPAD hardware for real-time NLOS imaging, detection, and localization. We lay the groundwork for NLOS imaging systems by developing infrastructure for NLOS processing in real-time, and we examine the potential for NLOS systems to operate on cheap hardware using data-driven approaches. Our work implements and demonstrates full end-to-end capacity for these NLOS imaging systems in a number of applications including person detection and localization, facilitating future research in this field and paving the way for NLOS integration into consumer devices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Finetuning via Sparse Autoencoders</title>
<link href="https://hdl.handle.net/1721.1/163022" rel="alternate"/>
<author>
<name>Sivakumar, Ragulan</name>
</author>
<id>https://hdl.handle.net/1721.1/163022</id>
<updated>2025-10-07T04:14:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automated Finetuning via Sparse Autoencoders
Sivakumar, Ragulan
The field of interpretability has traditionally been confined to diagnostics. However, this thesis presents a novel method using interpretability in sparse autoencoders to achieve better performance in small models via instruction finetuning. Specifically, we present UnderstandTune, an autonomous method for assembling high-quality instruction finetuning datasets with minimal human intervention, requiring only concise task descriptions rather than evaluation dataset distributions. Our empirical evaluations show that UnderstandTune consistently outperforms uninformed finetuning baselines across multiple benchmarks. Complementing this, Lalon introduces a mixture-of-informed-experts (MoIE) architecture that routes queries to specialized models independently finetuned via UnderstandTune. This modular approach achieves competitive performance against larger monolithic models in specialized domains, while utilizing fewer parameters, training examples, and computational resources. The framework’s modularity enables independent optimization of components from sparse autoencoders to MoIE routing mechanisms. This research demonstrates how interpretability can be used to enhance performance through intelligent data curation and suggests a new paradigm where interpretability and efficiency reinforce each other toward more capable, resource-efficient AI systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative Machine Learning Models for RNA Structure&#13;
Prediction and Design</title>
<link href="https://hdl.handle.net/1721.1/163021" rel="alternate"/>
<author>
<name>Rubin, Dana</name>
</author>
<id>https://hdl.handle.net/1721.1/163021</id>
<updated>2025-10-07T04:14:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Generative Machine Learning Models for RNA Structure&#13;
Prediction and Design
Rubin, Dana
Ribonucleic acid (RNA) is a fundamental molecule in biology, central to the regulation and execution of life’s most essential processes. Its diverse roles range from encoding genetic information to catalyzing biochemical reactions. Beyond its modern biological functions, RNA is also believed to have played a pivotal role in the origins of life, which underscores the evolutionary significance of RNA. Unlocking the full potential of RNA research and design requires a deep understanding of the intricate relationship between RNA’s three-dimensional structure and sequence. Predicting RNA 3D structures remains a challenging problem due to the complexity of its folding landscape and the limited availability of high-resolution structural data. Inspired by recent advances in deep learning for protein folding and design, this thesis explores novel geometric and generative architectures for modeling RNA. We first present a systematic study on RNA structure prediction using equivariant neural networks within denoising diffusion probabilistic models (DDPMs). Our folding model, named Klotho, captures local atomic interactions and structural features using SO(3)-equivariant message passing layers with a point cloud data representation. Ablation studies confirm that Klotho’s model performance scales with higher dimensionality and improves with enriching the input with secondary structure information and sequence embeddings from RNA foundation models. Building on this foundation, we introduce RiboGen, a multimodal deep learning model to jointly generate both RNA sequence and all-atom 3D structure. RiboGen integrates Flow Matching and Discrete Flow Matching within a unified multimodal representation and employs Euclidean Equivariant Neural Networks to learn geometric features. Our results demonstrate that RiboGen can generate chemically plausible, self-consistent RNA molecules, highlighting the potential of co-generative models to explore the sequence–structure landscape of RNA in a unified, data-driven framework. Together, these contributions advance the field of RNA modeling by offering scalable, symmetry-aware architectures for prediction and design. They lay the groundwork for future generative systems in RNA biology, therapeutic development, and biotechnological innovations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing a Data-Centric Framework for Predictive Maintenance of Wind Turbines</title>
<link href="https://hdl.handle.net/1721.1/163019" rel="alternate"/>
<author>
<name>Pan, Raymond</name>
</author>
<id>https://hdl.handle.net/1721.1/163019</id>
<updated>2026-01-23T15:35:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enhancing a Data-Centric Framework for Predictive Maintenance of Wind Turbines
Pan, Raymond
Predictive maintenance of wind turbines is a machine learning task aimed at minimizing repair costs and improving efficiency in the wind turbine and renewable energy industry. Existing machine learning solutions often fail to meet real-world deployment requirements due to fragmented pipelines, lack of domain integration, and reliance on black-box models. Zephyr, a data-centric machine learning framework, addresses these challenges by enabling Subject Matter Experts (SMEs) to incorporate their domain knowledge into the prediction process, and to leverage automated tools for labeling, feature engineering, and prediction tasks without requiring extensive technical knowledge. However, the current version of Zephyr still has limitations, including usability gaps and a reliance on external tools for certain steps. Case studies with real-world data from the renewable energy company Iberdrola demonstrate Zephyr’s potential to integrate domain expertise into wind turbine predictive maintenance (thus streamlining the process) but also expose a sub-optimal user experience. This thesis explores gaps in the current state of the Zephyr framework and proposes refinements to enhance its usability. Key improvements include the consolidation of current tooling and relevant external libraries into a single API, state management with careful logging and exception handling, and improved support for model evaluation. These enhancements aim to support seamless end-to-end predictive modeling workflows, and to provide a more refined and flexible user experience for the Zephyr user base.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimalist Approach to End-to-End Vision Language&#13;
Navigation with Multi-Modal Foundation Model Features</title>
<link href="https://hdl.handle.net/1721.1/163018" rel="alternate"/>
<author>
<name>Mishra, Kartikesh</name>
</author>
<id>https://hdl.handle.net/1721.1/163018</id>
<updated>2025-10-07T04:14:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Minimalist Approach to End-to-End Vision Language&#13;
Navigation with Multi-Modal Foundation Model Features
Mishra, Kartikesh
Recent vision-language navigation (VLN) approaches leverage large models, prompt engineering, and/or explicit reasoning for instruction interpretation and agent guidance. We introduce MiniNav, a minimalist framework employing frozen vision-language foundation models as patch-wise feature extractors, avoiding data- and compute-heavy fine-tuning and cumbersome language model reasoning. Our lightweight control policies (∼ 10⁵ trainable parameters) are trained on a compact dataset of language-specified navigational behaviors (∼ 10² runs, ∼ 10⁴ frames per behavior). We demonstrate generalization to novel objects and scenes, including direct real-world transfer, despite training on only two objects in a single simulated environment. Through its simple and scalable design, MiniNav provides an alternative to computationally intensive pipelines for robust real-world instruction-following. Our solution can provide a reference for evaluating the effective edge of more complex and larger VLN policies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Sampling: A Framework for Enhancing Speed&#13;
and Performance of Financial Fraud Detection Models</title>
<link href="https://hdl.handle.net/1721.1/163017" rel="alternate"/>
<author>
<name>Mitchell, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/163017</id>
<updated>2025-10-07T04:14:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Sampling: A Framework for Enhancing Speed&#13;
and Performance of Financial Fraud Detection Models
Mitchell, Samuel
Financial fraud detection is a high-stakes field where rapid inference is essential. While state-of-the-art fraud detection models vary in terms of architectural decisions and appear to exhibit unique computational bottlenecks, we highlight that their run-times are all dominated by extensive information-gathering steps. These steps involve aggregating information from a large set of nodes or edges within a graph, and these intensive steps are performed O(|V|) or O(|E|) times during an inference forward pass, on a graph with |V| nodes and |E| edges. We introduce Strategic Sampling, a general method to accelerate these information-gathering steps. Our approach tailors sampling strategies based on the specific objective function used in each model’s information-gathering process, selecting the most relevant pieces of information to use in each step. This ensures that critical information is retained while significantly reducing the amount of data processed, thus speeding up the computation. We conceptually demonstrate how Strategic Sampling can be applied to message-passing Graph Neural Networks, Graph Transformers, and TGEditor (a state-of-the-art graph editing algorithm). To showcase the effectiveness of our proposed Strategic Sampling method, we implement it in the TGEditor codebase. Our results show that Strategic Sampling not only significantly reduces computation time by more than an order of magnitude, but also improves the F1 score, enhancing both efficiency and performance. This study underscores the potential of Strategic Sampling to universally boost the performance of various financial fraud detection models, paving the way for faster and more accurate fraud detection.
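A minimal sketch of the idea for one aggregation step, assuming a precomputed relevance score stands in for the model-specific objective; the names and the scoring proxy are illustrative, not TGEditor’s implementation.

import numpy as np

def sampled_aggregate(node_feats, neighbor_ids, scores, k):
    # keep only the k neighbors most relevant under the objective-derived
    # score, then aggregate, cutting each step's cost from degree to k
    keep = neighbor_ids[np.argsort(-scores)[:k]]
    return node_feats[keep].mean(axis=0)

rng = np.random.default_rng(4)
feats = rng.normal(size=(1000, 16))                # node features
nbrs = rng.choice(1000, size=200, replace=False)   # a high-degree node's neighbors
scores = np.linalg.norm(feats[nbrs], axis=1)       # proxy relevance score
print(sampled_aggregate(feats, nbrs, scores, k=20).shape)   # touch 20 of 200 nodes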
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization Techniques for Trustworthy 3D Object Understanding</title>
<link href="https://hdl.handle.net/1721.1/163014" rel="alternate"/>
<author>
<name>Shaikewitz, Lorenzo Franceschini</name>
</author>
<id>https://hdl.handle.net/1721.1/163014</id>
<updated>2025-10-07T04:14:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimization Techniques for Trustworthy 3D Object Understanding
Shaikewitz, Lorenzo Franceschini
Autonomous machines require reliable 3D object understanding to interpret and interact with their environment. In this thesis, we consider two tightly coupled 3D object understanding problems. Shape estimation seeks a consistent 3D model of an object given sensor data and some set of priors. Pose estimation seeks an estimate of the object’s position and orientation relative to an invariant shape frame. In general, these problems are non-convex and thus difficult to solve. We present algorithms which nonetheless solve shape and pose estimation efficiently and with assurances in terms of optimality, uncertainty, or latency. We begin in the multi-frame tracking setting, where we propose the certifiably optimal estimator CAST⋆ for simultaneous shape estimation and object tracking. CAST⋆ uses 3D keypoint measurements extracted from an RGB-D image sequence and phrases the estimation as fixed-lag smoothing. Temporal constraints enforce rigidity and continuous motion. Despite the non-convexity of this problem, we solve it to certifiable optimality using a small-size semidefinite relaxation. We also present a compatibility-based outlier rejection scheme to handle outliers, and evaluate the proposed approach on synthetic and real data. Next, we focus on estimating the pose of an object given its shape and a single RGB image (no depth). Assuming only bounded noise on 2D keypoint measurements (e.g., from conformal prediction), we derive an estimator for the most likely object pose which uses a semidefinite relaxation to initialize a local solver. We pair this with an efficient uncertainty estimation routine which relies on a generalization of the S-Lemma to propagate keypoint uncertainty to high-probability translation and rotation bounds. The high-probability bounds hold regardless of the accuracy of the pose estimate, and are reasonably tight when tested on the LineMOD-Occluded dataset. Lastly, we propose a sub-millisecond solution to simultaneous estimation of object shape and pose from a single RGB-D image. Our approach converts the first-order optimality conditions of the non-convex optimization problem to a nonlinear eigenproblem in the quaternion representation of orientation. We use self-consistent field iteration to efficiently arrive at a local stationary point, finding solutions more than an order of magnitude faster than Gauss-Newton or on-manifold local solvers on synthetically generated data.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Joint Localization and Synchronization via User&#13;
Cooperation in Non-Terrestrial Networks</title>
<link href="https://hdl.handle.net/1721.1/163013" rel="alternate"/>
<author>
<name>Morrison, James C.</name>
</author>
<id>https://hdl.handle.net/1721.1/163013</id>
<updated>2025-10-07T04:12:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Joint Localization and Synchronization via User&#13;
Cooperation in Non-Terrestrial Networks
Morrison, James C.
Next-generation (xG) wireless networks require accurate localization and synchronization for efficient resource management and emerging applications. Non-terrestrial networks (NTN) with low Earth orbit (LEO) satellites offer a promising alternative for positioning, navigation, and timing (PNT) by providing diversity and increasing the signal-to-noise ratio (SNR) over global navigation satellite systems (GNSS). However, the primary challenge in NTN-based localization with LEO satellites is the lack of precise clock synchronization, which introduces biases in time-of-arrival (TOA) measurements and limits localization accuracy. This paper introduces a joint cooperative localization and synchronization (JCLS) framework that addresses this challenge through spatiotemporal cooperation, soft information, and simultaneous synchronization. Furthermore, we propose a three-step algorithm for performing JCLS. The first step calculates a coarse position estimate using TOA measurements and the Gauss-Newton method. Then, this coarse estimate is updated using the Levenberg-Marquardt method, which performs joint localization and synchronization. Finally, we derive a soft information-based filter that is used to continuously refine the position and clock error estimates as new measurements are available. We characterize the fundamental performance limits of JCLS using Fisher information, which offers insight into its localization and synchronization accuracy bounds. Furthermore, simulation results based on TOA measurements of the 3rd Generation Partnership Project (3GPP) 5G New Radio positioning reference signal (PRS) demonstrate that the proposed algorithm for JCLS significantly improves localization and synchronization accuracy compared to non-cooperative methods.
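A minimal sketch of the first step only, the Gauss-Newton coarse estimate from TOA pseudoranges with an unknown receiver clock bias; the geometry and units are illustrative, and the Levenberg-Marquardt refinement and soft information-based filter are not shown.

import numpy as np

def gauss_newton_toa(sats, pseudoranges, x0, iters=10):
    # state x = [x, y, z, clock bias expressed in meters]
    x = x0.astype(float)
    for _ in range(iters):
        diffs = x[:3] - sats
        dists = np.linalg.norm(diffs, axis=1)
        r = dists + x[3] - pseudoranges                    # residuals
        J = np.hstack([diffs / dists[:, None], np.ones((len(sats), 1))])
        x -= np.linalg.lstsq(J, r, rcond=None)[0]          # Gauss-Newton step
    return x

sats = np.array([[7e6, 0, 2e6], [0, 7e6, 2e6], [-7e6, 0, 2e6],
                 [0, -7e6, 2e6], [5e6, 5e6, 3e6]])         # stand-in LEO positions
truth = np.array([1e5, 2e5, 6.4e6, 300.0])
rho = np.linalg.norm(truth[:3] - sats, axis=1) + truth[3]  # noiseless pseudoranges
print(gauss_newton_toa(sats, rho, np.array([0.0, 0.0, 6e6, 0.0])))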
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multipartite Quantum Clock Synchronization Via Collective Symmetric States</title>
<link href="https://hdl.handle.net/1721.1/163012" rel="alternate"/>
<author>
<name>Keskin, Ufuk</name>
</author>
<id>https://hdl.handle.net/1721.1/163012</id>
<updated>2026-01-16T19:10:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multipartite Quantum Clock Synchronization Via Collective Symmetric States
Keskin, Ufuk
This thesis investigates multipartite quantum clock synchronization (QCS) tasks using a class of quantum states, called collective symmetric (CS) states, which generalize Dicke and N00N states. Employment of CS states in previous QCS procedures is shown to improve synchronization performance in various network scenarios. The focus of the thesis is on QCS procedures that, after the distribution of quantum states, rely exclusively on local operations and classical communication (LOCC), ensuring compatibility with highly noisy quantum channels. Two synchronization scenarios are considered: (i) synchronization between the two nodes of an arbitrarily chosen pair of nodes, and (ii) global synchronization where all nodes wish to synchronize their clocks to a common average time. First, a framework in which the previous procedures operate employing the CS states is introduced. Using this framework, possible limitations of the QCS procedures in terms of estimation ambiguity and lack of robustness are pointed out. Second, a procedure referred to as the tactical delay procedure (TDP) is proposed for each of the two synchronization scenarios. The TDP resolves the mentioned limitations and outperforms the state-of-the-art multipartite QCS procedures in terms of synchronization precision without requiring additional quantum resources.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Embedded HOWFSC Algorithms</title>
<link href="https://hdl.handle.net/1721.1/163011" rel="alternate"/>
<author>
<name>Eickert, Brandon</name>
</author>
<id>https://hdl.handle.net/1721.1/163011</id>
<updated>2025-10-07T04:14:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Accelerating Embedded HOWFSC Algorithms
Eickert, Brandon
The quest to directly image planets of other solar systems demands not only state-of-the-art coronagraphs but also extreme performance from space-based processors. Direct imaging requires precise wavefront control to acquire the 10¹⁰ contrast necessary to reveal a dim, Earth-like exoplanet. This precise level of control is only possible if high-order wavefront sensing and control (HOWFSC) algorithms are executed with enough speed to offset wavefront error accumulation. Of the many aspects that make high-contrast imaging difficult, a central bottleneck is the speed at which we can run these algorithms. At the center of this work, we aim to accelerate the execution of two foundational HOWFSC algorithms: optical modeling and Electric Field Conjugation (EFC). Optical modeling underpins both Jacobian-based EFC, and a relatively new variant of EFC, called adjoint-based EFC (AD-EFC).&#13;
The two main contributions of this thesis are to port bottleneck HOWFSC algorithms to the relevant computing environments, and quantify speedups attained by both algorithm choice and implementation optimization. This work explores the acceleration of optical modeling for a vector vortex coronagraph through the use of the FFTW library, and the acceleration of EFC by implementing adjoint-based EFC in an embedded context. We utilize functional analogs to radiation-hardened processors, using the NXP T1040 in place of the BAE RAD5545, and the NXP LS1046 in place of the LS1046-Space. We find that the FFTW library enabled a factor of six speedup for 4096 × 4096 fast Fourier transforms (FFTs), and a factor of five for 2048 × 2048 FFTs. With these significant speedups, the bottleneck within the vortex operations of the optical model shifts from the FFT to matrix multiplication. We additionally time the execution of the underlying routines of Jacobian-based EFC and AD-EFC to estimate that AD-EFC is 46 times faster than Jacobian-based EFC. Despite these speedups, AD-EFC is still a factor of 124 away from 100-second latency for our specific optical model. These results demonstrate that one to two orders of magnitude of speedup must be attained by either further optimizing algorithm implementations, or exploring other parallelization strategies, computing architectures, and mission paradigms to achieve a latency on the order of 100 seconds.
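For orientation, Jacobian-based EFC is commonly posed as a regularized least-squares update of the deformable-mirror command u against the estimated focal-plane field E (real and imaginary parts stacked); the thesis’s exact formulation may differ in detail:

\Delta u = -\left(G^{\mathsf{T}} G + \lambda I\right)^{-1} G^{\mathsf{T}} E

Here G is the Jacobian of the field with respect to the actuators and \lambda a Tikhonov regularization parameter. Adjoint-based EFC avoids building and storing G explicitly by differentiating through the optical model, which is consistent with the large speedup measured above.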
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Formalizing Causal Models Through the Semantics of Conditional Independence</title>
<link href="https://hdl.handle.net/1721.1/163010" rel="alternate"/>
<author>
<name>Zhang, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/163010</id>
<updated>2026-01-21T18:53:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Formalizing Causal Models Through the Semantics of Conditional Independence
Zhang, Anna
Many foundational tools in causal inference are based on graphical structure and can involve complex conditions that obscure the underlying causal logic. Given the inherent complexity and subtlety of cause-and-effect phenomena, establishing formal guarantees about these tools is both challenging and important. This thesis presents a semantics-driven formalization of causal models within the Coq proof assistant, enabling precise, mechanized reasoning about causal relationships. Central to this work is a new function-based definition of conditional independence, which captures how changes propagate through a causal graph. We prove that this semantic notion is equivalent to the standard graphical criterion of d-separation, thereby establishing a rigorous bridge between structural and semantic interpretations of independence. The formalization includes a library of graph-theoretic and causal-reasoning tools, encompassing key concepts such as mediators, confounders, and colliders. By linking the syntactic and semantic perspectives on causality, this work lays a robust foundation for formally verifying causal assumptions and guiding experimental design.
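For reference, the textbook graphical criterion that the semantic definition is proved equivalent to reads:

A path $p$ is blocked by a set $Z$ iff $p$ contains a chain $x \rightarrow m \rightarrow y$ or a fork $x \leftarrow m \rightarrow y$ with $m \in Z$, or a collider $x \rightarrow m \leftarrow y$ such that neither $m$ nor any descendant of $m$ is in $Z$; $X$ and $Y$ are d-separated by $Z$ iff $Z$ blocks every path between $X$ and $Y$.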
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contextual Knowledge Sharing in Multi-Agent Long-Horizon Planning Settings with Centralized Communication and Coordination</title>
<link href="https://hdl.handle.net/1721.1/163009" rel="alternate"/>
<author>
<name>Zhang, Jackson</name>
</author>
<id>https://hdl.handle.net/1721.1/163009</id>
<updated>2025-10-07T04:14:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Contextual Knowledge Sharing in Multi-Agent Long-Horizon Planning Settings with Centralized Communication and Coordination
Zhang, Jackson
Embodied multi-agent systems, comprising autonomous agents interacting within shared environments, enable intelligent, collaborative solutions for tasks requiring real-time coordination and adaptability. While applications span diverse fields, from disaster response to healthcare, planning in these systems remains challenging due to partial egocentric observations and limited environmental awareness. This work addresses these challenges by introducing a software module that synthesizes a shared world state from individual agent views, maintaining spatial information about objects and agents to support more effective joint action planning. Integrated into the LLAMAR framework, this module aims to improve planning accuracy and efficiency. The proposed approach is evaluated using metrics such as success rate, transport efficiency, and coverage performance. Our evaluation demonstrates that utilizing a perfect (oracle-generated) world state significantly enhances planning effectiveness. Notably, under these ideal conditions, the success rate of the LLAMAR planner improved by over 16%. These findings underscore the critical impact of accurate world state representation on multi-agent performance and highlight the potential for significant advancements in collaborative task execution in dynamic, unstructured settings.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parametric Study of Novel Passive Thermal Control&#13;
Technology for Spacecraft</title>
<link href="https://hdl.handle.net/1721.1/163007" rel="alternate"/>
<author>
<name>Shafer, Emma</name>
</author>
<id>https://hdl.handle.net/1721.1/163007</id>
<updated>2025-10-07T04:14:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Parametric Study of Novel Passive Thermal Control&#13;
Technology for Spacecraft
Shafer, Emma
Thermochromic variable emissivity materials (VEMs) are a relatively new passive thermal control technology for spacecraft radiators. A VEM passively changes its emissivity with temperature, exhibiting low emissivity at low temperatures and high emissivity at high temperatures. This property allows a spacecraft to reduce heater power and moderate temperature swings without adding active thermal control systems, giving VEM technology the potential to become more widely used in spacecraft radiators. Because thermochromic VEMs are still relatively new, no study has yet parametrically swept a set of candidate VEM profiles against common spacecraft parameters to determine the best-case uses of particular profiles. This thesis models a single-node spacecraft in an equatorial low Earth orbit in Thermal Desktop, varying the spacecraft’s shape, surface area, and thermal mass. The spacecraft’s temperature history in orbit, in particular its orbit minimum, maximum, and average temperatures and its orbit temperature range, is recorded, and twelve VEM profiles are compared against default black and white paint materials to see how each profile changes these four metrics. The desired outcome is for the VEMs to reduce the temperature range the most relative to black or white paint while keeping temperatures within typical requirements for spacecraft components. It is found that, compared to white paint, VEMs always increase the orbit minimum, maximum, and average temperatures and the temperature range across all nodal thermal masses and surface areas studied. For spacecraft with smaller surface areas, white paint alone drives temperatures too low for typical spacecraft components, so even though white paint always yields a smaller temperature range than VEMs, VEMs are recommended over white paint for smaller-surface-area spacecraft because they are better at keeping components within typical temperature requirements. When VEMs are compared to black paint, black paint has lower minimum temperatures and higher maximum temperatures than all VEMs at larger surface areas; at smaller surface areas, the black-painted node typically has minimum and maximum temperatures in the middle of the VEMs’ minimum and maximum temperatures. For all surface areas and thermal masses, the average temperature of the black node typically falls in the middle of the VEM nodes’ average temperatures, and relative to the VEMs’ average temperatures it decreases as node height increases. For all node heights and thermal masses, VEMs always decrease the temperature range compared to black paint. VEMs are thus better than black paint at keeping spacecraft components within typical temperature requirements, and which VEM to choose depends on the specific spacecraft component and its temperature requirements. The biggest difference among individual VEM profiles is in orbit average temperature: the lower the VEM’s transition temperature, the lower the average temperature.
Only at the largest nodal surface areas and smallest nodal heights is there a significant difference in temperature range between individual VEM profiles; typically, the lower the transition temperature of the VEM, the smaller its temperature range. Future work includes expanding the parameters studied and examining different orbits, spacecraft shapes, and VEM profiles.
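For intuition, the single-node model described above reduces to one energy balance. A minimal sketch of that balance follows, with a linearly ramping emissivity standing in for a VEM profile; all values, and the ramp itself, are hypothetical illustrations, not the Thermal Desktop model or any profile from the thesis.

# Toy single-node radiator with a thermochromic (variable-emissivity) coating.
# Illustrative only: every number and the linear emissivity ramp are invented.
import numpy as np

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
AREA = 1.0            # radiating surface area, m^2 (hypothetical)
MASS_CP = 5.0e4       # nodal thermal mass m*cp, J/K (hypothetical)
Q_INT = 150.0         # internal dissipation, W (hypothetical)

def emissivity(T, T_lo=270.0, T_hi=290.0, eps_cold=0.2, eps_hot=0.8):
    """Emissivity ramps linearly from eps_cold to eps_hot across the
    transition band; np.interp clamps outside [T_lo, T_hi]."""
    return np.interp(T, [T_lo, T_hi], [eps_cold, eps_hot])

def step(T, q_env, dt=10.0):
    """Explicit-Euler update of m*cp*dT/dt = q_env + Q_INT - eps(T)*sigma*A*T^4."""
    q_rad = emissivity(T) * SIGMA * AREA * T**4
    return T + dt * (q_env + Q_INT - q_rad) / MASS_CP

# One 90-minute orbit: crude sunlit/eclipse square wave (hypothetical loads).
T = 280.0
for i in range(540):                          # 540 steps of 10 s = 90 min
    q_env = 400.0 if i // 320 == 0 else 50.0  # sunlit first ~53 min, then eclipse
    T = step(T, q_env)
print(f"temperature at end of orbit: {T:.1f} K")

Sweeping the hypothetical transition band and thermal mass in such a loop is the single-node analogue of the parametric study the thesis performs.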
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Path Planning for Autonomous Sailing Vessels: Developing Robust and Efficient Survey Strategies</title>
<link href="https://hdl.handle.net/1721.1/163006" rel="alternate"/>
<author>
<name>Ahlers, Matthew C.</name>
</author>
<id>https://hdl.handle.net/1721.1/163006</id>
<updated>2026-01-05T16:06:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Path Planning for Autonomous Sailing Vessels: Developing Robust and Efficient Survey Strategies
Ahlers, Matthew C.
Autonomous sailing vessels offer a promising solution for maritime research, providing low-maintenance, sustainable platforms for environmental monitoring and data collection. These vessels run on wind power, eliminating the need for conventional fuel and enabling long-duration operations with minimal environmental impact. Their applications range from oceanographic studies to maritime surveillance, where persistent, autonomous data collection is essential. This thesis explores the challenges and methodologies associated with path planning for autonomous sailing, particularly in the context of survey operations. Unlike traditional motorized vessels, sailing autonomy must account for wind variability, sail dynamics, and limited maneuverability, requiring specialized path-planning techniques to ensure efficient and reliable navigation. The research investigates various sail and hull configurations, the dynamics of wind-powered propulsion, and the application of autonomy frameworks such as MOOS-IvP. A key focus is optimizing continuous coverage path planning (CPP) to maximize efficiency while adapting to environmental constraints. By integrating real-time wind data and vessel performance characteristics, the study refines survey strategies that enhance mission effectiveness. Different survey strategies are implemented and evaluated in both simulation and real-world testing on the Charles River. These trials demonstrate the feasibility of fixed-path decomposition approaches and adaptive moving-horizon control methods, and evaluate how wind conditions affect autonomous sailing performance. The results contribute to the development of robust and efficient survey strategies that improve the autonomy and reliability of wind-powered marine vessels.
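As a rough illustration of a fixed-path decomposition (not the thesis's MOOS-IvP behaviors; geometry and parameters are invented), a lawnmower survey can lay transects across the prevailing wind so the boat reaches rather than sailing dead upwind or downwind:

# Hypothetical lawnmower coverage sketch for a rectangular survey region.
# Transects are generated in a wind-aligned frame, then rotated to world
# coordinates so they stay perpendicular to the given wind direction.
import math

def lawnmower_waypoints(width, height, spacing, wind_deg):
    n_rows = int(height // spacing) + 1
    pts = []
    for i in range(n_rows):
        y = i * spacing
        # Alternate row direction to form the back-and-forth pattern.
        row = [(0.0, y), (width, y)] if i % 2 == 0 else [(width, y), (0.0, y)]
        pts.extend(row)
    a = math.radians(wind_deg)
    return [(x * math.cos(a) - y * math.sin(a),
             x * math.sin(a) + y * math.cos(a)) for x, y in pts]

for wp in lawnmower_waypoints(200.0, 100.0, 25.0, wind_deg=30.0):
    print(f"{wp[0]:8.1f} {wp[1]:8.1f}")

An adaptive moving-horizon variant would instead recompute the next transect from live wind estimates rather than fixing the whole path up front.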
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reliable and Generalizable Real-World Planning with LLM-based Formalized Programming</title>
<link href="https://hdl.handle.net/1721.1/163004" rel="alternate"/>
<author>
<name>Hao, Yilun</name>
</author>
<id>https://hdl.handle.net/1721.1/163004</id>
<updated>2025-10-07T04:13:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Reliable and Generalizable Real-World Planning with LLM-based Formalized Programming
Hao, Yilun
While large language models (LLMs) have recently demonstrated strong potential in solving planning problems, LLMs as zero-shot planners are still not capable of directly generating valid plans for complex planning problems such as multi-constraint or long-horizon tasks. This motivates the need for a robust and reliable planning system for complex real-world planning problems. Furthermore, many frameworks aiming to solve complex planning problems rely on task-specific preparatory efforts, such as task-specific in-context examples and pre-defined critics or verifiers, which limits their cross-task generalization capability. This motivates the need to extend robust and reliable planning systems with strong generalization capability. In this thesis, we first develop an LLM-based planning framework that formalizes and solves complex multi-constraint planning problems as constrained satisfiability problems and can reliably identify the unsatisfiable cores of unsatisfiable requirements, provide failure reasons, and offer personalized modification suggestions. Then, we generalize the paradigm by proposing a general-purpose framework that leverages LLMs to capture key information from planning problems and formally formulate and solve them as optimization problems from scratch, with no task-specific examples needed. Comprehensive experimental results show that our frameworks significantly outperform the baselines and perform strongly across tasks and LLMs.
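For intuition on the formalize-and-solve step, a toy example with an off-the-shelf SMT solver shows how named constraints yield an unsatisfiable core that can ground failure explanations. The travel constraints here are invented, and this is a sketch of the general idea, not the thesis's pipeline:

# Toy planning-as-satisfiability with unsat-core extraction, using the
# Z3 SMT solver (pip install z3-solver). Constraints are invented.
from z3 import Int, Solver, unsat

days_city_a, days_city_b = Int("days_city_a"), Int("days_city_b")
s = Solver()
# Track each requirement under a name so it can appear in the unsat core.
s.assert_and_track(days_city_a + days_city_b == 7, "trip_is_one_week")
s.assert_and_track(days_city_a == 5, "want_5_days_in_a")
s.assert_and_track(days_city_b == 4, "want_4_days_in_b")

if s.check() == unsat:
    # The core names a conflicting subset of requirements, which is what
    # lets a system explain the failure and suggest targeted edits.
    print("no valid plan; conflicting requirements:", s.unsat_core())
else:
    print("plan:", s.model())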
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Concentration-Dependent Thermodynamics and Kinetics in Lithium-Metal Battery Electrolytes: Implications for Coulombic Efficiency</title>
<link href="https://hdl.handle.net/1721.1/163002" rel="alternate"/>
<author>
<name>Plaza Rivera, Christian O.</name>
</author>
<id>https://hdl.handle.net/1721.1/163002</id>
<updated>2025-10-07T04:13:04Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Concentration-Dependent Thermodynamics and Kinetics in Lithium-Metal Battery Electrolytes: Implications for Coulombic Efficiency
Plaza Rivera, Christian O.
Lithium (Li)-metal batteries (LMBs) present a promising avenue for high-energy applications. However, their practical adoption is constrained by challenges such as dendrite formation and unstable interphases. This study investigates the intricate interplay between electrolyte-dependent thermodynamics, kinetics, and transport properties in LMBs, focusing on the concentration effects in fluoroethylene carbonate (FEC) and 1,2-dimethoxyethane-based electrolytes containing lithium bis(fluorosulfonyl)imide. Due to FEC’s unique properties, these electrolytes facilitate significant upshifts in the Li redox potential and contribute to stable interphases and voltage profiles. Our findings reveal that the redox potential is primarily governed by the solvent’s electron-donating ability, reflecting underlying solvation dynamics, while the electrolyte permittivity influences reaction entropy trends. The results show entropy changes from increased molecular disorder at moderate concentrations to reduced entropy in highly concentrated regimes, driven by the formation of ion–solvent complexes. Kinetic analyses demonstrate a volcano-shaped dependence of exchange current density on concentration, centered at 2 M. Two prevailing perspectives propose that either kinetic–transport interplay or thermodynamic properties govern Coulombic efficiency (CE). However, separating these contributions is complex, since both higher exchange current density and upshifts in the Li redox potential enhance CE. Furthermore, CE strongly aligns with the combined effects of kinetics, thermodynamics, and transport, emphasizing the need for a holistic electrolyte design approach. Optimizing these three factors makes it possible to stabilize the interphase, promote uniform Li deposition, and elevate the overall safety and performance of next-generation LMBs.
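For context on why the exchange current density matters (this is the standard Butler–Volmer picture, not a result specific to this thesis): the plating/stripping current at overpotential \(\eta\) scales with the exchange current density \(i_0\),
\[
i = i_0\!\left[\exp\!\left(\frac{\alpha_a F \eta}{RT}\right) - \exp\!\left(-\frac{\alpha_c F \eta}{RT}\right)\right],
\]
with transfer coefficients \(\alpha_a, \alpha_c\), Faraday constant \(F\), gas constant \(R\), and temperature \(T\). A larger \(i_0\) near the 2 M volcano peak lets a given current be drawn at a smaller overpotential, which favors uniform Li deposition.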
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>1500W High Voltage DC-DC Converter for Electroaerodynamic Aircraft Applications</title>
<link href="https://hdl.handle.net/1721.1/163001" rel="alternate"/>
<author>
<name>Shevgaonkar, Mihir</name>
</author>
<id>https://hdl.handle.net/1721.1/163001</id>
<updated>2025-10-07T04:13:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">1500W High Voltage DC-DC Converter for Electroaerodynamic Aircraft Applications
Shevgaonkar, Mihir
Electroaerodynamic (EAD) propulsion is a novel form of propulsion that is nearly silent and has no moving parts. The first functional untethered heavier-than-air EAD aircraft had an endurance of 90 seconds and could only fly in a straight line. To enable a practical fixed-wing EAD aircraft that can fly outdoors with a payload for an extended period of time, improved power conversion technology is necessary. Prior work specifies a practical EAD aircraft as one with an endurance of 10 minutes, a payload capacity of 200 g, and full controllability. This work explores methods of increasing the specific power of power converters for EAD aircraft from 1.15 kilowatts per kilogram to over 2.0 kilowatts per kilogram. Such an increase can be achieved by utilizing magnetics integration and thermal management techniques, as well as adjustments to the operating point of the power converter. The power converter for the first-generation EAD aircraft had an input voltage of 200 V, an output voltage of 40 kV, an output power of 600 W, a specific power of 1.15 kilowatts per kilogram, and an efficiency of 85 percent. In this work, a power converter with an input voltage of 200 V, an output voltage of 20 kV, an output power of 1476 W, a specific power of 2.7 kilowatts per kilogram, and an efficiency of 96 percent was demonstrated for a 40-second duration. At the end of the test, device temperatures were still increasing, so it has not been proven that the converter can operate in thermal steady state as required for a 10-minute flight. Future work would involve modifying the test setup to allow adequate ventilation of the ambient air around the converter, as well as adding adequate thermal management to the converter to enable operation in thermal steady state.
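As a quick sanity check of the reported figures, specific power is output power over converter mass, so the abstract's numbers imply
\[
P_{\mathrm{sp}} = \frac{P_{\mathrm{out}}}{m}
\quad\Longrightarrow\quad
m \approx \frac{1476~\mathrm{W}}{2.7~\mathrm{kW/kg}} \approx 0.55~\mathrm{kg},
\]
versus roughly \(600~\mathrm{W} / (1.15~\mathrm{kW/kg}) \approx 0.52~\mathrm{kg}\) for the first-generation converter: about 2.5 times the output power at nearly the same converter mass.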
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility Analysis and Fuel Burn Benefits of Relaxing Constraints in High Altitude Cruise</title>
<link href="https://hdl.handle.net/1721.1/163000" rel="alternate"/>
<author>
<name>Cezairli, Mina</name>
</author>
<id>https://hdl.handle.net/1721.1/163000</id>
<updated>2025-10-07T04:12:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Feasibility Analysis and Fuel Burn Benefits of Relaxing Constraints in High Altitude Cruise
Cezairli, Mina
Operational interventions, such as enabling more fuel-efficient trajectories, are desirable for mitigating the environmental impact of air travel because they can be implemented relatively quickly. In particular, the vertical inefficiency arising from the altitude stratification in the airspace can be mitigated by relaxing vertical constraints. The feasibility of vertical flexibility is evaluated by quantifying the rate of close encounters and the frequency of alerts that would be needed to prevent them. Substantial diurnal variability in the number of close encounters was found in the airspace, with lower rates of events during the nighttime period. Furthermore, regional differences among Air Route Traffic Control Centers were observed in the number of close encounters. The frequency of controller intervention events that would have to occur was evaluated at 25 NM and 50 NM alerting distance levels, and it was found that, given sufficient technological capabilities for alerting at the 25 NM reaction distance, most centers would have fewer than 10 alerts per hour during the nighttime period. Boston, Miami, and Seattle appeared especially promising, with approximately one alert per hour for each region. Finally, the potential fuel benefit from enabling vertically optimal trajectories was estimated to be up to 100,000 gallons of fuel savings per month in the case of a CONUS-wide nighttime implementation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Risk Management in Air Traffic Applications: Data-Driven Modeling, Prediction, and Generation of Realistic Weather Disruptions and Other Unfavorable Conditions</title>
<link href="https://hdl.handle.net/1721.1/162999" rel="alternate"/>
<author>
<name>Zhang, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/162999</id>
<updated>2025-10-07T04:13:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Risk Management in Air Traffic Applications: Data-Driven Modeling, Prediction, and Generation of Realistic Weather Disruptions and Other Unfavorable Conditions
Zhang, Joseph
Understanding the interaction between weather and disruptions in complex air transportation networks is important to the design and evaluation of preemptive measures and responses taken by air traffic managers. However, disruptive weather events occur relatively rarely, so data on them is limited compared to the abundant data available for nominal operations. Additionally, in large-scale systems with many known and unknown confounding factors, it can be difficult to identify the relevance of existing data to different underlying distributions of interest. Furthermore, existing work generally follows a frequentist paradigm in predicting disruptions from weather, and does not easily lend itself to inferring the causes of disruptions, which is important both for building models and for using them to make predictions and generate test cases that stress-test proposed design decisions. In this thesis, we develop a hierarchical Bayesian model of air traffic network operations and investigate methods for learning these models in data-constrained settings by extending existing work on retrospectively analyzing failures. We also include a guiding case study of LaGuardia Airport, in which a generative model of the interaction between weather conditions and airport-level parameters within a single airport is developed, trained on unlabeled historical data, and evaluated by simulating disruptions on historical schedules.
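A toy generative model conveys the hierarchical structure (latent weather regime, airport capacity, spilled demand). Every distribution and rate below is invented for illustration; the thesis fits such a model to historical data rather than hard-coding it:

# Hypothetical hierarchical generative model: weather regime drives hourly
# capacity, and demand exceeding capacity spills over as delay.
import numpy as np

rng = np.random.default_rng(0)

def simulate_day(n_hours=18):
    # Latent weather regime for the day: 0 = nominal, 1 = convective.
    regime = rng.choice([0, 1], p=[0.8, 0.2])
    cap_rate = [38, 22][regime]               # made-up hourly arrival capacities
    capacity = rng.poisson(cap_rate, size=n_hours)
    demand = rng.poisson(34, size=n_hours)    # made-up scheduled demand
    backlog, delayed = 0, 0
    for d, c in zip(demand, capacity):
        backlog = max(0, backlog + d - c)     # unserved flights spill over
        delayed += backlog
    return regime, delayed

days = [simulate_day() for _ in range(1000)]
for r in (0, 1):
    vals = [d for reg, d in days if reg == r]
    print(f"regime {r}: mean delay-hours {np.mean(vals):.1f} over {len(vals)} days")

Bayesian inference would run this generative direction in reverse: given observed delays, infer the posterior over the latent regime and parameters.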
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interposing the Syscall Boundary: Transparent Python Execution in SigmaOS</title>
<link href="https://hdl.handle.net/1721.1/162998" rel="alternate"/>
<author>
<name>Wu, Ivy</name>
</author>
<id>https://hdl.handle.net/1721.1/162998</id>
<updated>2025-10-07T04:13:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Interposing the Syscall Boundary: Transparent Python Execution in SigmaOS
Wu, Ivy
σOS aims to provide both serverless and stateful support to cloud applications while maintaining strong isolation, security, and efficient startup times and scheduling among multiple users. While σOS and its container startup times have been successfully benchmarked for tasks written, compiled, and statically linked in Golang and Rust, it currently lacks support for other languages, including interpreted ones like Python. To bridge this gap, this paper presents the first integration of an interpreted language into σOS, enabling native Python support without compromising the system’s core principles. Our design, σPy, achieves this through three key ideas: (1) system call interposition via LD_PRELOAD to enable just-in-time dependency management, where Python libraries are fetched on demand from tenant-specified AWS S3 buckets, avoiding overhead during container initialization; (2) a multi-layered mount namespace that spans the local machine, a per-realm Docker container, and a per-proc σcontainer, enabling efficient dependency caching at per-tenant granularity; and (3) a hybrid C++, C, and Python API layer that bridges σOS’s Protobuf-based RPC system with Python’s dynamic types. Preliminary benchmarks demonstrate that σPy achieves performance comparable to that of compiled languages like Golang when interacting with the σOS API, with only 0.2 to 0.3 milliseconds of additional overhead on all tested API calls, validating that Python programs can run effectively on the σOS architecture.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating LLM Runtime Latency</title>
<link href="https://hdl.handle.net/1721.1/162997" rel="alternate"/>
<author>
<name>Wang, Sarah Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/162997</id>
<updated>2025-10-07T04:14:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Simulating LLM Runtime Latency
Wang, Sarah Y.
Large Language Models (LLMs) are expensive to run and can incur high latencies. Each LLM application has its own cost and latency targets. For example, AI voice assistants operate under low latency objectives, while large document batch processing jobs are typically cost-sensitive. However, navigating these trade-offs is not trivial, as LLM latency is highly task-specific and depends on factors such as the offered query load, the hardware configurations, request properties, and various model characteristics. To support the user in configuring their deployment according to their application needs, we introduce vLLMSim, an accurate simulator that estimates the latency of a given workload on different hardware configurations. vLLMSim advances two key avenues toward latency-aligned LLM deployments. First, the simulated latency metrics inform the user’s model and hardware choice, so they can use a configuration that is ideal for their workload. Second, our simulator enables researchers to quickly test latency-improving ideas, bypassing the need for time-consuming implementations before validating their effectiveness. In fact, vLLMSim is already used in two research projects with the goal of reducing latency and cost of LLM inference. In this thesis, we show how vLLMSim’s design allows it to accurately support the use cases above, while providing highly accurate runtime predictions. To support hardware exploration without GPU access, vLLMSim provides precomputed performance profiles that are sufficient to accurately simulate the user’s workload. The simulator code can be found here, and the instrumented vLLM code for creating profiles can be found here.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for Latent Space Interpretation via In-the-loop Fine-Tuning</title>
<link href="https://hdl.handle.net/1721.1/162996" rel="alternate"/>
<author>
<name>Wen, Collin</name>
</author>
<id>https://hdl.handle.net/1721.1/162996</id>
<updated>2025-10-07T04:13:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Methods for Latent Space Interpretation via In-the-loop Fine-Tuning
Wen, Collin
With language models increasing exponentially in scale, being able to interpret and justify model outputs is an area of increasing interest. Although enhancing the performance of these models in chat mediums has been the focus of interaction with AI, the visualization of model latent space offers a novel modality of interpreting information. Embedding models have traditionally served as a means of retrieving information relevant to a topic by converting text into a high-dimensional vector. The high-dimensional vector spaces created via embedding offer a way to encode information that captures similarities and differences in ideas, and visualizing these nuances in terms of meaningful dimensions can offer novel insights into the specific qualities that make two items similar. Leveraging fine-tuning mechanisms, dimension-reduction algorithms, and Sparse Autoencoders (SAEs), this work surveys state-of-the-art techniques to visualize the latent space in highly interpretable dimensions. ConceptAxes, a framework derived from these techniques, produces axes that capture high-level ideas ingrained in embedding models. Such highly interpretable axes allow for better justification of the latent space and its clusters. This method of increasing embedding transparency proves valuable in various domains: (1) AI-enhanced creative exploration can be more guided and customized for a particular experience, and (2) high-level insights can be made more intuitive across vast text datasets.
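A much-simplified stand-in for a concept axis: average the embeddings of two contrasting example sets, subtract, and project items onto the difference. The embeddings below are random placeholders, and the real ConceptAxes pipeline combines fine-tuning, dimension reduction, and SAEs rather than this bare construction:

# Simplified concept-axis sketch with placeholder embeddings; a real use
# would obtain vectors from an embedding model.
import numpy as np

rng = np.random.default_rng(1)
dim = 64

# Placeholder "embeddings" for examples of two opposing concepts.
formal_examples = rng.normal(0.5, 1.0, size=(20, dim))
casual_examples = rng.normal(-0.5, 1.0, size=(20, dim))

axis = formal_examples.mean(axis=0) - casual_examples.mean(axis=0)
axis /= np.linalg.norm(axis)

def score_along_axis(embedding):
    """Signed coordinate of an item along the interpretable axis."""
    return float(embedding @ axis)

item = rng.normal(0.4, 1.0, size=dim)   # hypothetical new item embedding
print(f"formality coordinate: {score_along_axis(item):+.2f}")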
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Commanding, Telemetry, and Software Strategy for&#13;
CubeSat Laser Infrared CrosslinK (CLICK) Mission</title>
<link href="https://hdl.handle.net/1721.1/162995" rel="alternate"/>
<author>
<name>Whitmore, Garrett</name>
</author>
<id>https://hdl.handle.net/1721.1/162995</id>
<updated>2025-10-07T04:13:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Commanding, Telemetry, and Software Strategy for&#13;
CubeSat Laser Infrared CrosslinK (CLICK) Mission
Whitmore, Garrett
This work outlines the software-related requirements necessary for successful operations of the NASA-sponsored Cubesat Laser Infrared CrosslinK (CLICK) B/C mission [1] [2]. This twin-cubesat mission will demonstrate peer-to-peer laser-communication capabilities novel at this small terminal scale. Optical laser communication terminals can have lower Size, Weight, and Power (SWaP) compared with traditional radio communication, as well as fewer licensing regulations and improved link security. CLICK-B/C follows from CLICK-A, a risk-reduction mission that successfully performed laser downlink with a ground station at MIT [3]. In addition to downlink, B/C will perform crosslink experiments at a data transmission rate over 20 Mbps at ranges between 20 and 580 km in Low-Earth Orbit (LEO). This thesis focuses on the software related to the function of the satellite payload, in particular the improvements and additions made to the operating system, the software systems that were ported over from CLICK-A, the integration and testing of these subsystems, and the analyses done to prepare for in-flight operations before launch. An overview of the MIT &amp; UF payload hardware and electronics is given before detailing interactions with components as necessary. A deep dive into the payload software libraries, internal and external communication channels, and operating system build details is given. A description of functional testing and its results is laid out, along with a template crosslink experiment script and further specifications for mission-related analyses and pre-launch preparations. This work on software upgrades, verification, and examination is necessary for CLICK-B/C to reach its stated mission goals, here on Earth and in orbit.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Foundational Verification of Running-Time Bounds for&#13;
Interactive Programs</title>
<link href="https://hdl.handle.net/1721.1/162994" rel="alternate"/>
<author>
<name>Tockman, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/162994</id>
<updated>2025-10-07T04:14:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Foundational Verification of Running-Time Bounds for&#13;
Interactive Programs
Tockman, Andrew
The field of formal methods has a rich history of practical application in verifying the correctness of software. Existing verification tooling operates at a wide range of rigor, from proving relatively weak properties via traditional static analysis to powerful theorem provers that can express very precise specifications. It is sometimes desirable to prove properties of programs that reference not just semantic behavior but also other metaproperties of the program’s execution, such as runtime or I/O histories. There is also a wide variety of existing tooling for proving bounds on program runtime. However, there is no prior work on a maximally rigorous verification system that can prove predicates involving all of semantic behavior, runtime, and I/O. Our contribution is exactly that: we extend the existing Bedrock2 framework, which implements a C-like systems language within a powerful proof engine together with a verified compiler and can express arbitrary proof conditions involving behavior and I/O, and we augment it with the capacity to reason about runtime as well. As a capstone proof of concept, we apply the new metrics machinery to an IoT lightbulb controller (already verified with respect to the previous framework) and produce a new specification with time bounds based on the arrival of network packets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graph Neural Networks for City Policy Recommendations&#13;
as a Link Prediction Task</title>
<link href="https://hdl.handle.net/1721.1/162992" rel="alternate"/>
<author>
<name>Rozario, Consecrata Maria</name>
</author>
<id>https://hdl.handle.net/1721.1/162992</id>
<updated>2025-10-07T04:13:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Graph Neural Networks for City Policy Recommendations&#13;
as a Link Prediction Task
Rozario, Consecrata Maria
Graph Neural Networks (GNNs) have become a widely utilized tool in recommender systems in various contexts. While recommendation tasks can be approached using a multitude of data structures and types, graph-structured data is particularly well-suited for this domain, as graphs naturally capture a variety of relationships and interactions between entities. By leveraging graph representation learning, we can effectively encode these complex dependencies, enabling robust and context-aware recommendations. We use this methodology in the domain of policy recommendations for urban centers. To recommend policies, we learn the complex local and global relationships between cities, their environmental features, and currently implemented policies. We construct a graph structure relating cities, implemented policies, and city features, and formulate the policy recommendation task as a GNN link prediction problem, demonstrating its potential to scale data-driven urban governance.
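The essence of the link prediction formulation is scoring a candidate city-policy edge from node embeddings. A minimal sketch follows with placeholder embeddings and invented city and policy names; in the thesis, a trained GNN would produce the embeddings via message passing over the constructed graph:

# Dot-product link prediction sketch over placeholder node embeddings.
import numpy as np

rng = np.random.default_rng(2)
dim = 32
cities = {name: rng.normal(size=dim) for name in ["boston", "lagos", "oslo"]}
policies = {name: rng.normal(size=dim) for name in
            ["bike_lanes", "congestion_pricing", "district_heating"]}

def edge_score(city_vec, policy_vec):
    """Dot-product decoder squashed to (0, 1): predicted probability
    that the city-policy link should exist."""
    return 1.0 / (1.0 + np.exp(-(city_vec @ policy_vec)))

for city, cvec in cities.items():
    ranked = sorted(policies, key=lambda p: edge_score(cvec, policies[p]),
                    reverse=True)
    print(city, "top recommendation:", ranked[0])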
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Detection of Landmark Acoustic Cues in&#13;
Human Speech</title>
<link href="https://hdl.handle.net/1721.1/162991" rel="alternate"/>
<author>
<name>Park, Janette H.</name>
</author>
<id>https://hdl.handle.net/1721.1/162991</id>
<updated>2025-10-07T04:13:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automatic Detection of Landmark Acoustic Cues in&#13;
Human Speech
Park, Janette H.
This study presents a framework for the automatic detection of the eight landmark acoustic cues in human speech. Landmarks are key articulatory events, produced as a result of minimal vocal tract constriction (e.g., vowels and glides) or closures and releases in the oral region (e.g., nasal, fricative, and stop consonants). A complete landmark detection system is a key step towards an overarching speech analysis system that relies on lexical acoustic cues, as landmarks guide the identification of other acoustic cues in speech. In the proposed framework, the acoustic properties of each of the eight landmark cues are modeled by extracting speech-related measurements and training Gaussian Mixture Models (GMMs). To remove the effects of speaker variability and different recording environments, methods for normalizing speech-related measurements are proposed and evaluated. For a new speech signal, the normalized speech-related measurements are extracted at each time frame and evaluated against the eight trained GMMs to compute the likelihood of each landmark. Using Bayes’ Theorem, the posterior probabilities are calculated to determine the most probable landmark (or absence thereof) at each time frame. The system’s performance is evaluated by comparing the detected landmarks to the manually labeled ground truth landmark annotations.
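In equations, the frame-level decision described above is standard Bayes classification over the landmark classes: with normalized measurement vector \(x_t\) at frame \(t\),
\[
P(L_k \mid x_t) = \frac{p(x_t \mid L_k)\,P(L_k)}{\sum_j p(x_t \mid L_j)\,P(L_j)},
\qquad
\hat{L}(t) = \arg\max_k P(L_k \mid x_t),
\]
where each class-conditional likelihood \(p(x_t \mid L_k)\) comes from the GMM trained for landmark \(k\), and one class can represent the absence of any landmark.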
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single-Cell Language Model for Transcriptomics &amp; Cell Type Annotation</title>
<link href="https://hdl.handle.net/1721.1/162990" rel="alternate"/>
<author>
<name>Lin, Vincent</name>
</author>
<id>https://hdl.handle.net/1721.1/162990</id>
<updated>2025-10-07T04:13:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Single-Cell Language Model for Transcriptomics &amp; Cell Type Annotation
Lin, Vincent
As single-cell transcriptomics datasets continue to grow in size and biological complexity, current models for cell type annotation remain limited in their generalizability and are often evaluated on only a small fraction of the standardized cell types defined in modern ontologies. Current state-of-the-art models for transcriptomic representation demonstrate that deep learning models can extract rich features on single-cell data but are evaluated on very few cell types and perform poorly on broader datasets. This work introduces a multimodal model architecture that integrates large language models (LLMs) with gene expression encoders to address this scalability gap in cell type annotation. Inspired by vision-language frameworks, our architecture combines a pretrained scRNA encoder with a Perceiver Resampler that maps gene expression profiles into the latent space of a large language model. We construct structured, ontology-grounded datasets of up to 197 cell types and evaluate our model's performance using instruction fine-tuning. Our experiments analyze the impact of integrating language modeling components with scRNA encoders and their benefit on cell type annotation performance for large, diverse datasets. Our results show that while a scRNA encoder may be sufficient for small datasets, our single-cell model leveraging LLMs consistently outperforms the scRNA encoder baseline on larger datasets, with a widening gap in classification performance as data complexity increases, demonstrating the scalability and improved generalizability of our multimodal architecture. We also provide further analysis of the tradeoffs associated with using the natural language domain for biological analysis.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inference Time Search for Protein Structure Prediction</title>
<link href="https://hdl.handle.net/1721.1/162989" rel="alternate"/>
<author>
<name>Qi, Richard</name>
</author>
<id>https://hdl.handle.net/1721.1/162989</id>
<updated>2025-10-07T04:13:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Inference Time Search for Protein Structure Prediction
Qi, Richard
Scaling inference-time compute for deep learning models has led to superhuman performance in games and enhanced reasoning capabilities for language models. However, similar gains have not yet been made in the field of biomolecular structure prediction. We introduce a new paradigm for inference-time search by adding architectural components and a finetuning procedure to state-of-the-art structure prediction models that give rise to a discrete latent space. We implement algorithms for searching and sampling in this discrete latent space and conduct experiments on a small model, demonstrating an increase in oracle and top-1-selected accuracy for predicted protein-protein complex structures.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing a Localized Lower Body Negative Pressure Garment for Long-Duration Spaceflight</title>
<link href="https://hdl.handle.net/1721.1/162988" rel="alternate"/>
<author>
<name>Chu, Kaitlyn A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162988</id>
<updated>2025-10-07T04:13:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Designing a Localized Lower Body Negative Pressure Garment for Long-Duration Spaceflight
Chu, Kaitlyn A.
Lower Body Negative Pressure (LBNP) has long been explored as a countermeasure to the physiological deconditioning and orthostatic intolerance associated with prolonged microgravity exposure. Traditional LBNP systems, however, are large, stationary devices that require astronauts to remain immobile during use, limiting their integration into daily spaceflight routines. Although more mobile LBNP solutions have emerged, they remain cumbersome and uncomfortable, ultimately still restricting multitasking and reducing operational feasibility. This study introduces the Soft Kinetics INterface (S.K.I.N.), a flexible, wearable structure designed to support the application of localized LBNP. The goal was to evaluate whether targeted negative pressure applied through the S.K.I.N. could replicate the fluid shift effects of a traditional LBNP chamber while improving comfort, mobility, and time-efficiency. The human thigh was chosen as the focus of this technology demonstration due to its known responsiveness to LBNP and its suitability for small-scale implementation. The development of the S.K.I.N. began with finite element modeling (FEM) to identify optimal material properties and structural geometry. Iterative physical prototyping resulted in a sinusoidal silicone waveform design, selected for its mechanical stability and user comfort. The final prototype was then evaluated in three experimental phases: (1) mechanical testing using pressure-sensitive film to assess structural integrity under vacuum, (2) an ex-vivo pig leg study to validate experimental protocols and assess the S.K.I.N.’s ability to induce fluid shifts, and (3) a human study (n=10) comparing fluid shifts between the S.K.I.N. and a scaled-down version of the traditional LBNP chamber. On average, results from the human study showed that the S.K.I.N. successfully induced localized fluid shifts similar to those of the chamber. However, response magnitude varied considerably across participants. Most of the observed effect was driven by female participants, who exhibited more pronounced fluid shifts, while most male participants showed minimal or no measurable response. FEM simulations supported this finding, suggesting that higher fat-to-muscle ratios — more common in women — may enhance tissue deformability and volume displacement, thereby facilitating greater fluid shifts under negative pressure. Although these differences limit generalizability, they also highlight the potential for the S.K.I.N. to serve as a more targeted countermeasure for specific physiologies or user groups. Although the current S.K.I.N. design’s limited surface area constrains its overall effect, the concept shows promise. The ability to deliver targeted fluid shifts in a more mobile, comfortable format could enable integration into dynamic operational settings. Future work should focus on expanding the system to cover larger areas, such as a whole-pants version, and incorporating a portable vacuum source for mobility in both spaceflight and terrestrial applications. Larger, more diverse participant cohorts will also be necessary to assess long-term usability, efficacy, and individual variability in response.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mantis: A Screen Magnification Tool for Diagram&#13;
Traversal</title>
<link href="https://hdl.handle.net/1721.1/162987" rel="alternate"/>
<author>
<name>Patterson, Lydia J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162987</id>
<updated>2025-10-07T04:13:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mantis: A Screen Magnification Tool for Diagram&#13;
Traversal
Patterson, Lydia J.
Complex diagrams and charts can be difficult for people who use screen magnification to navigate. A sense of spatial context and of the diagram’s overall structure is oftentimes lost, as magnifiers can only magnify a fraction of the screen at any given time. So, while sighted users have both clarity and full context simultaneously, screen magnifier users often have to choose or split their attention between the two. Existing screen magnifiers are content-agnostic, so the current way of navigating visualizations is freeform and unguided. The burden of figuring out where to explore while retaining a mental model of the diagram is placed entirely on the user. In this paper, we present Mantis—six prototypes of an automatic, content-aware screen magnification tool designed to aid people who have low vision in the traversal of diagrams. Each design experiments with what sorts of information might be provided to help the user retain a sense of context. Further, they each explore how such a tool might use its knowledge of the diagram’s semantic structure to streamline traversal to and from areas of interest to the user. To this end, we evaluate how these proofs of concept improve the user’s navigational experience and reduce the user’s cognitive load.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Teacher-Centered Design in Educational Games: Iterative Improvements to the Tragedy of the Commons pSim Dashboard</title>
<link href="https://hdl.handle.net/1721.1/162986" rel="alternate"/>
<author>
<name>Luong, Jacky K.</name>
</author>
<id>https://hdl.handle.net/1721.1/162986</id>
<updated>2025-10-07T04:12:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Teacher-Centered Design in Educational Games: Iterative Improvements to the Tragedy of the Commons pSim Dashboard
Luong, Jacky K.
Teaching tools such as the Tragedy of the Commons (ToC) participatory simulation, developed by MIT STEP Lab, have the potential to develop different skills or knowledge compared to single-player educational games. ToC illustrates the challenges of managing shared resources, but its existing teacher dashboard may not be well-suited to support its growing use across various classrooms. Through surveying and interviewing educators along with observing classroom usage, the software's shortcomings and opportunities for improvement were identified. This resulted in the design and implementation of a redesigned teacher dashboard, including a new “central bank” feature that provides structure to support more complex simulations. Additional enhancements improved usability and performance. Evaluations with teachers and controlled playtests demonstrated that these changes show promise in enabling richer classroom dynamics and making facilitation easier. The findings underscore the importance of teacher experience in educational game design.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>All Therapies Are Equal - Unless You’re a Bot: Evaluating the Effectiveness of Four Therapy Schools for AI Chatbot Therapists</title>
<link href="https://hdl.handle.net/1721.1/162985" rel="alternate"/>
<author>
<name>Liu, Andi</name>
</author>
<id>https://hdl.handle.net/1721.1/162985</id>
<updated>2025-10-07T04:13:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">All Therapies Are Equal - Unless You’re a Bot: Evaluating the Effectiveness of Four Therapy Schools for AI Chatbot Therapists
Liu, Andi
This thesis tests two design questions for Large Language Model (LLM) Chatbot Therapists: Which therapeutic school suits an LLM best, and does an explicit Theory-of-Mind (ToM) reflection improve outcomes? We prompted GPT-4.1-mini to act as eight therapists — CBT, Narrative, Psychodynamic, and SFBT, each with and without a ToM step — and held 240 simulated sessions with scripted AI patients. SFBT achieved the greatest projected PHQ-9 improvement (around 4 points), significantly higher than CBT, Narrative, or Psychodynamic approaches. Immediate distress (SUDS) fell modestly and uniformly across schools. ToM reasoning did not alter either measure. The findings show that extra “thinking time” might not automatically translate into therapeutic gain, but also highlight a current strength of LLMs: executing brief, rule-based therapies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Fiber Coupling with Actuated Mirrors</title>
<link href="https://hdl.handle.net/1721.1/162984" rel="alternate"/>
<author>
<name>Vel, Vetri Senthil</name>
</author>
<id>https://hdl.handle.net/1721.1/162984</id>
<updated>2025-10-07T04:13:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automated Fiber Coupling with Actuated Mirrors
Vel, Vetri Senthil
Almost all atomic physics experiments rely on precise alignment of lasers. For example, optical fields are used to cool, control, and image atoms in neutral atom arrays. In this thesis, we present a design for mirrors actuated by servos that allow the precise, repeatable alignment of lasers in free space optical setups. We then apply these actuated mirrors to automate fiber coupling, where laser beams are coupled from free space into a fiber waveguide. We present the theory of fiber coupling and use experimental data on the fiber coupling landscape to develop an accurate digital twin. Insights from the combination of the digital twin and experimental data are used to develop a fast and effective algorithm for automated fiber coupling.
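The fiber-coupling theory referred to above centers, in its standard form, on the mode-overlap integral: the fraction of beam power coupled into the fiber is
\[
\eta = \frac{\left|\iint E_{\mathrm{beam}}^{*}\,E_{\mathrm{fiber}}\,\mathrm{d}A\right|^{2}}
{\iint \lvert E_{\mathrm{beam}}\rvert^{2}\,\mathrm{d}A \;\iint \lvert E_{\mathrm{fiber}}\rvert^{2}\,\mathrm{d}A},
\]
so small tilts and offsets between the free-space beam and the fiber’s approximately Gaussian mode reduce \(\eta\) smoothly, which is what makes the coupling landscape amenable to automated search with actuated mirrors.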
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ACED: Automatic Concourse Event Detection</title>
<link href="https://hdl.handle.net/1721.1/162983" rel="alternate"/>
<author>
<name>Wagner, Luke A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162983</id>
<updated>2025-10-07T04:12:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">ACED: Automatic Concourse Event Detection
Wagner, Luke A.
Fans of the San Antonio Spurs often face long delays when traversing the arena or waiting for food. Automatic Concourse Event Detection (ACED) is a novel system designed for tracking these statistics in the Spurs’ arena in real time. We use existing machine learning models and introduce novel processing algorithms to identify the total number of people in each section throughout the arena in addition to tracking the wait times for different restaurants and restrooms. ACED collects and stores this information in a database, which could be used to present fans with up-to-date arena information in a live dashboard to assist them in their in-game decision making. This would improve the overall fan experience, which could encourage fans to buy tickets more frequently. We provide the San Antonio Spurs with a completed implementation of ACED, which is ready to be deployed within the Spurs’ arena.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ultraviolet-C Powered Air Purifying Respirator (UVC&#13;
PAPR)</title>
<link href="https://hdl.handle.net/1721.1/162982" rel="alternate"/>
<author>
<name>Seeyave, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/162982</id>
<updated>2025-10-07T04:13:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ultraviolet-C Powered Air Purifying Respirator (UVC&#13;
PAPR)
Seeyave, Evan
The global challenge posed by pandemics, notably COVID-19, has underscored the critical need for advanced personal protective equipment (PPE). This thesis details the development and evaluation of a multi-stage powered air-purifying respirator (PAPR) incorporating direct ultraviolet-C (UVC) germicidal irradiation. The proposed PAPR aims to provide enhanced protection by actively sterilizing air in a UVC chamber immediately prior to inhalation. This approach offers an advantage over traditional filter-based PAPRs by removing both the need to replace filters and the need for high-power motors to pull air through them, while still neutralizing a broad spectrum of airborne pathogens, including viruses and bacteria. The primary objective of this research is to design, construct, and test a PAPR prototype capable of achieving a high inactivation rate (target 99.9%), thereby offering a robust solution for individuals in high-exposure environments. In addition to the UVC chamber, we also built an alternate ultraviolet-A (UVA) activated titanium dioxide (TiO2) photocatalytic oxidation (PCO) chamber. This work encompasses the overall design of the system, safety considerations, and testing to quantify its pathogen inactivation efficacy and to characterize system performance.
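For sizing intuition (a common first-order dose model, not necessarily the dosimetry used in this thesis), UVC inactivation is often written as
\[
\frac{N}{N_0} = 10^{-D/D_{90}}, \qquad D = E \cdot t_{\mathrm{res}},
\]
where \(D\) is the UVC dose (fluence), \(E\) the irradiance in the chamber, and \(t_{\mathrm{res}}\) the air residence time. The 99.9% target then corresponds to a 3-log reduction, i.e. a dose of \(3\,D_{90}\) for a pathogen whose one-log dose is \(D_{90}\).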
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GridFix: A Desktop Application for the Correction of Algorithmically-Generated Beatgrids for Music</title>
<link href="https://hdl.handle.net/1721.1/162981" rel="alternate"/>
<author>
<name>Shi, Iris</name>
</author>
<id>https://hdl.handle.net/1721.1/162981</id>
<updated>2025-12-15T15:52:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">GridFix: A Desktop Application for the Correction of Algorithmically-Generated Beatgrids for Music
Shi, Iris
Beatgridding is a technique meant to aid DJs in aligning the beats of two different songs. By overlaying a grid of beat markers (a “beatgrid”) on top of a waveform representation of the track being beatgridded, a song’s beats can be visualized and thus easily matched to another’s. State-of-the-art DJ software—like rekordbox by the company AlphaTheta—will algorithmically generate beatgrids for songs. However, these beatgrids are not always accurate and can often be difficult to correct with only the software-provided tools. GridFix is a desktop application designed to be an auxiliary tool for rekordbox, allowing users to correct rekordbox-generated beatgrids by providing additional functionality that rekordbox does not. GridFix’s main advantage is its ability to let users make local changes to small, isolated sections of a beatgrid, a task that is quite hard to achieve in rekordbox. GridFix is fully compatible with rekordbox and fairly easy to learn how to use, as shown by user testing.
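At constant tempo a beatgrid is fully determined by an anchor time and the BPM, with beat \(k\) at \(t_k = t_0 + 60k/\mathrm{BPM}\), so a local correction of the kind GridFix enables amounts to re-anchoring only a slice of the grid. A toy sketch with invented numbers, not GridFix's actual rekordbox handling:

# Toy beatgrid: at constant tempo, beat k falls at anchor + 60*k/bpm.
def beatgrid(anchor, bpm, n_beats):
    return [anchor + 60.0 * k / bpm for k in range(n_beats)]

grid = beatgrid(anchor=0.032, bpm=128.0, n_beats=16)
# "Local fix": nudge only beats 8..11 by +15 ms, e.g. a misgridded section.
fixed = [t + 0.015 if k in range(8, 12) else t for k, t in enumerate(grid)]
print([f"{t:.3f}" for t in fixed])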
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Graph Metrics for Improving Cybersecurity on Software Dependency Networks</title>
<link href="https://hdl.handle.net/1721.1/162980" rel="alternate"/>
<author>
<name>Yao, Darren Z.</name>
</author>
<id>https://hdl.handle.net/1721.1/162980</id>
<updated>2026-01-16T20:08:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Graph Metrics for Improving Cybersecurity on Software Dependency Networks
Yao, Darren Z.
Modern software ecosystems are deeply interconnected, allowing a vulnerability in a single component to propagate and affect many others. In this thesis, we model software ecosystems as directed graphs, and apply various graph-theoretic metrics to quantify security risk. We compare two deep learning frameworks (PyTorch and TensorFlow) with two traditional software frameworks (npm and PyPI), identifying critical properties of their dependency structures, which motivates several recommendations for improving software supply chain security.
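A small sketch of the kind of analysis involved follows; the packages and the specific metric choices (PageRank and transitive dependent counts) are illustrative examples, not the thesis's datasets or final metric set:

# Risk metrics on a tiny invented dependency graph. An edge from u to v
# means "u depends on v", so a central v can propagate a vulnerability
# to many dependents.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("app_a", "http_lib"), ("app_b", "http_lib"), ("app_c", "json_lib"),
    ("http_lib", "tls_lib"), ("json_lib", "tls_lib"), ("tls_lib", "c_runtime"),
])

# Packages many others transitively depend on score high on both measures.
pagerank = nx.pagerank(G)
dependents = {n: len(nx.ancestors(G, n)) for n in G}  # transitive dependents

for node in sorted(G, key=pagerank.get, reverse=True):
    print(f"{node:10s} pagerank={pagerank[node]:.3f} dependents={dependents[node]}")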
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning Robotic Cutting Operations</title>
<link href="https://hdl.handle.net/1721.1/162979" rel="alternate"/>
<author>
<name>Lunawat, Tarang</name>
</author>
<id>https://hdl.handle.net/1721.1/162979</id>
<updated>2025-10-07T04:12:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Planning Robotic Cutting Operations
Lunawat, Tarang
Classical planning and most PDDL variants assume that the number and types of objects in the environment are known at initialization and do not change during plan execution. However, in many domains it is both helpful and necessary to capture action (or environment) effects that change the existence of objects, not just facts about them. PDDLStream already provides a framework for "certifying" new facts about the environment as needed throughout plan execution; I propose using PDDLStream to construct a principled way to reason over not just added facts, but also objects added to or removed from the environment. To do this, I work within the domain of cutting operations in the kitchen, a domain that both necessitates substantial object change as objects are cut and often requires reasoning over chains of these generated objects. Additionally, I lay the groundwork for using this principled treatment of new objects to implement different types of cutting operations, with the eventual goal of a robot planner that can sequence the provided actions to work with knives in the kitchen efficiently and in a human-like manner.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Wavefront Estimation Algorithms for&#13;
High-Contrast Imaging of Exoplanets</title>
<link href="https://hdl.handle.net/1721.1/162978" rel="alternate"/>
<author>
<name>Manojkumar, Saikrishna</name>
</author>
<id>https://hdl.handle.net/1721.1/162978</id>
<updated>2025-10-07T04:15:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Adaptive Wavefront Estimation Algorithms for&#13;
High-Contrast Imaging of Exoplanets
Manojkumar, Saikrishna
The direct imaging of exoplanets orbiting stars outside our solar system remains one of the crucial tools we have available to answer whether there exists life beyond Earth. The light from an Earth-like exoplanet is approximately ten orders of magnitude dimmer than its host star, so the imaging system of the telescope observing the exoplanet must suppress the starlight to achieve a “contrast” of 10^-10 in the image. This is typically achieved using a coronagraph, which blocks the light from the star while allowing the light from the planet to pass through. However, some starlight leaks through the coronagraph and must be further removed in the search region for the exoplanet; this region is referred to as the dark hole or dark zone (DZ). Creating a DZ requires focal plane wavefront sensing and control techniques, which estimate the electric field of the starlight in the focal plane of the telescope using a camera and then command the deformable mirrors (DMs) located upstream of the coronagraph to null these electric fields. Once the DZ is created with a desired contrast, slow, high-order drifts in the optical system still cause the contrast to degrade over the long observation times of the science target. High-order wavefront sensing and control (HOWFSC) techniques are required to maintain the contrast in the DZ while observing a science target. Dark Zone Maintenance (DZM) is a technique that has demonstrated the ability to maintain the contrast in the DZ over long observation times. This algorithm utilizes an Extended Kalman Filter (EKF) to estimate the open-loop electric field at every pixel in the DZ and uses this information to drive the control algorithm. The achievable contrast and contrast stability of DZM are determined by several key parameters: the optical system’s drift rate, the photon flux and associated shot noise in the measurement images, and the magnitude of the probes applied to the DMs for the estimation algorithm. This work quantifies the impact of the drift rate, photon rate, and probe magnitude on the performance of DZM through a parameter scan on high-contrast imaging testbeds: the in-air High-contrast imager for Complex Aperture Telescopes (HiCAT) testbed at the Space Telescope Science Institute (STScI) and the in-vacuum Decadal Survey Testbed (DST) at the Jet Propulsion Laboratory (JPL). The parameter scan was run both in simulation and on the physical testbeds using the contrast in the DZ as a performance metric, and the results were evaluated relative to the photon-noise theoretical bounds to assess the efficacy of the DZM algorithm. The substantial gap between the theoretical bounds and the experimental results, on average 70 times worse on HiCAT, motivated the development and implementation of a new DZM algorithm that uses a separate EKF to estimate the modes of wavefront error derived from the DMs and uses that information to correct the aberrations. This new modal EKF algorithm was tested with a similar parameter scan on the HiCAT simulator, demonstrating a nearly fivefold improvement over the original DZM algorithm’s simulated performance. The results of this work will inform the design of future algorithms to maintain high contrast during observations for upcoming space telescope missions such as the Habitable Worlds Observatory (HWO).
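A scalar cartoon of the EKF at the heart of DZM may help: track a slowly drifting field through noisy intensity measurements y = x^2 + noise by linearizing the measurement about the current estimate. All noise levels are invented, and the real filter runs per pixel on complex-valued fields with DM probes; this is an illustration only:

# Scalar EKF with quadratic (intensity-like) measurement model.
import numpy as np

rng = np.random.default_rng(3)
Q, R = 1e-6, 1e-4            # drift and measurement noise variances (made up)
x_true, P = 0.02, 1e-2
x_hat = 0.01                 # nonzero prior: pure intensity gives no gradient
                             # at zero, one reason DZM injects DM probes

for step in range(200):
    x_true += rng.normal(0.0, np.sqrt(Q))        # open-loop drift
    y = x_true**2 + rng.normal(0.0, np.sqrt(R))  # intensity measurement
    P += Q                                       # predict (random-walk dynamics)
    H = 2.0 * x_hat                              # Jacobian of h(x) = x^2
    K = P * H / (H * P * H + R)                  # Kalman gain
    x_hat += K * (y - x_hat**2)                  # update
    P *= (1.0 - K * H)

print(f"true field {x_true:+.4f}, EKF estimate {x_hat:+.4f}")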
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Incentivizing Data Contributions in Decentralized Collaborative Learning</title>
<link href="https://hdl.handle.net/1721.1/162977" rel="alternate"/>
<author>
<name>Wang, Yuxiao</name>
</author>
<id>https://hdl.handle.net/1721.1/162977</id>
<updated>2025-12-15T15:42:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Incentivizing Data Contributions in Decentralized Collaborative Learning
Wang, Yuxiao
In a collaborative learning scheme such as the federated learning model, each user benefits from the data contribution of others. Previous work shows that the federated learning protocol can incentivize users to contribute more than in the competitive equilibrium by penalizing deviations. However, a central controller with access to all the data may raise privacy concerns. In this work, we construct a decentralized collaborative protocol in which users share data without relying on a centralized controller. We then extend this protocol to a repeated game and analyze the competitive equilibrium behavior, along with strategies users can implement to foster collaboration in the repeated setting of the protocol. We provide a quantitative analysis of free-rider behavior under decentralized protocols and compare the amount of information collected with decentralized protocols against that in the centralized protocol.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DisViz: Visualizing real-world distributed system logs&#13;
with space time diagrams</title>
<link href="https://hdl.handle.net/1721.1/162976" rel="alternate"/>
<author>
<name>McMenamy, Josiah</name>
</author>
<id>https://hdl.handle.net/1721.1/162976</id>
<updated>2025-10-07T04:14:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">DisViz: Visualizing real-world distributed system logs&#13;
with space time diagrams
McMenamy, Josiah
This thesis aims to provide an intuitive debugging and learning tool for distributed systems that communicate by message passing. Understanding and debugging distributed systems can be challenging and slow to iterate on, so there is a need for tools that reduce the time it takes to diagnose the root cause of a bug. There exists significant prior work on tools that aid in visualizing and debugging distributed system executions, such as the ShiViz log visualizer [13]. This work builds on top of these tools to provide more debugging information, handle large log files, and be easily instrumented into existing systems. We demonstrate using the tool to debug issues in an implementation of the Raft consensus algorithm [34].
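Space-time diagrams rest on the happens-before order, which ShiViz-style tools recover from vector clocks attached to log lines (whether DisViz uses an identical log format is an assumption here). A minimal sketch of the bookkeeping:

# Vector clocks as dicts mapping process id to a counter.
def local_event(clock, pid):
    clock = dict(clock)
    clock[pid] = clock.get(pid, 0) + 1
    return clock

def on_receive(clock, pid, msg_clock):
    # Component-wise max with the sender's attached clock, then tick our entry.
    keys = set(clock) | set(msg_clock)
    merged = {k: max(clock.get(k, 0), msg_clock.get(k, 0)) for k in keys}
    return local_event(merged, pid)

def happens_before(a, b):
    # a precedes b iff a is component-wise no greater than b, and a differs
    # from b (written via min to avoid comparison operators in XML text).
    keys = set(a) | set(b)
    dominated = all(a.get(k, 0) == min(a.get(k, 0), b.get(k, 0)) for k in keys)
    return dominated and a != b

p1 = local_event({}, "p1")           # an event on process p1
p2 = on_receive({}, "p2", p1)        # p1's message arrives at p2
print(happens_before(p1, p2), happens_before(p2, p1))   # True False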
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Casting Protein Structure Predictors as Energy-Based Models for Binder Design and Scoring</title>
<link href="https://hdl.handle.net/1721.1/162975" rel="alternate"/>
<author>
<name>Nori, Divya</name>
</author>
<id>https://hdl.handle.net/1721.1/162975</id>
<updated>2025-12-15T17:19:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Casting Protein Structure Predictors as Energy-Based Models for Binder Design and Scoring
Nori, Divya
Protein binder design has been transformed by hallucination-based methods that optimize structure prediction confidence metrics, such as the interface predicted TM-score (ipTM), via backpropagation. However, these metrics are imperfect proxies for binding affinity and do not reflect the statistical likelihood of a binder–target complex under the learned distribution. In this work, we propose a principled alternative: an energy-based framework that directly extracts the statistical likelihood of a predicted binder–target complex from a structure predictor’s internal confidence distributions. Building on the Joint Energy-based Modeling (JEM) framework, we introduce pTMEnergy, a statistical energy function over structures that is derived from predicted inter-residue error distributions. We incorporate pTMEnergy into BindEnergyCraft (BECraft), a hallucination-based binder design pipeline that maintains the same optimization framework as BindCraft but replaces ipTM with our energy-based objective. Across a diverse panel of challenging protein targets, BECraft achieves higher in silico success rates compared to BindCraft, RFDiffusion, and ESM3. Beyond design, we evaluate pTMEnergy as an unsupervised scoring function for retrospective virtual screening tasks. Without any task-specific supervision or retraining, pTMEnergy consistently outperforms baseline methods across both protein–protein and protein–RNA interaction benchmarks. Our results demonstrate that confidence-derived energy functions offer a powerful and generalizable signal for binder design and scoring.
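For intuition on casting a predictor as an energy-based model, the JEM construction the abstract builds on reinterprets a classifier's logits \(f_\theta(x)[y]\) as defining an energy
\[
E_\theta(x) = -\log \sum_{y} \exp\bigl(f_\theta(x)[y]\bigr),
\]
so that lower energy corresponds to higher unnormalized likelihood \(p_\theta(x) \propto e^{-E_\theta(x)}\). pTMEnergy applies an analogous construction to the structure predictor's predicted inter-residue error distributions, so a complex the model deems likely receives a low energy.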
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Modeling, Optimization, and LLM-Assisted Decision Support for Geothermal Well Arrays</title>
<link href="https://hdl.handle.net/1721.1/162974" rel="alternate"/>
<author>
<name>Ouko, Edwin O.</name>
</author>
<id>https://hdl.handle.net/1721.1/162974</id>
<updated>2025-10-07T04:14:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Modeling, Optimization, and LLM-Assisted Decision Support for Geothermal Well Arrays
Ouko, Edwin O.
Geothermal well arrays, which organize multiple geothermal wells into carefully planned geometric configurations, provide an opportunity to enhance energy production capacity and increase fault tolerance of geothermal systems. Closed-loop geothermal systems (CLGS), a type of geothermal well design, promise to allow harnessing of geothermal energy in any location with minimal adverse environmental impact. I demonstrate how the development of these emerging geothermal technologies could be accelerated by recent advances in large language models (LLMs) in conjunction with high-level, high-performance programming languages like Julia. In particular, I focus on how LLMs could be used in design brainstorming and to increase efficiency in numerical modeling. I assess the potential of state-of-the-art LLMs such as ChatGPT, Gemini, Claude, Grok, and a domain-specific model, AskGDR, as expert assistants in geothermal research. Owing to the unpredictable reliability of LLMs, there is a constant need for objective evaluation benchmarks in various domains. I propose a novel approach, leveraging Google’s recently introduced AI tool, NotebookLM, to accelerate the generation of quantitative geothermal benchmarks using only new, unpublished questions. In addition, I propose the use of blackbox optimization as a computationally less costly alternative to approximate the optimal configuration of CLGS wells in a geothermal array to minimize thermal interference and improve heat energy production. I evaluate several optimization strategies such as Bayesian optimization, particle swarm optimization, natural evolution strategies, differential evolution optimization, Nelder-Mead, and simulated annealing on various performance characteristics such as convergence speed and highest production capacity attained.
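
As a rough illustration of the blackbox-optimization route described above, here is a minimal sketch using SciPy's Nelder-Mead; the objective standing in for the CLGS array simulator and its parameterization are hypothetical, not the thesis code.

    from scipy.optimize import minimize

    def optimize_layout(thermal_objective, x0):
        # x0: initial well-placement parameters; thermal_objective is assumed
        # to return negated heat production, so minimizing it maximizes output
        result = minimize(thermal_objective, x0, method="Nelder-Mead")
        return result.x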
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Transformer-Based Foundation Model for Human Microbiome Analysis</title>
<link href="https://hdl.handle.net/1721.1/162973" rel="alternate"/>
<author>
<name>Medearis, Nicholas A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162973</id>
<updated>2025-10-07T04:14:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Transformer-Based Foundation Model for Human&#13;
Microbiome Analysis
Medearis, Nicholas A.
The human microbiome plays a crucial role in maintaining our health. Alterations in the microbiome have been linked to various chronic conditions like autoimmune disorders, metabolic diseases, and cancer. While various tools have been developed to study the microbiome, each tool tends to be specialized for a specific task. To overcome this limitation, we report on the development of a foundation model pretrained on 13,524 human microbiome metagenomic samples. The model was then fine-tuned to predict the clinical status of the host. Our model was able to differentiate between healthy and diseased samples in 10-fold cross-validation on the training dataset with an accuracy of 83.7%. On an external validation dataset of 927 samples, our model had an accuracy of 74.9%. Notably, our model performed even better at differentiating diseases from one another. On the diseased samples in the training dataset, it classified samples with an accuracy of 93.3% in 10-fold cross-validation. Together, our results show that generative AI has the potential to transform microbiome research and advance personalized medicine.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards an Augmented Reality-based Cyber-Physical Production System Planner</title>
<link href="https://hdl.handle.net/1721.1/162972" rel="alternate"/>
<author>
<name>Mueller, David</name>
</author>
<id>https://hdl.handle.net/1721.1/162972</id>
<updated>2025-10-07T04:13:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards an Augmented Reality-based Cyber-Physical&#13;
Production System Planner
Mueller, David
Investment in automation by small and medium-sized enterprise (SME) manufacturers in the United States has lagged behind that of their larger counterparts for decades, even though SMEs comprise a majority of the nation’s manufacturing industry. The cyber-physical production systems (CPPSs) introduced by Industry 4.0 promise to bolster productivity and efficiency, but only for those enterprises which invest in constituent technologies. These technologies are not easily integrated into existing factories, typically requiring installation of invasive infrastructure and continuous technical support. Robotic integration is typically performed by specialized third-party firms or by in-house staff with extensive technical training, such as engineers. SME manufacturers are particularly sensitive to the complexities of robot integration due to limited access to technologists and their need for frequent reconfiguration under economies of scope. This thesis introduces Marve: the Mobile Augmented Reality Visual Editor. Marve is a proof-of-concept Android application that enables line workers to directly configure and control an autonomous mobile robot (AMR)-backed hybrid intralogistics system using low-cost consumer hardware. Workers can use Marve’s augmented reality (AR)-based interface to define and visualize the essential geometry and components of such a system. Once configured, workers are able to simulate how the system would respond to their requests to move material throughout the factory. The use of AR enables extensive work to be done at the planning stage of CPPS integration by line workers themselves, bypassing the need for modeling by engineers. Marve relies exclusively on fiducials and visual-inertial odometry (VIO) for localization, and fiducial tags for object tracking, thus eliminating the need for supporting infrastructure. Taken together, these features make Marve an easy on-ramp for SMEs seeking to transition legacy production lines into the CPPSs of Industry 4.0.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Efficient ML Inference in SigmaOS with Model-Aware Scheduling</title>
<link href="https://hdl.handle.net/1721.1/162971" rel="alternate"/>
<author>
<name>Liu, Katie</name>
</author>
<id>https://hdl.handle.net/1721.1/162971</id>
<updated>2026-01-16T19:18:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enabling Efficient ML Inference in SigmaOS with Model-Aware Scheduling
Liu, Katie
Machine learning inference in multi-tenant cloud environments poses significant challenges for minimizing latency and resource contention, especially as models grow in size and complexity. This thesis addresses the cold start overhead and scheduling inefficiencies of multi-tenant ML serving by integrating the RayServe distributed model-serving framework into σOS, a cloud operating system that unifies container and serverless paradigms. The thesis also proposes two model-aware schedulers within σOS that intelligently route inference requests to reduce the number of cold starts: Model Colocation, which prioritizes placing requests on machines where the required model is already loaded, and Centralized Model Registry, which tracks globally available models to inform scheduling decisions. These policies proactively reduce model load times by reusing cached models. Experimental results on language translation workloads in an 8-node cluster show that these schedulers achieve a ≈ 50% reduction in average inference latency and eliminate roughly 4–5 cold starts per workload, compared to σOS’s default scheduler. Through this model-aware approach to scheduling, our work enables more efficient, scalable, and low-latency ML inference serving in multi-tenant cloud settings.
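
A minimal sketch of the Model Colocation idea described above, assuming hypothetical request and machine objects rather than the actual σOS interfaces:

    def schedule(request, machines):
        # prefer machines that already hold the requested model in memory,
        # avoiding a cold start; otherwise fall back to the least-loaded machine
        warm = [m for m in machines if request.model in m.loaded_models]
        pool = warm if warm else machines
        return min(pool, key=lambda m: m.queue_length)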
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of efficiency-driven aircraft technology improvements on climate and air quality</title>
<link href="https://hdl.handle.net/1721.1/162970" rel="alternate"/>
<author>
<name>Shukla, Aditeya</name>
</author>
<id>https://hdl.handle.net/1721.1/162970</id>
<updated>2025-10-07T04:14:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Impact of efficiency-driven aircraft technology&#13;
improvements on climate and air quality
Shukla, Aditeya
The impacts of commercial aviation on global climate and air quality have led to an industry-wide movement to reduce its environmental footprint. While technological developments in aircraft propulsion, materials, and aerodynamics aim to reduce fuel consumption and CO₂ emissions, these efforts often overlook the full climate and air quality impacts of aviation, especially the emissions impacts of NOₓ, CO, HC, soot, and contrails. This study assesses the environmental trade-offs associated with fuel-efficiency-driven advancements by modeling aircraft technologies across narrow-body, wide-body, and regional jet categories. By focusing on near-future technology insertions in materials, aerodynamics, and propulsion, we compute quantifiable environmental metrics such as temperature changes, global warming potentials, and monetized environmental damages. Our modeling shows that certain propulsion technologies — such as increased component polytropic efficiencies or higher allowable turbine-metal temperatures — can reduce fuel consumption by more than 10% under favorable re-optimizations of engine design. However, they often raise engine core pressures or temperatures in ways that increase NOₓ emissions indices by more than 30%. This can lead to worse air quality damages, offsetting some of the CO₂ savings and, in some cases, resulting in a 2% increase in environmental damages on a total net present value (NPV) basis. Primary structure material upgrades consistently reduce both fuel burn and NOₓ emissions; the resulting improvements in air quality reduce the total NPV of environmental impacts by 10%. This analysis shows that fuel efficiency alone is an incomplete metric for understanding the environmental impact of an aircraft. By offering a quantitative assessment of how near-future upgrades can affect both climate and air quality, this study also provides guidance on which technology paths are most effective in reducing the overall environmental impact of aviation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Symbol Digit Test: Multimodal Behavior Detection and Visualization</title>
<link href="https://hdl.handle.net/1721.1/162968" rel="alternate"/>
<author>
<name>Xu, Jessica J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162968</id>
<updated>2025-10-07T04:14:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Digital Symbol Digit Test: Multimodal Behavior Detection and Visualization
Xu, Jessica J.
Neurodegenerative diseases, such as Alzheimer’s, impact many people worldwide and currently have no cure, making early detection essential for effective symptom management and intervention. Traditional diagnostic practices often rely on subjective clinical evaluations that can vary between practitioners, highlighting the need for more objective methods. The digital Symbol Digit Test (dSDT), administered via the Cognitive Health App on an iPad and using the ETVision Eye Tracking System, aims to provide an automated, reliable method to analyze patient cognitive function and detect early signs of impairment by capturing handwriting and gaze data. This thesis builds upon previous work by automating the synchronization of these two data modalities, refining definitions of learning behaviors, and developing pipelines for data processing and visualization. By creating a synchronized multimodal dataset, we can visualize participant behavior for more intuitive interpretation and draw meaningful conclusions. These contributions provide an end-to-end framework for analyzing behavior during the cognitive assessment and lay the groundwork for future development of diagnostic models to detect early signs of neurodegenerative diseases.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Location Verification for Spoofing Detection in Non-Terrestrial Networks</title>
<link href="https://hdl.handle.net/1721.1/162967" rel="alternate"/>
<author>
<name>Schatz, Ensign Nathan Caleb</name>
</author>
<id>https://hdl.handle.net/1721.1/162967</id>
<updated>2025-10-07T04:14:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Location Verification for Spoofing Detection in Non-Terrestrial Networks
Schatz, Ensign Nathan Caleb
Reliable location awareness is essential for the development of new services and applications in non-terrestrial networks (NTN). The ability of malicious users to report false location information poses a significant threat to NTN performance. This threat introduces the need for a flexible and robust location verification system (LVS) that can reliably detect malicious users. This paper proposes a single-satellite LVS based on round-trip time and angle-of-arrival measurements. We characterize several sources of uncertainty unique to the NTN scenario and examine their combined effect on positioning error. To detect spoofing probabilistically, we approximate the likelihood function for the unknown user position using a Gaussian mixture model and employ a likelihood ratio decision rule for location verification. Results are presented as receiver operating characteristic curves that evaluate LVS performance under various satellite ephemeris error conditions, spoofing distances, numbers of measurements available to the system, and wireless channel properties. The proposed LVS is shown to reliably detect spoofing among malicious users.
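
A hedged sketch of the verification step: the Gaussian mixture approximating the measurement likelihood at the claimed position is assumed given, and a simple likelihood threshold stands in for the full likelihood ratio rule.

    from scipy.stats import multivariate_normal

    def verify(measurement, components, threshold):
        # components: (weight, mean, cov) triples of the Gaussian mixture
        # approximating p(measurement given the claimed position)
        likelihood = sum(w * multivariate_normal.pdf(measurement, mean=m, cov=c)
                         for w, m, c in components)
        return likelihood >= threshold  # accept the claimed position if plausible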
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Triangle Splatting</title>
<link href="https://hdl.handle.net/1721.1/162966" rel="alternate"/>
<author>
<name>Xu, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/162966</id>
<updated>2025-10-07T04:14:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Triangle Splatting
Xu, Daniel
We develop a differentiable rendering method for recovering 3D meshes of scenes from 2D images. Unlike existing approaches, our method does not rely on a differentiable renderer and is compatible with any standard mesh rasterizer. To our knowledge, it is the first mesh-based differentiable rendering method that does not rely on visibility masks at all. Beyond these conceptual advances, we implemented a set of highly optimized kernels that enable efficient scene representation on a sparse voxel grid, effectively overcoming the cubic scaling bottleneck faced by similar methods. These innovations result in promising performance on unbounded real-world scenes with complex backgrounds.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Control Strategies for Mitigating Spaceflight Fluid Shifts Using Lower Body Negative Pressure and Non-Invasive Fluid Shift Sensing</title>
<link href="https://hdl.handle.net/1721.1/162965" rel="alternate"/>
<author>
<name>Ortiz, Ciarra Celena</name>
</author>
<id>https://hdl.handle.net/1721.1/162965</id>
<updated>2026-01-16T19:55:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Adaptive Control Strategies for Mitigating Spaceflight Fluid Shifts Using Lower Body Negative Pressure and Non-Invasive Fluid Shift Sensing
Ortiz, Ciarra Celena
Entering a microgravity environment induces cephalad fluid shifts that can lead to cardiovascular and renal-hormonal adaptations affecting astronaut health and performance in space. Current monitoring strategies for fluid shifts cannot track regional fluid shifts in real time, which limits countermeasure efficacy. This thesis investigates and validates prototype non-invasive radiofrequency (RF) sensors for regional fluid shift detection. Additionally, integrating feedback from these sensors into Lower Body Negative Pressure (LBNP) chambers could allow for the development of an adaptive LBNP regulation framework. Coaxial RF sensors were designed and characterized using tissue phantoms, and tested in a human subject study involving controlled LBNP exposure. Reflection coefficients (S₁₁ and S₂₂) were analyzed to detect regional fluid changes in arm and leg tissue. The preliminary results indicated a statistically significant decrease in the arm reflection coefficients (S₁₁) during active LBNP, consistent with fluid being pulled towards the lower body. The leg reflection coefficients (S₂₂) were more variable and did not exhibit statistically significant results, suggesting a need for further investigation of sensor placement and sensitivity. This work demonstrates the potential of wearable RF sensors for non-invasive fluid shift monitoring and lays the foundation for integrating fluid sensor feedback into adaptive LBNP control protocols to improve astronaut health monitoring and countermeasure personalization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Inference via Optimal Transport Ambiguity Sets</title>
<link href="https://hdl.handle.net/1721.1/162964" rel="alternate"/>
<author>
<name>Wang, Zheyu</name>
</author>
<id>https://hdl.handle.net/1721.1/162964</id>
<updated>2025-10-07T04:14:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Robust Inference via Optimal Transport Ambiguity Sets
Wang, Zheyu
Uncertainty quantification is pivotal for ensuring the safety and reliability of predictive algorithms in high-stakes applications—ranging from cancer diagnosis to autonomous driving. This challenge is exacerbated by distribution shift, in which the true data-generating distribution diverges from the nominal distribution on which our statistical methods were trained. In this thesis, we formalize distribution shifts via ambiguity sets—metric neighborhoods in the space of probability measures defined by distances such as the Wasserstein metric—and demonstrate that leveraging these ambiguity sets endows two widely used statistical algorithms with distributional robustness. The Kalman filter enables accurate, real-time tracking of latent states by assimilating noisy, indirect measurements over time. Its performance relies on precise state-space models for both the evolution dynamics and the observation process. In practice, uncertainties in these models introduce errors that can significantly degrade filter accuracy. Here, we review two robust Kalman-filter variants that explicitly account for such errors via Wasserstein ambiguity sets. Split conformal prediction, hereafter referred to as conformal prediction, offers a powerful framework for quantifying predictive uncertainty by constructing prediction intervals with finite-sample, distribution-free guarantees. Despite its widespread success, ensuring its validity under train-test distribution shifts remains a significant challenge. We model distribution shifts using ambiguity sets defined by two optimal transport-based metrics and propose two robust conformal prediction algorithms that preserve validity under these shifts. First, we consider ambiguity sets defined by a pseudo-divergence derived from the Lévy-Prokhorov (LP) metric, which captures both local and global data perturbations. We provide a self-contained overview of LP ambiguity sets and their connections to widely used metrics such as the Wasserstein and Total Variation distances. We then establish a natural link between conformal prediction and LP ambiguity sets: by propagating the LP ambiguity set through the scoring function, we reduce complex high-dimensional distribution shifts to manageable one-dimensional shifts, enabling exact computation of the worst-case quantile and coverage. Building on this foundation, we develop valid robust conformal prediction intervals under distribution shifts, explicitly relating LP parameters to interval width and confidence levels. Experimental results on real-world datasets demonstrate the effectiveness of the proposed approach. Next, we extend our analysis to robust conformal prediction over Wasserstein-2 ambiguity sets, deriving a theoretical characterization of the worst-case quantile. However, we identify intractability due to the dependence on the shape of the original score CDF and conclude with potential future directions.
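
For orientation, a minimal sketch of standard split conformal prediction; the robust variants developed in this thesis replace the quantile level below with a worst-case level derived from the LP or Wasserstein-2 ambiguity set.

    import numpy as np

    def conformal_interval(cal_scores, predict, x, alpha=0.1):
        # cal_scores: nonconformity scores, e.g. |y - f(x)|, on a calibration set
        n = len(cal_scores)
        level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample correction
        q = np.quantile(cal_scores, level, method="higher")
        y_hat = predict(x)
        return y_hat - q, y_hat + q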
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Theoretical Limits of Quantum Ranging</title>
<link href="https://hdl.handle.net/1721.1/162963" rel="alternate"/>
<author>
<name>Kartal, Bünyamin</name>
</author>
<id>https://hdl.handle.net/1721.1/162963</id>
<updated>2025-10-07T04:14:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Theoretical Limits of Quantum Ranging
Kartal, Bünyamin
The ability to determine distances from dedicated measurements, namely active ranging, is crucial in a variety of systems including localization, radar, and lidar. This thesis establishes the quantum limits and determines the quantum advantage provided by single-beam displaced squeezed states in active ranging. Analytical expressions of the quantum Fisher information (QFI) are provided for monochromatic and continuous-mode waves passing through a thermal loss channel with arbitrary loss and noise conditions. The optimal allocation of system resources for performing displacement and squeezing operations is determined. The optimal allocation consists of apportioning all resources to perform either the displacement operation, providing no quantum advantage, or the squeezing operation. Analytical results are examined in optical and microwave regimes. The optimal gain, i.e., the ratio between the QFI obtained by optimal resource allocation and the QFI obtained by performing only the displacement operation, is derived for the optical and microwave regimes. Quantum advantage afforded by the prototypical heterodyne receiver is also investigated. The results of this thesis pave the way for establishing a foundation of active ranging and provide insights for system design employing currently available quantum technologies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generalized Policy Learning with Planning</title>
<link href="https://hdl.handle.net/1721.1/162962" rel="alternate"/>
<author>
<name>Yang, Ryan P.</name>
</author>
<id>https://hdl.handle.net/1721.1/162962</id>
<updated>2025-10-07T04:14:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Generalized Policy Learning with Planning
Yang, Ryan P.
Generalized policy learning seeks to find policies that solve multiple tasks within a planning domain. We introduce methods to search for policies within a single domain, starting from empty-initialized policies. As an extension, we also propose a problem setting for learning satisficing policies across domains. Within a single domain, we propose a score function to guide the policy search. Our approach, Policy-Guided Planning for Generalized Policy Generation (PG3), evaluates policies based on how well they can be used to plan. Empirically, we show that PG3 enables generalized policy learning to occur more efficiently than other baselines on PDDL-based problems, with policies represented as lifted decision lists. Finally, our experiments show that independently learned policies are qualitatively similar, prompting further investigation into accelerating the policy search process.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meta-Learning Exploration Strategies with Decision Transformers</title>
<link href="https://hdl.handle.net/1721.1/162961" rel="alternate"/>
<author>
<name>Welch, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/162961</id>
<updated>2025-10-07T04:14:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Meta-Learning Exploration Strategies with Decision Transformers
Welch, Ryan
The problem of pure exploration in sequential decision-making is to identify strategies for efficiently gathering information to uncover hidden properties of an environment. This challenge arises in many practical domains, including clinical diagnostics, recommender systems, and educational testing, where data collection is costly and the effectiveness of exploration is critical. Efficient exploration in these contexts strongly depends on exploiting underlying structural relationships within the environment. For instance, recognizing that multiple medical tests may provide overlapping information can reduce the number of tests required to make a diagnosis. Existing exploration approaches drawn from reinforcement learning and active hypothesis testing typically rely on heuristic strategies that require explicit prior assumptions about such structural information. However, when this information is unknown, heuristic methods often lead to redundant exploration, significantly limiting their practical utility in high-stakes domains. Furthermore, these existing approaches do not leverage past experience to improve their exploration efficiency over time. To overcome these limitations, we introduce In-Context Pure Exploration (ICPE), a novel meta-learning framework capable of autonomously discovering and exploiting latent environmental structures across related tasks to guide efficient exploration. ICPE leverages the in-context learning and sequence-modeling capabilities of transformers, combined with supervised learning and deep reinforcement learning techniques to learn exploration strategies directly from experience. Through extensive experiments on synthetic and semi-synthetic exploration tasks, we demonstrate that ICPE is able to efficiently explore in deterministic, stochastic and highly structured environments without relying on any explicit inductive biases. Our results highlight the potential of ICPE to enable more practical exploration strategies suitable for real-world decision-making contexts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>WhatWhen2Ask: Cost-Aware LLM Querying for Autonomous Robots in Uncertain Environments</title>
<link href="https://hdl.handle.net/1721.1/162960" rel="alternate"/>
<author>
<name>Thirumalai, Vittal</name>
</author>
<id>https://hdl.handle.net/1721.1/162960</id>
<updated>2025-10-07T04:13:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">WhatWhen2Ask: Cost-Aware LLM Querying for Autonomous Robots in Uncertain Environments
Thirumalai, Vittal
Autonomous agents operating in real-world environments must make decisions under uncertainty, facing challenges such as partial observability, sparse rewards, and long-horizon planning. While reinforcement learning (RL) enables agents to learn from experience, standard policies often struggle to generalize in the presence of ambiguous tasks or incomplete information. Large language models (LLMs) can provide valuable semantic guidance, but their high computational cost and latency make constant querying impractical. This thesis introduces WhatWhen2Ask, a framework for cost-aware, confidence-driven querying of external multimodal large language models (MLLMs). The agent employs a Deep Q-Network (DQN) as its internal action planner, selectively querying open- and closed-source models (BLIP-2 and GPT-4o) in a hierarchical manner when its confidence is low and external guidance is likely to improve performance. Accepted hints are embedded and fused with structured state representations, supported by tailored reward shaping for improved learning in sparse environments. Evaluated in the HomeGrid environment, WhatWhen2Ask improves the success rate from 38% (DQN-only) to 54%, while querying in fewer than 6% of steps. Ablation studies show that semantic hints, confidence-based querying, selective hint filtering, and hierarchical fallback each contribute meaningfully to performance. These results suggest that principled, confidence-aware LLM querying can enhance decision-making in uncertain environments, offering a step toward more efficient and cost-aware language-augmented agents.
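
A sketch of the confidence gate described above, assuming confidence is read off the gap between the top two Q-values; the margin and query interface are hypothetical stand-ins for the framework's actual mechanism.

    import numpy as np

    def act(q_values, state, query_llm, margin=0.05):
        top_two = np.sort(q_values)[-2:]
        if top_two[1] - top_two[0] >= margin:  # confident: act on the DQN alone
            return int(np.argmax(q_values))
        return query_llm(state)                # low confidence: ask for a hint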
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detecting Errors in Financial Data: A Multi-Agent LLM and Synthetic Data Approach</title>
<link href="https://hdl.handle.net/1721.1/162958" rel="alternate"/>
<author>
<name>Liu, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/162958</id>
<updated>2025-10-07T04:13:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Detecting Errors in Financial Data: A Multi-Agent LLM and Synthetic Data Approach
Liu, Katherine
With the high volume of activity flowing through financial institutions, detecting potential errors remains a critical challenge. This paper addresses two key areas where errors may occur: business name registrations and transactions within valid accounts. Traditional string-matching methods struggle to accurately identify incorrectly written business names that closely resemble existing ones, while existing error detection models for transaction data often suffer from class imbalance, leading to reduced performance on minority incorrect transaction cases. To address these issues, this paper proposes two novel approaches. First, a hybrid method integrating multi-agent Large Language Models (LLMs) with existing string-matching techniques enhances the detection of incorrect business names by capturing subtle variations beyond conventional edit-distance metrics, improving the recall from 0.815 for the baseline model to 0.987 using the proposed method. Second, an improved tabular data generation method for credit card transactions is introduced, leveraging LLMs and class balancing to generate high-quality synthetic data. Using this data to train error detection systems results in a decrease of the false negative rate from 23.47% to 12.84%. Together, these methods enhance the performance of error detection systems, enabling financial institutions to enhance the experiences of their clients.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Switching State Space Modeling via Constrained Inference for Clinical Outcome Prediction</title>
<link href="https://hdl.handle.net/1721.1/162957" rel="alternate"/>
<author>
<name>Su, Arnold C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162957</id>
<updated>2025-10-07T04:14:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Switching State Space Modeling via Constrained Inference&#13;
for Clinical Outcome Prediction
Su, Arnold C.
In clinical settings, timely and accurate prediction of adverse patient outcomes can help guide treatment decisions. While deep learning models such as LSTMs have demonstrated strong predictive performance on multivariate clinical time series, they often lack interpretability. To address this gap, this thesis proposes a framework that combines the predictive strength of neural networks with the interpretability of latent variable models. Specifically, we develop a constrained inference approach to train a switching state space model—an autoregressive hidden Markov model (AR-HMM)—for outcome prediction. Our method leverages knowledge distillation: a high-capacity LSTM "teacher" model is first trained to predict a target clinical outcome of interest, and its predictive behavior is then transferred to an interpretable AR-HMM "student" model through a similarity constraint during inference. We implement a constrained variational inference approach to estimate the parameters of the student model while aligning its latent representations with those of the teacher model. We evaluated our approach using two real-world clinical datasets. Our approach demonstrates predictive performance comparable to state-of-the-art deep learning models, while producing interpretable latent trajectories that reflect clinically meaningful patient states.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Duality, Weight Decay, and Metrized Deep Learning</title>
<link href="https://hdl.handle.net/1721.1/162956" rel="alternate"/>
<author>
<name>Newhouse, Laker</name>
</author>
<id>https://hdl.handle.net/1721.1/162956</id>
<updated>2025-10-07T04:14:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Duality, Weight Decay, and Metrized Deep Learning
Newhouse, Laker
The Muon optimizer has shown convincing evidence that it is faster and more scalable than AdamW for deep learning training, setting speed records for training NanoGPT and scaling up to models with 16B parameters. The theory that led to Muon is called metrized deep learning, a method that suggests assigning norms to each part of a neural network. Chapter 1 begins with an accessible explanation of metrized deep learning, including one of its recurring tools: odd polynomial iterations that act directly on singular values. Chapter 2 reviews duality, a way to modify the gradient that seeks to decrease the loss the most while disturbing the model the least. Pedagogically, duality links four popular optimizers—SGD, Adam, Shampoo, and Muon—under a common framework, steepest descent under a norm. Practically, experiments suggest that duality-based optimizers train faster than AdamW and transfer learning rate across width. Chapter 3 develops tools to enforce weight norm constraints during training, conferring provable and upfront Lipschitz guarantees for transformers. We find that optimizer dynamics matter: switching from AdamW to Muon improves standard weight regularization methods—weight decay and spectral normalization—allowing models to reach equal performance with a lower Lipschitz bound. Leveraging the fact that Muon’s update has a fixed spectral norm, we co-design a weight constraint method called spectral cap that improves the Lipschitz vs. performance tradeoff for MLPs and 2M-parameter transformers. Our 4-Lipschitz transformer on Shakespeare text reaches 60% validation accuracy. Scaling to 145M parameters, our 600-Lipschitz transformer reaches 21% accuracy on internet text. However, to match the NanoGPT baseline validation accuracy of 39.4%, our Lipschitz upper bound increases to 10^274. Nonetheless, our Lipschitz transformers train without stability measures such as layer norm, QK norm, and tanh logit softcapping.
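
As a concrete instance of the odd polynomial iterations mentioned above, a sketch of the classic cubic Newton-Schulz step (Muon itself uses tuned quintic coefficients); each step pushes the singular values of X toward 1 without forming an SVD.

    import numpy as np

    def orthogonalize(G, steps=5):
        X = G / (np.linalg.norm(G) + 1e-8)  # Frobenius scaling bounds singular values by 1
        for _ in range(steps):
            # odd polynomial p(x) = 1.5x - 0.5x^3 applied to the singular values
            X = 1.5 * X - 0.5 * (X @ X.T @ X)
        return X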
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Sequence Uncertainty in Comparative Genomics with a Probabilistic DNA Representation</title>
<link href="https://hdl.handle.net/1721.1/162955" rel="alternate"/>
<author>
<name>Zhao, Sarah Ann</name>
</author>
<id>https://hdl.handle.net/1721.1/162955</id>
<updated>2025-10-07T04:13:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modeling Sequence Uncertainty in Comparative Genomics&#13;
with a Probabilistic DNA Representation
Zhao, Sarah Ann
Uncertainty in nucleotide sequences is widespread in bioinformatics, arising from somatic mutations, population-level variation, sequencing errors, and ancestral state inference. Yet, standard formats like FASTA encode DNA deterministically using ASCII string characters, omitting this uncertainty and contributing to pervasive reference biases in genomics. Graph pangenomes have recently emerged to address these limitations by representing genetic variation across populations as bidirected graphs. While promising, these approaches are still developing and are not yet fully integrated with widely used linearly-referenced genomic tools and databases. To bridge this gap, I introduce pDNA (probabilistic DNA), a linearly-referenced data structure that encodes nucleotide-level uncertainty in a vector format compatible with traditional genomics workflows. Each position in a pDNA sequence is represented as a 4-dimensional probability vector over the four possible DNA nucleotides, inspired by position weight matrices and one-hot encodings. I also introduce pFASTA, a binary file format for efficient storage of pDNA sequences, along with an open-source software package for generating, manipulating, and analyzing these data. This framework enables uncertainty-aware sequence analysis while maintaining compatibility with existing genomics infrastructure. I apply this framework to ancestral sequence reconstruction.
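
A toy illustration of the per-position encoding; the function name and layout are illustrative, not the released package's API.

    import numpy as np

    BASES = "ACGT"

    def one_hot_pdna(seq):
        # deterministic FASTA letters become one-hot probability vectors;
        # an uncertain call could instead spread mass across several bases
        p = np.zeros((len(seq), 4))
        for i, base in enumerate(seq.upper()):
            p[i, BASES.index(base)] = 1.0
        return p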
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Online Acquisition of Simulatable Rigid Object Models</title>
<link href="https://hdl.handle.net/1721.1/162954" rel="alternate"/>
<author>
<name>Yang, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/162954</id>
<updated>2025-10-07T04:14:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Online Acquisition of Simulatable Rigid Object Models
Yang, Ethan
How can we build a robot that operates autonomously in a home environment over long periods of time? A key requirement is the ability to perceive and understand its surroundings, including the objects it will interact with. This thesis investigates how a robot can reconstruct previously unknown objects and integrate them into a physics simulation for planning. We explore two methods for reconstructing the 3D geometry of objects and test their performance in simulation and in real-world experiments. Our results demonstrate that a learned depth model enables 3D reconstruction of unknown objects and their successful integration into simulation environments. Additionally, we investigate methods for estimating an object’s inertial parameters, using its reconstructed mesh and through manipulation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling contrastive learning batch size by two orders of magnitude</title>
<link href="https://hdl.handle.net/1721.1/162953" rel="alternate"/>
<author>
<name>Tian, Betsy</name>
</author>
<id>https://hdl.handle.net/1721.1/162953</id>
<updated>2026-01-16T20:14:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Scaling contrastive learning batch size by two orders of magnitude
Tian, Betsy
Contrastive learning has emerged as a powerful framework for unsupervised representation learning, allowing models to learn by maximizing agreement between related samples and distinguishing dissimilar ones. However, contrastive learning frameworks are fundamentally limited by the number of negative pairs a model can observe, and memory-intensive backbones constrain practical batch sizes. We introduce a three-phase, adapter-augmented training framework that scales contrastive batch sizes by two orders of magnitude – surpassing previous state-of-the-art learners in both accuracy and speed. First, we co-train the backbone and adapter on small batches to establish a strong initialization. Next, we freeze the backbone and train the adapter alone with very large batches, exposing it to an enlarged negative pool. Finally, we transfer large-batch adapter gradients back into the backbone via segmented backpropagation. We evaluate our method on the PlacesAudio dataset and show promising results for boosting retrieval performance at each phase. By exposing the model to substantially more negatives per effective batch, we achieve higher accuracy at a faster speed than optimizer-stepping baselines. Ultimately, this approach that scales batch size by hundreds of times can be integrated into any contrastive learning framework for more robust representation learning and abundant negative sampling.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Fully Automated Volumetric Analysis of Lung Nodules in Computed Tomography</title>
<link href="https://hdl.handle.net/1721.1/162952" rel="alternate"/>
<author>
<name>Rubel, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/162952</id>
<updated>2025-10-07T04:14:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Fully Automated Volumetric Analysis of Lung Nodules in Computed Tomography
Rubel, Evan
Early detection of lung cancer significantly improves patient outcomes, and tracking the growth of lung nodules over time is key to understanding their progression and informing future treatment decisions. However, calculating nodule growth in computed tomography (CT) scans remains a highly manual and time-consuming task. In this work, we develop an automated end-to-end pipeline to compute lung nodule growth using state-of-the-art computer vision techniques. While modern advances in deep learning have all but solved many learning tasks in the domain of natural images, biomedical imaging presents unique challenges due to limited data availability, inconsistent annotations, and deployment constraints. We address these challenges by training robust detection and segmentation models using the LUNA16 and LNDb datasets. On the held-out UniToChest dataset, our methods generalize well, attaining a nodule recall of 77.49%, reducing false positives per scan by a factor of 11.3 compared to existing techniques, and achieving a mean nodule-wise Dice score of 0.6453. We then apply our methods to analyze nodule growth in 1,378 patients from the National Lung Screening Trial; we estimate a median nodule volume-doubling time of 791.23 days across all nodules from the patients that do not receive a cancer diagnosis and a median nodule volume-doubling time of 637.38 days across all nodules from the patients that do receive a cancer diagnosis. We also recall 82.20% of radiologist-annotated nodules that are directly associated with a cancer diagnosis and estimate a shorter median nodule volume-doubling time of 370.11 days for these nodules. By automating lung nodule growth quantification, this work lays the foundation for improved screening protocols, personalized treatment planning, and the development of novel imaging biomarkers. To encourage further work in this area, we release our full software pipeline at https://github.com/evanrubel/nodule_volumes.
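
The doubling-time figures above are presumably computed with the standard exponential-growth formula; a one-line sketch under that assumption:

    import math

    def volume_doubling_time(v1_mm3, v2_mm3, dt_days):
        # VDT = dt * ln(2) / ln(V2 / V1), assuming exponential nodule growth
        return dt_days * math.log(2) / math.log(v2_mm3 / v1_mm3)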
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Analysis of an 80 GHz Hybrid CMOS Dielectric Resonator Oscillator</title>
<link href="https://hdl.handle.net/1721.1/162951" rel="alternate"/>
<author>
<name>Louie, Tiffany</name>
</author>
<id>https://hdl.handle.net/1721.1/162951</id>
<updated>2025-10-07T04:13:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Analysis of a 80 GHz Hybrid CMOS Dielectric Resonator Oscillator
Louie, Tiffany
This work studies a high-frequency, low phase noise, hybrid CMOS oscillator based on a cylindrical dielectric resonator coupled directly to an on-chip structure. Dielectric resonators (DRs) are known for their high quality factor, low cost, and high temperature stability, which make them a desirable frequency-selecting element for millimeter-wave (mmWave) applications. Current dielectric resonator oscillators (DROs) have proven to be phase stable, but are limited in frequency (&lt; 40 GHz) due to their implementation with discrete components. However, by increasing the operational frequency into the mmWave range, it is possible to reduce the size of the DR and place it directly on top of a CMOS chip. We demonstrate, using a 22nm FD-SOI process, the design of an 80 GHz DRO with an area of 4 mm² and an oscillator power consumption of 1.95 mW. The DRO achieves a simulated phase noise of -128 dBc/Hz at 1 MHz and -148 dBc/Hz at 10 MHz.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LEO: an LLM-Powered EDA Overview</title>
<link href="https://hdl.handle.net/1721.1/162950" rel="alternate"/>
<author>
<name>Zheng, Sophia</name>
</author>
<id>https://hdl.handle.net/1721.1/162950</id>
<updated>2025-10-07T04:13:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">LEO: an LLM-Powered EDA Overview
Zheng, Sophia
Computational notebooks impose a linear structure that impedes data analysts’ sensemaking process with overwritten cells, dead-end code, and fragmented logic. This challenge is especially pronounced when analysts either encounter a notebook authored by someone else or revisit a self-authored notebook after significant time has passed. In both cases, understanding the analysis code becomes convoluted and laborious. To address these barriers, we introduce LEO, a computational notebook tool that operationalizes notebook summarization by leveraging large language models to (1) cluster analysis patterns and (2) trace variable use. LEO organizes code into a two-level hierarchy of General Level Sections and Code Level Actions, integrated with in-line textual summaries filtered at the variable level, further supporting task-driven exploration. We evaluate the system’s effectiveness in a user study with five computational notebook users across two realistic use cases. Participants reported that LEO streamlined code comprehension and navigation of undocumented notebooks by allowing them to query variables and traverse code cells with greater ease.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Articulated 3D Scene Graphs from Egocentric Vision</title>
<link href="https://hdl.handle.net/1721.1/162949" rel="alternate"/>
<author>
<name>Yu, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/162949</id>
<updated>2025-10-07T04:13:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Articulated 3D Scene Graphs from Egocentric Vision
Yu, Alan
Robotic mapping systems typically build metric-semantic scene representations from the robot’s own sensors and cameras. However, these “first person” maps inherit the limitations of the robot’s embodiment and skillset, which may leave many aspects of the environment unexplored. For example, the robot might not be able to open drawers or access wall cabinets. In this sense, the scene graph is incomplete and requires a more capable robot to fill in the gaps by remapping. We narrow these blind spots in current methods by leveraging egocentric data captured as a human naturally explores a scene wearing Project Aria glasses, giving a way to directly transfer knowledge about articulation from the human to any deployable robot. We demonstrate that, by using simple heuristics, we can leverage egocentric data to recover models of articulated object parts, with quality comparable to those of state-of-the-art methods based on other input modalities. We also show how to integrate these models into 3D scene graph representations, leading to a better understanding of object dynamics and object-container relationships. We finally demonstrate that these articulated 3D scene graphs enhance a robot’s ability to perform mobile manipulation tasks, showcasing an application where a Boston Dynamics Spot is tasked with retrieving concealed target items, given only the 3D scene graph as input.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralized Declustering of Multiple Underactuated Autonomous Surface Vehicles</title>
<link href="https://hdl.handle.net/1721.1/162948" rel="alternate"/>
<author>
<name>Strømstad, Filip Traasdahl</name>
</author>
<id>https://hdl.handle.net/1721.1/162948</id>
<updated>2025-10-07T04:13:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decentralized Declustering of Multiple Underactuated Autonomous Surface Vehicles
Strømstad, Filip Traasdahl
Multi-agent systems have seen a significant rise in research interest, enabled by the increasing availability of low-cost autonomous platforms and motivated by a wide range of emerging applications. However, the coordinated deployment of large numbers of autonomous vehicles in marine environments remains a nontrivial and high-risk problem, yet it is often overlooked in the literature. These vehicles are typically deployed from a single location, and their underactuated nature, close proximity, and susceptibility to external disturbances make it difficult to achieve a mission-ready configuration without collisions. In this thesis, we address the problem of transitioning a set of underactuated Autonomous Surface Vehicles (ASVs) from arbitrary and inconvenient initial conditions to a deconflicted set of deployed vehicles. We propose a decentralized and scalable method that calculates and assigns target positions to the vehicles, generates optimal paths that comply with minimum turning radius constraints, and ensures collision avoidance between the vehicles through a shared speed policy. Contributions also include a formal definition and quantification of clustering and declustering in multi-agent systems. The approach is implemented using the MOOS-IvP autonomy framework, and performance is evaluated through simulation with up to 64 vehicles and extensive field trials with eight vehicles. Results demonstrate that our approach reduces the time to decluster for the most challenging initial conditions by 50% compared to the current manual method. By improving efficiency and robustness while eliminating human involvement, this work streamlines ASV fleet deployments, enabling more scalable multi-agent field operations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DBOS Advanced Network Analysis Capability for Collaborative Awareness</title>
<link href="https://hdl.handle.net/1721.1/162947" rel="alternate"/>
<author>
<name>Lockton, Sophia E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162947</id>
<updated>2025-10-07T04:13:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">DBOS Advanced Network Analysis Capability for Collaborative Awareness
Lockton, Sophia E.
Collaborative cyber defense is an essential strategy for detecting and mitigating cyber threats [1]. As traditional intrusion detection systems struggle against increasingly sophisticated attacks, we propose embedding collaborative cyber defense directly into system infrastructure. This work presents a novel implementation of collaborative awareness within DBOS (a Database-Oriented Operating System), resulting in a platform that significantly accelerates application development while providing built-in security for transactional web services. By treating security as a first-class operating system service, our approach facilitates real-time comprehensive network observation and analysis without the need for external tools. The implementation supports the construction, aggregation, and analysis of traffic matrices using both Python and PostgreSQL-based workflows. These workflows extract and process IP-level metadata from DBOS applications, enabling multi-instance aggregation and analysis of network data. This integration represents the first instance of collaborative network analysis within an operating system runtime, demonstrating that secure-by-default infrastructure is both feasible and performant.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minding the Politeness Gap in Cross-cultural Communication</title>
<link href="https://hdl.handle.net/1721.1/162946" rel="alternate"/>
<author>
<name>Machino, Yuka</name>
</author>
<id>https://hdl.handle.net/1721.1/162946</id>
<updated>2025-10-07T04:13:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Minding the Politeness Gap in Cross-cultural Communication
Machino, Yuka
Misunderstandings in cross-cultural communication often arise from subtle differences in interpretation, but it is unclear whether these differences arise from the literal meanings assigned to words or from more general pragmatic factors such as norms around politeness and brevity. In this paper, we report three experiments examining how speakers of British and American English interpret intensifiers like “quite” and “very,” finding support for a combination of semantic and pragmatic factors. To better understand these differences, we developed a computational cognitive model where listeners recursively reason about speakers who balance informativity, politeness, and utterance cost. A series of model comparisons suggest that cross-cultural differences in intensifier interpretation stem from (1) different literal meanings and (2) different weights on utterance cost. These findings challenge accounts based purely on semantic variation or politeness norms, demonstrating that cross-cultural differences in interpretation emerge from an intricate interplay between the two.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empirical Analysis of Neural Architectures and Side Information in Financial Time Series Forecasting</title>
<link href="https://hdl.handle.net/1721.1/162945" rel="alternate"/>
<author>
<name>Senthil, Swathi</name>
</author>
<id>https://hdl.handle.net/1721.1/162945</id>
<updated>2025-10-07T04:12:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Empirical Analysis of Neural Architectures and Side&#13;
Information in Financial Time Series Forecasting
Senthil, Swathi
This thesis investigates the predictive capabilities of neural networks in financial time series forecasting, focusing on predicting the weekly close price of the SPY index. We explore the integration of options-derived features alongside traditional price data, compare recurrent architectures and transformer-based models, and evaluate multiple training strategies. Our key contributions include: (1) evidence that options-derived input features improve both error metrics and directional accuracy; (2) a comparison study of four training methods (one-step-ahead, direct multi-step, simulation error, and teacher-forcing); (3) the development of a bidirectional GRU-LSTM hybrid model that outperforms standard recurrent networks in multi-step forecasting; and (4) a novel coarse tokenization approach for discretizing continuous financial data, which improves first-week prediction performance when used in transformer models that use an asymmetric attention mechanism. Overall, this thesis illustrates the importance of input design, model architecture, and training methodology in neural financial forecasting. We conclude by outlining directions for future work, including cross-asset generalization and further exploration of tokenization schemes for transformer-based models.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating LLM Hallucination in the Banking Domain</title>
<link href="https://hdl.handle.net/1721.1/162944" rel="alternate"/>
<author>
<name>Sert, Deniz Bilge</name>
</author>
<id>https://hdl.handle.net/1721.1/162944</id>
<updated>2025-10-07T04:13:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mitigating LLM Hallucination in the Banking Domain
Sert, Deniz Bilge
Large Language Models (LLMs) offer significant potential in the banking sector, particularly for applications such as fraud detection, credit approval, and enhancing customer experience. However, their tendency to "hallucinate"—generating plausible but inaccurate information—poses a critical challenge. This thesis examines existing strategies for mitigating LLM hallucinations and proposes a novel approach to reduce hallucinations in the context of predicting customer churn using LLMs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Layered Unlearning for Adversarial Relearning</title>
<link href="https://hdl.handle.net/1721.1/162943" rel="alternate"/>
<author>
<name>Qian, Timothy</name>
</author>
<id>https://hdl.handle.net/1721.1/162943</id>
<updated>2025-10-07T04:13:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Layered Unlearning for Adversarial Relearning
Qian, Timothy
Our goal is to understand how post-training methods, such as fine-tuning, alignment, and unlearning, modify language model behavior and representations. We are particularly interested in the brittle nature of these modifications that makes them easy to bypass through prompt engineering or relearning. Recent results suggest that post-training induces shallow, context-dependent “circuits” that suppress specific response patterns. This could be one explanation for the brittleness of post-training. To test this hypothesis, we design an unlearning algorithm, Layered Unlearning (LU), that creates distinct inhibitory mechanisms for a growing subset of the data. By unlearning the first i folds while retaining the remaining k − i at the i-th of k stages, LU limits the ability of relearning on a subset of data to recover the full dataset. We evaluate LU through a combination of synthetic and large language model (LLM) experiments. We find that LU improves robustness to adversarial relearning for several different unlearning methods. Our results contribute to the state of the art of machine unlearning and provide insight into the effect of post-training updates.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mixed-Variable Bayesian Optimization using Prior-Data Fitted Networks</title>
<link href="https://hdl.handle.net/1721.1/162942" rel="alternate"/>
<author>
<name>Qian, Janet</name>
</author>
<id>https://hdl.handle.net/1721.1/162942</id>
<updated>2025-10-07T04:13:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mixed-Variable Bayesian Optimization using Prior-Data Fitted Networks
Qian, Janet
Bayesian optimization (BO) is a powerful framework for optimizing expensive black-box functions, widely used in domains such as materials science, engineering design, and hyperparameter tuning. Traditional BO relies on Gaussian processes (GPs) as surrogate models, but GPs face limitations in flexibility and scalability. Prior-Data Fitted Networks (PFNs) have recently emerged as a promising alternative, leveraging transformer architectures and in-context learning to approximate posterior predictive distributions (PPDs) in a single forward pass. By training on large amounts of synthetically generated data from sample-able function priors, PFNs can learn to rapidly predict PPDs across a wide range of function classes. In this thesis, we investigate the application of PFNs to mixed-variable BO, a particularly challenging setting due to the interplay between continuous and discrete inputs and the combinatorial complexity of the search space. We evaluate how PFNs perform when integrated with a range of mixed-variable BO strategies, including various encoding schemes and discrete-aware acquisition optimization. Additionally, we explore how fine-tuning PFNs on targeted function priors can enhance performance when prior knowledge about the objective is available. Our contributions include empirical evaluations of mixed-variable BO techniques, insights into PFN training, and a suite of mixed-variable benchmark problems.
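
A sketch of one acquisition step with a PFN surrogate, assuming a hypothetical `pfn_predict` wrapper that passes (X, y, candidates) through the transformer in-context and returns posterior means and standard deviations in a single forward pass; discrete dimensions are assumed to be already encoded (e.g., one-hot) into the candidate matrix:

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, best):
        # Closed-form expected improvement for minimization.
        z = (best - mu) / np.maximum(sigma, 1e-9)
        return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    def bo_step(X, y, candidates, pfn_predict):
        mu, sigma = pfn_predict(X, y, candidates)    # one forward pass
        return candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]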
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Eliminating Hallucination-Induced Errors in Code Generation with Functional Clustering</title>
<link href="https://hdl.handle.net/1721.1/162941" rel="alternate"/>
<author>
<name>Ravuri, Chaitanya</name>
</author>
<id>https://hdl.handle.net/1721.1/162941</id>
<updated>2025-10-07T04:13:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Eliminating Hallucination-Induced Errors in Code Generation with Functional Clustering
Ravuri, Chaitanya
Modern code-generation LLMs can already solve a large fraction of programming problems, yet they still hallucinate subtle bugs that make their outputs unsafe for autonomous deployment. We present functional clustering, a black-box wrapper that eliminates nearly all hallucination-induced errors while providing a tunable confidence score. The wrapper samples many candidate programs, executes each on a self-generated test suite, and clusters candidates whose I/O behavior is identical; the empirical mass of the largest cluster serves as an exact confidence estimate. A single scalar threshold on this estimate lets users trade coverage for reliability with exponential guarantees. On LiveCodeBench our verifier preserves baseline pass@1 on solvable tasks yet slashes the error rate of returned answers from ∼65% to 2%, and drives it to 0% at a conservative threshold while still answering 15.6% of prompts. Manual audits show that the few residual mistakes stem from prompt misinterpretation, not random generation noise, narrowing future work to specification clarity. Because the method requires only sampling and sandbox execution, it applies unchanged to closed-source APIs and future models, offering a practical path toward dependable, autonomous code generation.
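
A compact sketch of the wrapper, with `run(prog, test)` as a hypothetical sandboxed executor and `tau` the user-chosen threshold:

    from collections import defaultdict

    def select_by_functional_clustering(candidates, tests, run, tau):
        clusters = defaultdict(list)
        for prog in candidates:
            behavior = tuple(run(prog, t) for t in tests)  # I/O fingerprint
            clusters[behavior].append(prog)
        largest = max(clusters.values(), key=len)
        confidence = len(largest) / len(candidates)        # empirical cluster mass
        if confidence &gt;= tau:
            return largest[0], confidence   # any member of the cluster works
        return None, confidence             # abstain below the threshold

Raising `tau` answers fewer prompts but makes returned answers more reliable, which is the coverage/reliability dial the abstract describes.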
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Choosing Networks for Ride-Hailing Platforms</title>
<link href="https://hdl.handle.net/1721.1/162940" rel="alternate"/>
<author>
<name>Somsirivattana, Thana</name>
</author>
<id>https://hdl.handle.net/1721.1/162940</id>
<updated>2025-10-07T04:13:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Choosing Networks for Ride-Hailing Platforms
Somsirivattana, Thana
The development of autonomous vehicles is poised to reshape the landscape of transportation. As companies prepare to deploy these vehicles on ride-hailing platforms, a key operational challenge is determining the networks on which to train the vehicles. Our work contributes toward addressing this challenge on three fronts. First, we develop a theoretical model of the network selection problem and prove theoretical results that show the importance of two parameters: the detour factor and the fleet size. Second, we develop several approaches for selecting the networks. Third, we evaluate these approaches on empirical data. We find empirical support for the importance of the detour factor and the fleet size.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PyGridSim: A Functional Interface for Distributed System Simulation</title>
<link href="https://hdl.handle.net/1721.1/162939" rel="alternate"/>
<author>
<name>Zhao, Angela M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162939</id>
<updated>2025-12-11T16:38:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">PyGridSim: A Functional Interface for Distributed System Simulation
Zhao, Angela M.
This thesis details the development of PyGridSim, an open-source Python module that leverages OpenDSS capabilities to provide an efficient and scalable functional interface for building distributed system simulations. Distributed power systems encompass all components that power an electrical system—from larger power plants to microgrids—and represent the network of electric consumption and production in a system. Simulations of such power systems allow experts to analyze potential faults and risks in a fast, reproducible, and cost-efficient way. Thus, the accessibility of such simulations is critical to supporting the safety and reliability of power systems. While existing packages built for distributed system simulation provide the necessary computing power and customizability of a distributed system simulator, their interfaces are hard to scale over many nodes and often have difficult-to-learn syntax. PyGridSim aims to build on these existing modules—maintaining customizability while providing a flexible, intuitive, and scalable syntax structure.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving the Programmability of A Distributed Hardware Accelerator</title>
<link href="https://hdl.handle.net/1721.1/162938" rel="alternate"/>
<author>
<name>Shwatal, Nathan A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162938</id>
<updated>2025-10-07T04:13:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Improving the Programmability of A Distributed Hardware Accelerator
Shwatal, Nathan A.
Sparse iterative matrix algorithms are critical to many scientific and engineering workloads, yet they perform poorly on conventional hardware. Ōmeteōtl, a new hardware accelerator with a distributed-memory and task-based execution model, aims to address these performance bottlenecks. However, programming for Ōmeteōtl is low-level, error-prone, and far removed from the simplicity of typical iterative formulations. This thesis presents Lapis, a domain-specific language and compiler that allows users to express sparse matrix algorithms in high-level Python code and automatically generates efficient C++ code for Ōmeteōtl. Lapis abstracts away data partitioning and task orchestration, reducing implementation complexity: for example, it lowers lines of code by 30× for conjugate gradients and 46× for power iteration. Despite this abstraction, generated code achieves 75.7% to 92.6% of the performance of manually written implementations across several benchmarks.
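
For context, the kind of concise iterative formulation Lapis reportedly accepts looks like ordinary textbook code; below is plain NumPy conjugate gradients (illustrative only, not actual Lapis syntax):

    import numpy as np

    def conjugate_gradient(A, b, iters=100, tol=1e-8):
        x = np.zeros_like(b)
        r = b - A @ x              # residual
        p = r.copy()
        rs = r @ r
        for _ in range(iters):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) &lt; tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

The 30× line-count reduction quoted above is presumably relative to the hand-written, partitioned low-level code the accelerator otherwise requires, not to serial Python like this.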
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of Thermochemical Non-equilibrium and Sensor Cavity Geometry in Hypersonic Flow</title>
<link href="https://hdl.handle.net/1721.1/162937" rel="alternate"/>
<author>
<name>Mao, Grace</name>
</author>
<id>https://hdl.handle.net/1721.1/162937</id>
<updated>2025-10-07T04:12:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Study of Thermochemical Non-equilibrium and Sensor Cavity Geometry in Hypersonic Flow
Mao, Grace
This work presents a computational investigation of the influence of geometric configurations within a hypersonic flow field on optical distortion, with a particular focus on the effects of window deformation and the role of thermochemical modeling compared to perfect gas assumptions. Turbulent RANS and conjugate heat transfer were used to model three 3D geometries in US3D, an unstructured-grid finite volume computational fluid dynamics (CFD) solver. The three investigated geometries are a flat plate with a flush-mounted sensor, an open cavity with a length-to-depth ratio of 2, and a closed cavity with a length-to-depth ratio of 16. The data demonstrate that the flat plate configuration has the best optical performance and that the closed cavity has the worst. Additionally, the inclusion of thermochemistry in the flow simulation results in a more pessimistic outlook on image quality compared to the perfect gas model. The results document optical distortion for several different geometries with and without thermochemical modeling within hypersonic flow that can inform future design decisions and research.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BlueVeri: Formal Security Verification for Bluespec Processor Designs</title>
<link href="https://hdl.handle.net/1721.1/162936" rel="alternate"/>
<author>
<name>Wang, Shih-Yu</name>
</author>
<id>https://hdl.handle.net/1721.1/162936</id>
<updated>2025-10-07T04:13:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">BlueVeri: Formal Security Verification for Bluespec Processor Designs
Wang, Shih-Yu
There are numerous hardware security defense mechanisms designed to mitigate side-channel attacks. However, ensuring that a defense can comprehensively protect against an entire class of attacks, while avoiding the introduction of new vulnerabilities that could lead to additional attack surfaces, remains a significant challenge. Although researchers have attempted to apply formal verification techniques to hardware security, these efforts have been hindered by scalability issues. In this paper, we introduce BlueVeri, a systematic and automatable approach for formally verifying the security of a Bluespec processor against speculative execution attacks. BlueVeri leverages the high-level information provided by Bluespec’s guarded atomic actions, simplifying and accelerating the verification process. We evaluate BlueVeri on out-of-order processors implemented in Bluespec, demonstrating that our approach substantially enhances verification scalability and is capable of proving the security properties of a minimal out-of-order processor within one hour.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Methods for Churn Prediction and Infrastructure Resilience</title>
<link href="https://hdl.handle.net/1721.1/162934" rel="alternate"/>
<author>
<name>Agrawal, Shreeansh</name>
</author>
<id>https://hdl.handle.net/1721.1/162934</id>
<updated>2025-10-07T04:12:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Machine Learning Methods for Churn Prediction and Infrastructure Resilience
Agrawal, Shreeansh
This thesis investigates how advanced machine learning methods can effectively address two critical business challenges facing the telecommunications industry: short-term customer churn prediction and long-term infrastructure resilience to climate-driven disruptions.&#13;
&#13;
In the first part of this work, I develop an upgrades-informed churn forecasting model tailored specifically for marketing operations. Recognizing limitations in the existing aggregate forecasting methodologies, I create a cohort-based cascade model that explicitly integrates customer upgrade behavior across various contract tenures. To address data sparsity and longitudinal gaps in newer contract types, I employ synthetic data generation and imputation techniques, such as regression-based methods and Multivariate Imputation by Chained Equations (MICE). For forecasting churn and upgrade rates, I prioritize interpretability by applying linear regression enhanced with time-series forecasting techniques and macroeconomic indicators, including the Consumer Price Index. This approach significantly improves forecasting accuracy, aligns internal stakeholder objectives, and supports strategic decision-making around customer retention and promotional offers.&#13;
&#13;
The second part focuses on building predictive models and strategic frameworks for long-term infrastructure resilience in the face of increasing climate risks. Leveraging spatial-temporal clustering methods (DBSCAN) and advanced neural network architectures, I develop a model to attribute historical outages to extreme weather events. Further, I integrate this model with future climate scenarios from CMIP5 projections using Monte Carlo simulations, providing actionable insights into future infrastructure vulnerabilities. Employing SHapley Additive exPlanations (SHAP), I interpret model predictions, highlighting critical factors such as precipitation, windspeed, and atmospheric pressure. Additionally, I propose frameworks for quantifying financial impacts of future outages and recommend optimization strategies for proactive infrastructure hardening and emergency response.&#13;
&#13;
Collectively, these applications demonstrate the value of strategically employing interpretable and robust machine learning methodologies to enhance short-term operational decisions and long-term strategic planning within telecom organizations.
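
A sketch of the space-time clustering step, using scikit-learn's DBSCAN; the column meanings and scaling constants below are illustrative assumptions, not the thesis's calibrated values:

    import numpy as np
    from sklearn.cluster import DBSCAN

    # Assume columns of `outages` are (latitude, longitude, hours_since_epoch).
    outages = np.random.rand(500, 3) * [1.0, 1.0, 720.0]
    features = outages / [0.1, 0.1, 6.0]   # ~0.1 deg and ~6 h count as "nearby"
    labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(features)
    print("candidate weather-event clusters:", labels.max() + 1)

Outages that are dense in space and time form candidate weather-event attributions; points labeled -1 by DBSCAN are treated as non-weather noise.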
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Segmentation Based Tracking for Aerial Robot Global Localization in Unstructured Environments with Oblique Monocular Camera Orientation</title>
<link href="https://hdl.handle.net/1721.1/162933" rel="alternate"/>
<author>
<name>Shafferman, Hannah R.</name>
</author>
<id>https://hdl.handle.net/1721.1/162933</id>
<updated>2025-10-07T04:13:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Segmentation Based Tracking for Aerial Robot Global Localization in Unstructured Environments with Oblique Monocular Camera Orientation
Shafferman, Hannah R.
In the field of robotics, there has been a growing interest in multi-robot systems and their potential to improve the efficiency, scale, and reliability of tasks beyond what an individual robot can achieve. Global localization is a crucial task for autonomous robot navigation, specifically in the multi-agent scenario where robots need to localize within maps communicated by other agents. The scenario where vehicles are viewing their environments from the same perspective, or camera viewpoint, is well studied. However, when environments are mapped from different camera viewing angles, traditional methods fail to match visual features and thus fail to localize. The technical gap that this thesis addresses is when autonomous vehicles within a team are mapping the same environment from different viewpoints, specifically nadir and oblique camera orientations in an unstructured environment. Many existing visual place recognition (VPR) methods fail to match visual features that look visually different due to appearance, illumination, or viewpoint changes and thus fail to localize. In this thesis, we demonstrate the shortcomings of previous work to generalize to an off-nadir camera angle and explore the benefits and challenges that arise with utilizing oblique imagery for visual feature detection and tracking. We propose a segmentation-based object tracking pipeline to improve tracking and environment mapping performance in this traditionally challenging scenario. Our approach consists of 1) a front-end auto-segmentation tracking pipeline followed by 2) a submap correspondence search, which exploits geometric consistencies between environment maps to align vehicle reference frames. We evaluate our approach on a challenging indoor, cluttered dataset and demonstrate a maximum precision 74% higher than traditional and learning-based baseline methods, with a map size 0.5% the size of the most memory-conservative traditional baseline method.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Aerocapture Guidance and Estimation Framework for Improved Robustness to Uncertainty</title>
<link href="https://hdl.handle.net/1721.1/162932" rel="alternate"/>
<author>
<name>Sonandres, Kyle A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162932</id>
<updated>2025-10-07T04:13:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Aerocapture Guidance and Estimation Framework for Improved Robustness to Uncertainty
Sonandres, Kyle A.
Aerocapture is an orbital insertion maneuver that converts a hyperbolic approach trajectory into a desired captured orbit using the aerodynamic forces generated during a single atmospheric pass. While it offers major benefits, such as reduced interplanetary cruise time and lower propellant mass reserves, it also introduces significant risk due to extreme sensitivity to atmospheric and delivery state uncertainties. This drives the need for robust guidance algorithms and accurate environmental estimation techniques. This thesis presents approaches to address both of these needs, developing solutions to improve aerocapture performance and robustness to uncertainty. The first contribution is the development of ABAMGuid+, a novel aerocapture guidance algorithm that leverages simultaneous control over bank angle and angle of attack. Inspired by optimal control theory, the algorithm uses a four-phase structure to mimic the optimal control laws while maintaining tractability for online use. Optimal control theory is utilized to identify the optimal control solutions, and numerical optimization is used to validate the analytic solutions prior to integration into a guidance algorithm. Extensive simulation results of a Uranus aerocapture scenario, including over 140,000 Monte Carlo trajectories, demonstrate significant improvements in capture success rates and propellant efficiency compared to existing methods. The second contribution addresses environmental uncertainty directly by developing a deep learning-based approach to estimate the atmospheric density profile during flight. A long short-term memory (LSTM) neural network-based architecture is trained to predict atmospheric density given sequences of flight data. The trained model is integrated into the guidance loop and a curriculum learning process is used to refine in-flight performance. Monte Carlo results show that the LSTM-augmented guidance system reduces propellant usage compared to traditional estimation methods. In summary, this thesis presents two approaches that improve aerocapture performance and robustness to uncertainty. We show that this added robustness can be achieved both by expanding algorithmic ability and by improving environmental estimation approaches.
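
A minimal PyTorch sketch of a density-estimation network in the spirit of the abstract; the input features and layer sizes are illustrative assumptions:

    import torch
    import torch.nn as nn

    class DensityLSTM(nn.Module):
        # Map a window of flight data (e.g., sensed acceleration, dynamic
        # pressure) to an estimate of the current atmospheric density.
        def __init__(self, n_features=4, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, seq):               # seq: (batch, time, n_features)
            out, _ = self.lstm(seq)
            return self.head(out[:, -1, :])   # density at the final time step

    rho = DensityLSTM()(torch.randn(8, 50, 4))
    print(rho.shape)                          # torch.Size([8, 1])

Inside the guidance loop, such an estimate can stand in for a static atmosphere model between guidance updates.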
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mass and Distance Estimation Simulations for the Nancy Grace Roman Space Telescope Using PyLIMASS and a Case Study on Intellectual Property Frameworks in Space Collaborations</title>
<link href="https://hdl.handle.net/1721.1/162931" rel="alternate"/>
<author>
<name>McGee, Carissma</name>
</author>
<id>https://hdl.handle.net/1721.1/162931</id>
<updated>2025-12-10T00:31:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mass and Distance Estimation Simulations for the Nancy Grace Roman Space Telescope Using PyLIMASS and a Case Study on Intellectual Property Frameworks in Space Collaborations
McGee, Carissma
Gravitational microlensing is a phenomenon in which a foreground star or planet briefly magnifies light from a more distant background star. This effect enables the discovery of exoplanets that are otherwise undetectable, including those orbiting faint hosts and at large separations. Microlensing is well suited to characterizing exoplanets beyond the snow line, revealing mass ratios and orbital geometries inaccessible to transit or radial velocity methods. The Nancy Grace Roman Space Telescope will carry out the Galactic Exoplanet Survey to detect thousands of microlensing events with the cadence and precision necessary for statistical exoplanet population studies. To verify Roman’s ability to meet its core science requirement (recovering the lens mass and distance in at least 40% of planetary events with better than 20% uncertainty), targeted simulations are essential. Using the pyLIMASS inference framework and Fisher matrix-based uncertainty propagation, I demonstrate that for the well-characterized event OGLE-2013-BLG-0132Lb, the lens mass can be constrained to within 18.7% uncertainty, validating the feasibility of Roman’s requirement on a case-study basis. This thesis also addresses the legal and policy foundations needed to ensure global access to these simulation tools. By advancing open-source software models and proposing a space IP framework for equitable knowledge sharing, it supports collaborative scientific infrastructure for future international space missions.
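
A generic statement of the Fisher-matrix step in Python (textbook form, for illustration; pyLIMASS's internals may differ):

    import numpy as np

    def fisher_uncertainties(jacobian, sigma_obs):
        # jacobian[i, j]: derivative of observable i w.r.t. model parameter j.
        # sigma_obs[i]: 1-sigma measurement uncertainty of observable i.
        W = np.diag(1.0 / sigma_obs**2)   # inverse observation covariance
        F = jacobian.T @ W @ jacobian     # Fisher information matrix
        cov = np.linalg.inv(F)            # Cramer-Rao bound on parameter covariance
        return np.sqrt(np.diag(cov))      # 1-sigma parameter uncertainties

The 18.7% mass constraint quoted above corresponds to the lens-mass entry of such a propagated uncertainty vector.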
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combined Steam Power Cycle and Turbofan Engine for Improvement in Aviation Climate Impacts</title>
<link href="https://hdl.handle.net/1721.1/162930" rel="alternate"/>
<author>
<name>Mueller, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/162930</id>
<updated>2025-10-07T04:13:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Combined Steam Power Cycle and Turbofan Engine for Improvement in Aviation Climate Impacts
Mueller, Anna
Despite significant innovations in aviation technology over the last 70 years resulting in enormous efficiency improvement, the rising demand for air travel means that aviation carbon emissions continue to increase each year. The rate of improvement to aircraft propulsion engines is diminishing and additional improvements often add significant engine cost or weight. With the goal of reducing aviation’s contribution to global climate change, future aircraft engine designers must consider concepts that stray from the traditional turbofan engine. In this thesis, I develop an engine cycle model combining the turbofan engine with a steam power cycle and use the model to explore the benefits of applying this concept to aircraft engines. In order to study the impact on engine performance and emissions of adding a steam cycle, the engine model needs to be capable of representing the water phase changes and the heat exchangers required to drive those phase changes. My contribution is the development of such a model – with special attention to the modeling of water properties and phase change of water – which ties heat exchanger models into an engine thermodynamic model. The engine cycle as well as heat exchanger parameters including water-to-air ratio, combustor exit temperature, overall pressure ratio, and water pressure are varied to explore the impact on overall engine performance, including the impact of the added heat exchanger weight. This thesis covers the development and initial testing of this model, which enables future studies in engines with phase-changing heat exchangers or water injection with the goal of assisting the search for the future engine technologies that will reduce harmful impacts of aviation while continuing to allow air travel.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Aero-Thermo-Chemo-Mechanical Coupling Framework for the Analysis of Hypersonic Ablative Thermal Protection Systems</title>
<link href="https://hdl.handle.net/1721.1/162929" rel="alternate"/>
<author>
<name>Hoss, Summer A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162929</id>
<updated>2025-10-07T04:13:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Aero-Thermo-Chemo-Mechanical Coupling Framework for the Analysis of Hypersonic Ablative Thermal Protection Systems
Hoss, Summer A.
There are countless challenges associated with the accurate modeling of the hypersonic flight of ablative thermal protection systems (TPS): resolving the relevant coupled physical phenomena through multi-physics simulations, managing the disparate spatiotemporal scales associated with the fluid and solid responses, and establishing a reliable numerical model able to predict the response of ablative materials exposed to extreme gradients—to name a few. The two-way, loosely coupled framework presented in this thesis consists of ΣMIT, a multi-physics computational solid mechanics (CSM) code, coupled with US3D, a hypersonic computational fluid dynamics (CFD) solver, to form a complete aero-thermo-chemo-mechanical simulation framework. The ΣMIT-US3D coupling framework provides a step towards high-fidelity simulation capabilities for hypersonic vehicles with ablative TPS, establishing a strong foundation for the simulation of fluid-structure interaction (FSI) phenomena and computation of the mechanical response of porous ablators. The requirement of a robust numerical formulation for the solution of hypersonic pyrolysis problems was made apparent when encountering numerical convergence issues with legacy methods, which sparked the development of a robust semi-implicit pyrolysis material model. The so-called Linearized Pyrolysis model employs simplifying assumptions for the energy and mass balance equations and relies upon the time-lagging of chosen terms to achieve linear convergence and robust performance. The performance of the model has been validated against the Ablation Workshop Test Cases and has increased the range of allowable representative hypersonic boundary conditions significantly compared to the legacy approach. Together, the model and the coupling framework are applied to two aero-thermo-chemo-mechanical analyses contained within the thesis: a spherical-tipped nose cone and the Orion heat shield. Preliminary results identify the decomposition region as a zone in which high von Mises stress tends to occur—care must be taken to ensure that internal and external flight loads do not exceed allowable limits to prevent catastrophic TPS material failure in this region. However, perhaps the most significant insight resulting from the framework relates to the computation of mass fluxes through the porous ablative material, revealing that for an isotropic monolithic heat shield at a zero angle of attack, pyrolysis gas flow is driven by the pressure gradient applied to the shield such that the flow exits at the edges of the shield rather than from the base.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aeroverse: Aerospace Education in Extended Reality</title>
<link href="https://hdl.handle.net/1721.1/162928" rel="alternate"/>
<author>
<name>Johnson, Mollie</name>
</author>
<id>https://hdl.handle.net/1721.1/162928</id>
<updated>2025-10-07T04:13:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Aeroverse: Aerospace Education in Extended Reality
Johnson, Mollie
Aerospace education is a continuously evolving field that is increasingly dependent on digital tools. However, shifting the teaching paradigm to accommodate new cutting-edge technologies is an ambitious undertaking. Extended reality (XR), which encompasses augmented (AR) and virtual reality (VR), is an example of such technology. In recent years, VR has seen an increase in usage in education as a novel way to provide students with immersive learning experiences, and XR has a long history of use within the working aerospace industry. However, application in the overlap between the two—aerospace engineering education—remains largely unexplored to date. The themes addressed in this thesis are two-fold: first, the goal is to create VR learning modules to supplement the existing aerospace engineering curriculum. Second, the aim is to validate whether VR technology as a teaching medium can improve learning outcomes and student engagement within the MIT AeroAstro department. With these themes in mind, two experiments were conducted to explore this topic. The first experiment presents the design and execution of an experimental course aimed at aerospace engineering students to assess the educational impact of VR. Over the course of this study, ANOVA and Kruskal-Wallis tests found that there was no significant difference (p &gt; 0.05) in performance between the VR and non-VR groups, save for a few exceptional cases. The second experiment details the integration of a single VR module into an existing course in which all students interacted with the VR activity. Students responded positively to this experiment, reporting increased feelings of engagement and a sense that it aligned well with the rest of the course. One-sample Wilcoxon tests reveal that these findings are largely significant (p &lt; 0.05). This thesis advances the work on assessing VR use for aerospace education. The implications of this work may influence the decisions of other educators regarding the adoption of VR technology as supplements to their own teaching methodologies. As a whole, this thesis contributes to the broader conversation on integrating VR into the classroom.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Role of Mission Architecture in Crew Socioemotional Health for Mars Exploration</title>
<link href="https://hdl.handle.net/1721.1/162927" rel="alternate"/>
<author>
<name>MacRobbie, Madelyn</name>
</author>
<id>https://hdl.handle.net/1721.1/162927</id>
<updated>2025-10-07T04:13:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Investigating the Role of Mission Architecture in Crew Socioemotional Health for Mars Exploration
MacRobbie, Madelyn
Human space exploration is evolving rapidly, with commercial successes and NASA’s Artemis missions driving rapid growth and innovation. Plans for longer, larger, and more complex missions necessitate development of new mission architectures to sustain the crews needed to support these missions. Larger missions and multi-site architectures have become feasible with advances in commercial launch vehicles, and generate increased safety and redundancy for crewed operations. However, crew dynamics in these mission architectures have yet to be investigated. This thesis investigates the role of mission architecture (specifically single-site versus dual-site configurations) in subgroup formation and the resulting impacts to socioemotional well-being. We first develop a systematic approach for optimizing analog mission design, then apply this to design two analog missions to compare the effects of single-site and dual-site mission architectures on crew dynamics and psychosocial health. Results provide valuable insights for future Mars mission design, where crew structure and psychosocial adaptation are critical to mission success.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Strong, Human-Compatible Codenames AI Agent</title>
<link href="https://hdl.handle.net/1721.1/162926" rel="alternate"/>
<author>
<name>Zhu, Sebastian</name>
</author>
<id>https://hdl.handle.net/1721.1/162926</id>
<updated>2025-10-07T04:13:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards a Strong, Human-Compatible Codenames AI Agent
Zhu, Sebastian
Current language models are limited in their ability to solve complex planning and reasoning problems without the aid of search procedures. While a large body of work has developed search procedures tailored to single-turn, single-user natural language interactions, language generation in multi-agent contexts involving multiple users, imperfect information, and partially misaligned objectives remains extremely challenging. We aim to build search procedures that will enable language models to assist with interactive, multi-agent decision-making in a diverse range of contexts. Using the word game Codenames as a benchmark, we combine game-theoretic planning procedures with basic language model-based scoring methods to create agents that both play strong policies and play well with human policies. This work yields a set of practical text generation procedures, new evaluation benchmarks, and foundational algorithmic improvements in language model search.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Investigation into Contrail Observability from Different Satellite Platforms</title>
<link href="https://hdl.handle.net/1721.1/162925" rel="alternate"/>
<author>
<name>Euchenhofer, Marlene V.</name>
</author>
<id>https://hdl.handle.net/1721.1/162925</id>
<updated>2025-10-07T04:13:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Investigation into Contrail Observability from Different Satellite Platforms
Euchenhofer, Marlene V.
Contrails are line-shaped ice clouds that can form behind aircraft engines and, under certain cold and moist conditions, spread into contrail cirrus that persists for several hours. By adding to the existing cloud cover, contrails can act to either cool or warm, with the latter, on average, being dominant, resulting in an overall warming effect. Although the effective radiative forcing from contrails is inferred to be of the same order of magnitude as that caused by aviation’s CO₂ emissions, large uncertainties remain around specific radiative forcing estimates. &#13;
Observational studies of contrails, either to support climate impact assessments or operational contrail avoidance strategies, face trade-offs between spatial and temporal resolution. Many recent publications have relied on data from geostationary satellites, accepting lower input-data resolution in exchange for higher temporal resolution and greater spatial coverage. Limitations on contrail observability in the resulting images have not been sufficiently investigated and need to be assessed and quantified.&#13;
This study aims to leverage the higher spatial resolution of VIIRS satellite imagery to identify potential limitations on contrail observability in lower-resolution GOES ABI imagery. We generate a dataset of human-identified contrails visible in false-color thermal infrared imagery from both GOES ABI and VIIRS for twelve scenes over the contiguous US. Based on this dataset, we investigate the number, cover, and appearance of the observed contrails. We find that GOES ABI does not resolve 80% of all contrails that can be identified in VIIRS imagery and only shows half of the total observed contrail length. Finally, incorporating an existing contrail-flight matching algorithm by Barbosa, we show that VIIRS tends to resolve a larger share of young contrails than GOES ABI. The findings from this study help to bound the validity of current contrail simulations and modeling outputs that estimate contrail cover and occurrence.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MINCE: Dialect-Aware SQL Decomposition for Federated Query Execution</title>
<link href="https://hdl.handle.net/1721.1/162924" rel="alternate"/>
<author>
<name>Zhang, Sophie S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162924</id>
<updated>2025-10-07T04:12:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">MINCE: Dialect-Aware SQL Decomposition for Federated Query Execution
Zhang, Sophie S.
The increasing adoption of specialized database systems has led to the rise of heterogeneous data environments. While having multiple engines in a data infrastructure enables opportunities for workload optimization, SQL dialect incompatibility makes workload migration difficult. To address this challenge, we develop MINCE (Multi-dialect INtegration and Cross-engine Execution), a technique that decomposes SQL queries into parts to enable federated execution across engines with differing SQL dialects. MINCE uses a rule-based method to partition a query into executable components that are assigned to different database systems. To evaluate different execution strategies, MINCE further implements a cost model that incorporates both on-engine query execution time and inter-system data transfer overhead. We evaluate MINCE on a TPC-H-based workload augmented with PostgreSQL-specific functions unsupported in Amazon Redshift. Experimental results show that MINCE produces the fastest execution strategy among our baselines for 72.1% of queries using estimated cardinality, achieving a 2× speedup over single-engine baselines. With perfect cardinality information available to our cost model, this value increases to 88.4%, with an average 2.8× speedup. These results demonstrate that our system not only enables more flexible federated query execution, but also reliably identifies performant execution strategies.
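
A sketch of the two-term cost model described above; `plan` is a list of (engine, estimated_rows_out) fragments in execution order, and `exec_cost`/`transfer_cost` are hypothetical calibrated estimators:

    def plan_cost(plan, exec_cost, transfer_cost):
        total = sum(exec_cost(engine, rows) for engine, rows in plan)
        for (eng_a, rows_a), (eng_b, _) in zip(plan, plan[1:]):
            if eng_a != eng_b:            # fragment boundary crosses engines
                total += transfer_cost(eng_a, eng_b, rows_a)
        return total

Enumerating candidate decompositions and taking the minimum of this cost is what lets a system like MINCE choose among single-engine and federated strategies.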
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>New results in canonical polyadic decomposition over finite fields</title>
<link href="https://hdl.handle.net/1721.1/162923" rel="alternate"/>
<author>
<name>Yang, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/162923</id>
<updated>2025-10-07T04:13:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">New results in canonical polyadic decomposition over finite fields
Yang, Jason
Canonical polyadic decomposition (CPD) consists of expressing a tensor (multidimensional array) as a sum of several rank-1 tensors, each of which is an outer/separable product of vectors. The number of rank-1 tensors used in a CPD is called the rank of the CPD, and the minimum possible rank of a CPD for a given tensor is called the rank of the tensor. CPD is at the core of fast matrix multiplication, a computational problem with widespread implications across several seemingly unrelated problems in computer science. Much recent progress in this field has used randomized heuristic search to find new CPDs, often over a finite field. However, if these techniques fail to find a CPD with low enough rank, they cannot prove that no such CPD exists. Consequently, these methods fail to resolve certain long-standing questions, such as whether the tensor corresponding to 3 × 3 matrix multiplication has rank less than 23. To make progress on these problems, we develop a novel algorithm that preserves exactness, i.e., it can provably verify whether or not a given tensor has a specified rank. Compared to brute force, when searching for a rank-R CPD of an n_0 × ⋯ × n_{D−1}-shaped tensor over a finite field F, where n_0 ≥ ⋯ ≥ n_{D−1}, our algorithm saves a multiplicative factor of roughly |F|^(R(n_0 − 1) + n_0 · Σ_{d≥1} n_d). Additionally, our algorithm runs in polynomial time. We also find a novel algorithm to search for border CPDs, a variant of CPDs that is also important in fast matrix multiplication. Finally, we study the maximum rank problem and give new upper and lower bounds, both for families of tensor shapes and for specific shapes. Although our CPD search algorithms are still too slow to resolve the rank of 3 × 3 matrix multiplication, we are able to utilize them in this problem by adding extra search pruners that do not affect exactness or increase asymptotic running time.
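
To fix notation, a rank-R CPD of a 3-way tensor is a sum of R outer products; a minimal real-valued NumPy illustration (the thesis works over finite fields, where the same arithmetic would be done mod p):

    import numpy as np

    def cpd_reconstruct(A, B, C):
        # Column r of A, B, C gives the r-th rank-1 term.
        T = np.zeros((A.shape[0], B.shape[0], C.shape[0]))
        for r in range(A.shape[1]):
            T += np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
        return T

    A, B, C = (np.random.randn(3, 5) for _ in range(3))
    print(cpd_reconstruct(A, B, C).shape)   # (3, 3, 3), rank at most 5

Deciding whether a given tensor admits such a decomposition with R terms is the exact verification problem the thesis's algorithm targets.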
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parameter Estimation for Anonymous Hawkes Processes</title>
<link href="https://hdl.handle.net/1721.1/162922" rel="alternate"/>
<author>
<name>Wang, William</name>
</author>
<id>https://hdl.handle.net/1721.1/162922</id>
<updated>2025-10-07T04:13:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Parameter Estimation for Anonymous Hawkes Processes
Wang, William
Hawkes Processes are self-exciting point processes used to model many real-life networks in which an event from one agent causes the rate at which events occur from related agents to increase, such as in earthquake networks or social media. This project investigates the question of finding the underlying structure of the Hawkes Processes given a history of when events occurred. This problem has been studied extensively in the regime where the event labels are known, and the bulk of the literature involves parameterizing the model and passing it through statistical learning tools. Our proposed work focuses on the same question in the “anonymous” case where labels are not given. In this regime, the lack of information makes many previous approaches intractable, and we develop novel non-parametric approaches for solving cases of the structure learning problem in algorithmic and information-theoretic settings. Our results show the ability to learn the entire model under mild assumptions in the information-theoretic regime, where we have access to an arbitrarily long Anonymous Hawkes Process transcript, whereas when we are confined to a polynomial-length transcript, the situation is considerably more difficult.
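
For readers new to the model, a univariate Hawkes process has intensity lam(t) = mu + sum over past events s of alpha · exp(−beta · (t − s)); below is a standard Ogata-thinning simulator (textbook method, included only to fix notation, since the thesis studies the harder anonymous inverse problem):

    import numpy as np

    def simulate_hawkes(mu, alpha, beta, horizon, rng=None):
        rng = rng or np.random.default_rng()
        events, t = [], 0.0
        while True:
            lam_bar = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
            t += rng.exponential(1.0 / lam_bar)       # candidate event time
            if t &gt;= horizon:
                return np.array(events)
            lam_t = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
            if rng.uniform() * lam_bar &lt;= lam_t:      # accept w.p. lam_t / lam_bar
                events.append(t)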
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Organization Infrastructure for Tokenized Asset Records</title>
<link href="https://hdl.handle.net/1721.1/162921" rel="alternate"/>
<author>
<name>Whartenby, Patrick E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162921</id>
<updated>2025-10-07T04:13:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Organization Infrastructure for Tokenized Asset Records
Whartenby, Patrick E.
The Tokenized Asset Record (TAR) represents a way to connect existing technology related to tokenized assets and asset schemas to real-world documents that validate the existence of an object. Exactly who should manage TARs and the properties of the related organization schemes remains an open question. Answering this question is crucial to furthering the existing digital economy. While existing solutions have sought to expand digital commerce through pioneering digital clearing houses, little work has explored support for other classes of real-world digitized assets with proof of ownership and existence. The research proposed here seeks to answer this question by suggesting possible solutions and developing a framework for uniformly analyzing the proposals. The research proposes and evaluates three models for the management of TARs. The first is a scheme that involves each industry setting up its own TAR database and managing the system independently from other industries. The second proposes hosting all TARs on a single blockchain. The third argues for an off-chain decentralized platform to host all TARs, akin to the Data Spaces proposed by the European Union. The research finds, based on the proposed criteria, that a decentralized off-chain approach best meets the goals of a TAR management framework.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unveiling Phenotype–Genotype Interplay with Deep Learning Foundation Models for scRNA-seq: A Quantitative Perspective</title>
<link href="https://hdl.handle.net/1721.1/162920" rel="alternate"/>
<author>
<name>Thadawasin, Pakaphol</name>
</author>
<id>https://hdl.handle.net/1721.1/162920</id>
<updated>2025-10-07T04:13:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Unveiling Phenotype–Genotype Interplay with Deep Learning Foundation Models for scRNA-seq: A Quantitative Perspective
Thadawasin, Pakaphol
Foundation models have emerged as powerful tools for analyzing single-cell RNA sequencing (scRNA-seq) data, leveraging large-scale pretraining to capture complex gene expression patterns. However, a comprehensive quantitative framework for understanding the interplay between phenotypes and genotypes remains underdeveloped. Such a framework is critical not only for validating model performance but also for uncovering previously unrecognized biological relationships. In this work, we present both traditional and deep learning-based quantitative analysis pipelines for PolyGene [1], a transformer-based scRNA-seq foundation model, aimed at disentangling the complex phenotype–genotype relationship. First, we implement a top-k classification and entropy evaluation pipeline to serve as a primary validation framework. Our results demonstrate that the pretrained PolyGene [1] is robust in top-k classification metrics and provides meaningful insights into the entropy landscape of human cells across different life stages. Second, we propose a novel deep learning gradient-based gene selection method designed to address limitations in traditional feature selection approaches, such as poor scalability and sensitivity to heterogeneity in high-dimensional data. Through empirical evaluations on benchmark scRNA-seq datasets, we show that our method enhances model interpretability and improves downstream performance, offering a more scalable and biologically relevant alternative to existing techniques. Overall, this work introduces a set of quantitative analysis tools that fill a critical gap in evaluating and interpreting scRNA-seq foundation models, contributing to a deeper understanding of the genotype–phenotype interplay through modern deep learning techniques.
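
A generic saliency recipe of the kind the gradient-based selection presumably builds on (illustrative; not necessarily the thesis's exact formulation): score each gene by the gradient magnitude of a target logit with respect to its expression value, averaged over cells.

    import torch

    def gradient_gene_scores(model, expr, target_class):
        # model: maps (cells, genes) expression to class logits.
        expr = expr.clone().requires_grad_(True)
        logits = model(expr)
        logits[:, target_class].sum().backward()
        return expr.grad.abs().mean(dim=0)   # one importance score per gene

    # Hypothetical usage: rank and keep the 50 most influential genes.
    # top_genes = torch.topk(gradient_gene_scores(model, expr, 3), k=50).indices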
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deepfake Face Detection: An Ensemble Framework for Generalized Classification in Biometric Verification Systems</title>
<link href="https://hdl.handle.net/1721.1/162919" rel="alternate"/>
<author>
<name>Zen, Hilary</name>
</author>
<id>https://hdl.handle.net/1721.1/162919</id>
<updated>2025-10-07T04:13:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Deepfake Face Detection: An Ensemble Framework for Generalized Classification in Biometric Verification Systems
Zen, Hilary
Generation methods for deepfake images have advanced rapidly, and deepfake face images pose a critical security threat to biometric verification systems. Applications that rely on face recognition to grant access to sensitive data need to maintain high accuracy across a wide variety of deepfake generation methods, including novel and developing types that the application has not previously trained on. Current deepfake detection models achieve near-perfect accuracy on benchmark datasets, but do not perform as well on unseen types of deepfakes that were not part of their training dataset. We propose building an ensemble model with multiple base detectors, each trained on different generation model families to maintain high performance across many deepfake generation methods. Using four base models, including two models with the same architecture and training data, we exhaustively test all possible ensemble models. We find that combining similar base models trained on the same deepfake generation family does not improve performance compared to the individual base models. However, combining base models trained on different deepfake generation families leads to significant increases in accuracy and recall. Our ensemble framework provides a flexible and inexpensive solution in the ever-changing landscape of deepfake generation and security.
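
One simple combination rule consistent with the abstract's recipe (the averaging choice is an assumption), with `detectors` a list of hypothetical callables returning P(fake) from base models trained on different generation families:

    import numpy as np

    def ensemble_predict(detectors, image, threshold=0.5):
        p_fake = float(np.mean([d(image) for d in detectors]))
        return p_fake, bool(p_fake &gt; threshold)

The reported gains come from diversity across generation families; averaging near-duplicate detectors adds little, as the exhaustive ensemble sweep shows.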
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When Should Model Updates Propagate?</title>
<link href="https://hdl.handle.net/1721.1/162918" rel="alternate"/>
<author>
<name>Struckman, Isabella Marguerite</name>
</author>
<id>https://hdl.handle.net/1721.1/162918</id>
<updated>2025-10-07T04:13:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">When Should Model Updates Propagate?
Struckman, Isabella Marguerite
AI supply chains rely increasingly on downstream developers adapting pretrained upstream models. When upstream models are retrained with data deletions (which may be prompted by copyright violations, privacy compliance, or removal of illicit content), it is unclear whether all downstream developers must also undergo costly retraining. In this thesis, we investigate the propagation of data deletions through fine-tuned models within a controlled visual classification setting comprising dog-breed and plane-manufacturer recognition tasks. We show that not all model updates propagate equivalently to downstream tasks, and there is a strong relationship between the deleted data’s relevance to the downstream task and its effect on the downstream model. We demonstrate that neither simple performance metrics (accuracy or F1), nor output-level divergences, nor even embedding-based similarity metrics alone adequately predict when a deletion meaningfully impacts downstream tasks. To overcome these limitations, we introduce an information-theoretic metric grounded in Gaussian mixture modeling (GMM) of embedding distributions, capturing deeper representational shifts. Our proposed approach precisely distinguishes when deletions require downstream retraining, achieving high predictive accuracy and recall without directly accessing retrained downstream models.
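
A sketch of a GMM-grounded shift metric (the divergence choice here, a Monte Carlo KL estimate, is an assumption; the thesis's exact metric may differ):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_divergence(emb_before, emb_after, n_components=8, n_samples=5000):
        p = GaussianMixture(n_components).fit(emb_before)
        q = GaussianMixture(n_components).fit(emb_after)
        x, _ = p.sample(n_samples)                       # draw from "before"
        return float(np.mean(p.score_samples(x) - q.score_samples(x)))

Large values signal a representational shift that plain accuracy or output divergences can miss, which is when downstream retraining would be flagged.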
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Twin Modeling for NV Magnetometry</title>
<link href="https://hdl.handle.net/1721.1/162917" rel="alternate"/>
<author>
<name>Rich, John P.</name>
</author>
<id>https://hdl.handle.net/1721.1/162917</id>
<updated>2025-10-07T04:13:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Digital Twin Modeling for NV Magnetometry
Rich, John P.
This thesis presents the development and application of a digital twin modeling framework for nitrogen-vacancy (NV) center-based magnetometry, advancing the field of quantum sensing. A surrogate model serves as a computational representation of the physical NV magnetometer system, enabling comprehensive exploration of parameter spaces to optimize device design. Leveraging machine learning techniques, this study optimizes control mechanisms, including the design of learned analog filters, to enhance system performance. This research investigates the fundamental limits of NV magnetometer performance, identifying strategies to minimize power requirements while maintaining high sensitivity. A dynamic framework is implemented to update the surrogate model’s parameters in real-time based on experimental measurements, ensuring accurate fidelity to the physical system. Additionally, the optimized control strategies are simulated within the digital twin environment, demonstrating their potential for advanced quantum sensing applications such as magnetocardiography (MCG) for heartbeat detection.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fuzzing for User-Schedulable Languages</title>
<link href="https://hdl.handle.net/1721.1/162916" rel="alternate"/>
<author>
<name>Moon, Kenneth</name>
</author>
<id>https://hdl.handle.net/1721.1/162916</id>
<updated>2025-10-07T04:13:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fuzzing for User-Schedulable Languages
Moon, Kenneth
Performance engineers restructure programs to use hardware as efficiently as possible. Even simple mathematical functions can become sprawling and complex programs when fully optimized, as the resulting code must often be precisely molded around specialized behaviors supported by the hardware. To help performance engineers deal with this complexity, userschedulable languages provide scheduling operations, which are abstractions of common steps taken to restructure programs. By composing these scheduling operations, performance engineers can concisely represent their intended optimizations to programs. Exo, being a user-schedulable language, provides this abstraction with the additional guarantee that any scheduling operation which passes Exo’s automated checks does not change the behavior of the program. Though this guarantee is useful for avoiding bugs while optimizing a program, the analysis required to provide such a guarantee is infeasible on programs in general. To make analysis feasible, Exo only allows users to write programs with a restricted set of behaviors. As a result, some programs are impossible to schedule using Exo, limiting the use cases of Exo. In this thesis, we explore how fuzzing can be used as an alternative to the existing analysis in Exo, with the goal of allowing Exo to analyze more complex programs. “Fuzzing” refers to a test case-driven approach to determining properties of a program, such as whether its behavior changes after a scheduling operation. If the program’s outputs do not change after the scheduling operation when provided the same inputs, the fuzzer concludes that the program’s behavior did not change. Since fuzzing only requires us to know how to evaluate the program, it can be applied to a much broader set of programs than the existing analysis in Exo. However, fuzzing can miss mistakes in scheduling if the fuzzer fails to find a test case demonstrating the issue with a scheduling operation, as it is a complete form of analysis rather than a sound form of analysis like the existing analysis in Exo. Additionally, fuzzing can be costly compared to the original analysis, as repeatedly running programs on many test cases for many scheduling operations can be slow. We explore ways to mitigate these issues throughout this work. Finally, we evaluate our implementation of the fuzzer and its performance on some example use cases for Exo.
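
A minimal sketch of the equivalence check at the heart of that fuzzing approach, with `gen_input` a hypothetical generator of valid inputs:

    import random

    def fuzz_equivalent(prog_a, prog_b, gen_input, trials=1000, seed=0):
        rng = random.Random(seed)
        for _ in range(trials):
            x = gen_input(rng)
            if prog_a(x) != prog_b(x):
                return False, x      # counterexample: the schedule changed behavior
        return True, None            # no divergence seen; evidence, not proof

Agreement on all sampled inputs is only evidence of equivalence, which is exactly the soundness gap the thesis weighs against Exo's restrictive static analysis.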
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Verifiable Computation Made Easy</title>
<link href="https://hdl.handle.net/1721.1/162915" rel="alternate"/>
<author>
<name>Ma, Chengyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/162915</id>
<updated>2025-10-07T04:12:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient Verifiable Computation Made Easy
Ma, Chengyuan
Recent advancements in cloud computing, data privacy, and cryptography have sparked a growing interest in Verifiable Computation (VC) in both industry and academia. In particular, zero-knowledge proof (ZKP) algorithms are gaining rapid traction due to their strong privacy guarantees. However, they are notoriously computationally intensive, making performance a critical concern. Given the inherent data parallelism and heavy use of vector operations in ZKP computations, multicore CPUs and GPUs offer a promising acceleration path. Unfortunately, accelerated programming for ZKP remains challenging: ZKP algorithms evolve rapidly, their structures grow increasingly complex, and writing high-performance ZKP code is tedious, error-prone, non-portable, and unfriendly to algorithm developers. We present an end-to-end compiler framework, Zera, that lowers ZKP algorithms to parallel hardware for efficient acceleration, with minimal programmer effort. By effectively leveraging ZKP algorithm patterns and trends, we are able to automate the key performance optimizations, with a succinct linguistic extension and a set of practical compiler customizations. Consequently, with just 92 lines of trivial high-level annotation added to the original 7,000 lines of C++ code, our single-source code solution delivers 33.9× and 24.0× speedup on GPU over a highly optimized serial C++ implementation on CPU and an existing multithreaded Rust baseline on CPU, respectively. Compared to our hand-optimized GPU/CUDA implementation requiring an extra 2,000 lines of low-level code (roughly 60 programmer hours), our compiler-generated GPU implementation is only 58% slower (1.58× slowdown) on large inputs, demonstrating a compelling trade-off between performance and productivity.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Partitioning for Efficient Parallel Reads</title>
<link href="https://hdl.handle.net/1721.1/162914" rel="alternate"/>
<author>
<name>Sragow, John</name>
</author>
<id>https://hdl.handle.net/1721.1/162914</id>
<updated>2025-10-07T04:13:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing Partitioning for Efficient Parallel Reads
Sragow, John
Modern database management systems spend a significant portion of query execution time scanning data, so minimizing scanning latency is critical to maintaining high performance. As such, databases are partitioned into blocks so that queries can skip irrelevant tuples and avoid scanning the entire database. When this partitioning is optimized to minimize the number of blocks accessed by each query, smaller queries that access very few blocks fail to fully utilize the bandwidth because they cannot take advantage of parallel reading. However, reducing the size of each block in order to increase the number of blocks accessed by smaller queries slows down larger queries by forcing them to increase the number of I/Os they must perform. We propose a novel partitioning scheme that shuffles the row groups of blocks accessed by smaller queries so that they can read fewer tuples from multiple blocks in parallel without increasing the I/O cost of larger queries. Our experiments show that this technique allows smaller queries to be scanned up to twice as fast on larger block sizes as they would on a standard partitioning without significantly slowing down larger queries.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Functional Knowledge into Protein Design: A Novel Approach to Tokenization and Noise Injection for Function-Aware Protein Language Models</title>
<link href="https://hdl.handle.net/1721.1/162913" rel="alternate"/>
<author>
<name>Tang, Adrina</name>
</author>
<id>https://hdl.handle.net/1721.1/162913</id>
<updated>2025-10-07T04:12:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Integrating Functional Knowledge into Protein Design: A Novel Approach to Tokenization and Noise Injection for Function-Aware Protein Language Models
Tang, Adrina
Designing novel proteins with specific biological functions remains a fundamental challenge in computational biology. While recent advances in protein language models have enabled powerful sequence-based representations, most models, including state-of-the-art systems like ESM3, fall short in effectively encoding functional context during protein generation. In this work, we present a multimodal protein co-design framework that conditions sequence generation on fine-grained functional annotations, specifically leveraging residue-level Gene Ontology (GO) term labels on sequences from the UniRef100 database. By explicitly associating functional signals with residue elements of proteins, our model learns to generate function-conditioned protein sequences that are biologically plausible and semantically consistent. Unlike prior approaches, which treat function as a secondary feature or a classification task, our method focuses on joint reasoning over function and sequence during the design process. This closes a critical gap in the current landscape of protein design tools, offering a scalable and generalizable approach to co-designing protein sequences with user-specified functional profiles.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pairwise Matching of Intermediate Representations for Fine-grained Explainability</title>
<link href="https://hdl.handle.net/1721.1/162912" rel="alternate"/>
<author>
<name>Shrack, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/162912</id>
<updated>2025-12-10T00:52:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Pairwise Matching of Intermediate Representations for Fine-grained Explainability
Shrack, Lauren
The differences between images belonging to fine-grained categories are often subtle and highly localized, and existing explainability techniques for deep learning models are often too diffuse to provide useful and interpretable explanations. We propose a new explainability method (PAIR-X) that leverages both intermediate model activations and backpropagated relevance scores to generate fine-grained, highly-localized pairwise visual explanations. We use animal and building re-identification (re-ID) as a primary case study of our method, and we demonstrate qualitatively improved results over a diverse set of explainability baselines on 35 public re-ID datasets. In interviews, animal re-ID experts were in unanimous agreement that PAIR-X was an improvement over existing baselines for deep model explainability, and suggested that its visualizations would be directly applicable to their work. We also propose a novel quantitative evaluation metric for our method, and demonstrate that PAIR-X visualizations appear more plausible for correct image matches than incorrect ones even when the model similarity score for the pairs is the same. By improving interpretability, PAIR-X enables humans to better distinguish correct and incorrect matches.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Learning for Space Object Density Distribution Prediction</title>
<link href="https://hdl.handle.net/1721.1/162910" rel="alternate"/>
<author>
<name>Sarangerel, Sumiyajav</name>
</author>
<id>https://hdl.handle.net/1721.1/162910</id>
<updated>2025-10-07T04:12:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Deep Learning for Space Object Density Distribution&#13;
Prediction
Sarangerel, Sumiyajav
The rapid growth of artificial objects in Low Earth Orbit (LEO) has heightened concerns over orbital congestion and collision cascades, known as Kessler Syndrome. Traditional high-fidelity models, while accurate, are computationally intensive and poorly scalable. This thesis introduces a machine learning–based framework for forecasting the long-term evolution of space object density. A large dataset is generated using the MIT Orbital Capacity Assessment Tool – Monte Carlo (MOCAT-MC), simulating thousands of scenarios across varying launch, disposal, and maneuver parameters. A Convolutional Gated Recurrent Unit (ConvGRU) is trained to predict density distributions over a 100-year horizon, achieving accurate forecasts with significantly reduced runtime. With a simple guidance mechanism, the generalization capability of the model across diverse scenarios is greatly improved. This approach offers a scalable and efficient tool for supporting future space traffic management and sustainability efforts.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Understanding Privacy Leakage in Decentralized and Collaborative Learning</title>
<link href="https://hdl.handle.net/1721.1/162909" rel="alternate"/>
<author>
<name>Shi, Yichuan</name>
</author>
<id>https://hdl.handle.net/1721.1/162909</id>
<updated>2025-10-07T04:12:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Understanding Privacy Leakage in Decentralized and Collaborative Learning
Shi, Yichuan
The emergence of large-scale machine learning (ML) models has highlighted a fundamental conflict: While computational demands push for the consolidation of data and models in vast, centralized data centers, real-world data continues to be distributed and fragmented across personal devices and private databases. How can we reconcile this contradiction without further monopolizing the ML ecosystem? What unique privacy and security risks arise from alternative ML orchestration system designs? Furthermore, how do these vulnerabilities and system failures inform our understanding of both how and what machines learn? This thesis explores these questions. It first examines key types of privacy leakage, evaluating their impact under realistic, cross-distribution settings. It then introduces a benchmarking analysis platform, SONAR, to investigate the relationship between privacy leakage (measured by attack performance), network topology, and data distribution. Finally, it presents Co-Dream, a novel algorithm for collaborative learning that offers improved privacy characteristics.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prototyping a Scalable Proof Engine</title>
<link href="https://hdl.handle.net/1721.1/162908" rel="alternate"/>
<author>
<name>Rosario, Jon</name>
</author>
<id>https://hdl.handle.net/1721.1/162908</id>
<updated>2025-10-07T04:12:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Prototyping a Scalable Proof Engine
Rosario, Jon
Formal verification is an exciting development in software engineering, enabling implementations of programs to be rigorously checked against mathematical specifications. Assuming the specification is well-defined, formal verification provides guarantees of a program’s correctness and freedom from bugs that are simply not possible with test-based methods. There’s just one catch: the process of verifying large programs in popular theorem provers such as Coq (now known as Rocq) or Lean is painfully slow. These proof assistants rely on proof engines to construct proofs of correctness for given properties, but to our knowledge, there is no widely available proof engine that offers strong performance guarantees. Even more frustrating is the lack of consensus on what “good” performance should even mean in this context. This thesis lays the groundwork for addressing that gap by presenting a proof engine design that achieves asymptotically linear-time performance with respect to several important variables. We illustrate the design and its performance characteristics with examples from an implementation of the design and outline directions for future work.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stress-Guided Material Segmentation for Recycled 3D Printed Structures Using Finite Element Analysis</title>
<link href="https://hdl.handle.net/1721.1/162905" rel="alternate"/>
<author>
<name>Paulin, Cole J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162905</id>
<updated>2025-10-07T04:13:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Stress-Guided Material Segmentation for Recycled 3D&#13;
Printed Structures Using Finite Element Analysis
Paulin, Cole J.
We present a simulation-driven method for optimizing the structural performance of 3D printed objects made with recycled and fresh filament. Although sustainable materials such as recycled PLA reduce environmental impact, they often exhibit degraded or inconsistent mechanical properties, making them less suitable for structurally demanding applications. To address this, we develop a finite element analysis (FEA) pipeline that simulates stress and strain distributions under user-defined loading conditions, enabling intelligent segmentation of the object into regions of high and low mechanical demand. These segmented regions can be assigned recycled or fresh material during fabrication. Our system leverages open-source tools (SfePy) for simulation, and we validate its accuracy against Abaqus, a commercial industry standard. We also introduce methods for automatically identifying and correcting segmentation artifacts, such as small disconnected islands. Through comparative simulation studies and performance evaluation, we demonstrate that our approach enables more sustainable 3D printing without sacrificing structural reliability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metaheuristic Optimization for Automatic Arrangement of Power Electronics Components in a Shipboard Electrical Distribution System</title>
<link href="https://hdl.handle.net/1721.1/162904" rel="alternate"/>
<author>
<name>Lohier, Sebastien</name>
</author>
<id>https://hdl.handle.net/1721.1/162904</id>
<updated>2025-10-07T04:12:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Metaheuristic Optimization for Automatic Arrangement of Power Electronics Components in a Shipboard Electrical Distribution System
Lohier, Sebastien
This thesis proposes a novel methodology for the automatic placement of Power Electronics Building Blocks (PEBBs) in modular, integrated power corridor designs. These building blocks, which are created and tested offsite for a variety of applications, are currently placed manually during the design process, a method that is time-consuming and suboptimal. To address this challenge, we reduce the placement problem to a 2D bin-packing problem, leveraging a hybrid approach combining Genetic Algorithms and Simulated Annealing. This approach generates placements that optimize arbitrary heuristics, such as minimizing routing distance or power density, improving both design efficiency and system performance. The proposed methodology offers a significant step toward automating and optimizing the layout of power electronic components in complex systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Interpretable Multimodal Framework for Regional Organ Transplantation Outcomes</title>
<link href="https://hdl.handle.net/1721.1/162752" rel="alternate"/>
<author>
<name>Lee, Ju Young</name>
</author>
<id>https://hdl.handle.net/1721.1/162752</id>
<updated>2025-09-19T04:50:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Interpretable Multimodal Framework for Regional&#13;
Organ Transplantation Outcomes
Lee, Ju Young
The demand for kidney transplants continues to outpace supply, with 89,792 patients on the waitlist as of September 2024, yet only 27,332 transplants performed in 2023 [1], and 28% of recovered kidneys going non-utilized [2]. In this thesis, we highlight the use of large language model (LLM) embeddings combined with structured tabular data to build a predictive classifier that estimates offer outcomes for kidney donor-recipient matches. For each predictive model deployed, we provide further analysis on the interpretability of these black-box models using a custom-designed SHAP analysis framework. Our study focuses on three distinct U.S. regions (Regions 1, 2, and 3) with markedly different demographics and amounts of data on organ acceptances (Region 1: 43,126 offers with 2.19% acceptance rate, Region 2: 394,640 offers with 1.57% acceptance rate, Region 3: 169,342 offers with 2.23% acceptance rate in years 2016-2019). Among the baseline XGBoost models, Region 3 achieved the highest performance, with a precision-accept score of 0.929 and accuracy of 0.993 in the test data. Building on this strong foundation, the multimodal TabText model in Region 3 achieved the best performance overall, with a precision-accept score of 0.959 and accuracy of 0.993 after fine-tuning for six epochs. Our findings suggest that increasing the number of text features, extending training epochs, and incorporating explicit numerical values led to improved model performance in Region 3. In Regions 1 and 2, the baseline model outperformed the TabText model, suggesting that data sparsity in these regions may have limited the effectiveness of the multimodal approach and that further hyperparameter tuning is needed. We also present several visualization techniques to enhance model interpretability. Specifically, we developed a novel SHAP explainer that illustrates feature interactions between multimodal inputs, including both tabular and textual data. Additionally, we explored methods to identify regions of high and low model fidelity by mapping per-sample prediction errors onto t-SNE embeddings. Overall, this thesis introduces new directions for transplant research in the context of transformer-based models and interpretable AI. Leveraging data-driven decision-support tools and refining allocation policies are essential steps toward addressing the persistent gap between supply and demand in the kidney transplant landscape.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Medium Access Control Protocol for Satellite Constellations</title>
<link href="https://hdl.handle.net/1721.1/162751" rel="alternate"/>
<author>
<name>Li, Brian</name>
</author>
<id>https://hdl.handle.net/1721.1/162751</id>
<updated>2025-09-19T04:50:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Medium Access Control Protocol for Satellite Constellations
Li, Brian
Satellite internet constellations have emerged as a promising solution for providing global internet connectivity, especially in regions underserved by terrestrial infrastructure. However, as user demand increases, especially in densely populated urban areas, existing Medium Access Control (MAC) protocols face significant scalability challenges and fail to take advantage of advanced antenna processing techniques, including phased array nulling, as well as capacity sharing via inter-satellite links.
We present both an offline linear program and a novel online greedy MAC protocol to assign satellite resources to users using either sequential service, capacity sharing, or interference-aware nulling. Our offline formulation provides an upper bound on system performance, and while our online protocol is suboptimal relative to this bound, it is designed to be implementable in a real-time system. Simulations demonstrate that incorporating nulling can increase effective capacity by up to 25 times, substantially boosting profit in high-demand scenarios. We further quantify the performance gap between the online protocol and the offline optimum under varying demand distributions, showing that our online approach achieves near-optimal results in low-peakiness settings and gracefully degrades under more extreme conditions. These results highlight the importance of spatial processing at the MAC layer and offer practical design insights for future satellite internet constellations.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Copilot Tutor: Automated Software Engineering Practice Augmented with LLMs</title>
<link href="https://hdl.handle.net/1721.1/162750" rel="alternate"/>
<author>
<name>Kong, Blisse</name>
</author>
<id>https://hdl.handle.net/1721.1/162750</id>
<updated>2025-09-19T04:50:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Copilot Tutor: Automated Software Engineering Practice&#13;
Augmented with LLMs
Kong, Blisse
In recent years, large language models (LLMs) have become more ubiquitous in the workplace. In software engineering, they are often realized as “copilots” which produce code given a prompt or existing code. Programmers using these tools to increase their coding productivity need to be proficient in inspecting and understanding these copilots’ outputs. As engineers incorporate these tools to accelerate their workflows, they have a parallel opportunity to accelerate learning new programming languages. This thesis presents a tutor interface where students with some programming experience in an origin language can learn a target language while practicing how to critically read and fix a copilot’s output to write correct, safe programs. This work also introduces the automatic generation of exercises that teach the syntax and semantics on which a programmer experienced in the origin language, but not the target language, should focus.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Physical Withholding of Renewable Energy Generators</title>
<link href="https://hdl.handle.net/1721.1/162749" rel="alternate"/>
<author>
<name>Irvine, Paul M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162749</id>
<updated>2025-09-19T04:50:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Physical Withholding of Renewable Energy&#13;
Generators
Irvine, Paul M.
Renewable generators may have incentives to strategically withhold energy output in electricity markets, either to exercise market power or to avoid congestion pricing caused by transmission constraints. Although academic work often treats renewables as not downward dispatchable, renewable generators can, at least in principle, reduce their output by self-curtailing. This paper shows that a firm with a large, diverse portfolio could find it profit-maximizing to withhold renewables over conventional thermal generators once it accounts for constraints on ramp rates and minimum generation, as well as the costs of starting up generators and the generator-type-dependent probability of detection by market monitoring authorities. Long-term forward contracts like pay-as-produced Power Purchase Agreements (PPAs) can blunt incentives to exercise market power by insulating individual generators from wholesale prices; however, since generators under PPAs typically bid into the wholesale market and influence competitive prices, they may actually encourage renewable withholding if contract prices are sufficiently low and the parent firm’s portfolio is exposed to wholesale prices. To screen for renewable withholding, this paper proposes three methods: (1) examining the distribution of aggregate output across export interfaces for suspicious bunching, (2) testing deviations from ex-ante forecasts, and (3) identifying the time intervals where generators exhibit structural model changes relative to a benchmark presumed free of withholding. Together, this work prepares academics and regulators to more accurately model the behavior of renewable generators in electricity markets and to screen for potential market abuses.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Argos: Verifiable FHE Using Commodity Hardware</title>
<link href="https://hdl.handle.net/1721.1/162748" rel="alternate"/>
<author>
<name>Jepsen, Fisher</name>
</author>
<id>https://hdl.handle.net/1721.1/162748</id>
<updated>2025-09-19T04:49:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Argos: Verifiable FHE Using Commodity Hardware
Jepsen, Fisher
We present Argos, a simple approach for adding verifiability to fully homomorphic encryption (FHE) schemes using trusted hardware. Traditional approaches to verifiable FHE require expensive cryptographic proofs, which incur an overhead of up to seven orders of magnitude on top of FHE, making them impractical. With Argos, we show that trusted hardware can be securely used to provide verifiability for FHE computations, with minimal overhead relative to the baseline FHE computation. An important contribution of Argos is showing that the major security pitfall associated with trusted hardware, microarchitectural side channels, can be completely mitigated by excluding any secrets from the CPU and the memory hierarchy. This is made possible by focusing on building a platform that only enforces program and data integrity and not confidentiality (which is sufficient for verifiable FHE, since all data remain encrypted at all times). All secrets related to the attestation mechanism are kept in a separate coprocessor (e.g., a TPM)—inaccessible to any software-based attacker. Relying on a discrete TPM typically incurs significant performance overhead, which is why (insecure) software-based TPMs are used in practice. As a second contribution, we show that for FHE applications, the attestation protocol can be adapted to only incur a fixed cost. Argos requires no dedicated hardware extensions and is supported on commodity processors from 2008 onward. Our prototype implementation introduces 3% overhead for FHE evaluation, and 8% for more complex protocols. In particular, we show that Argos can be used for real-world applications of FHE, such as private information retrieval (PIR) and private set intersection (PSI), where providing verifiability is imperative. By demonstrating how to combine cryptography with trusted hardware, Argos paves the way for widespread deployment of FHE-based protocols beyond the semi-honest setting, without the overhead of cryptographic proofs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Conversion of C and C++ Programs to the BuildIt Multi-Stage Programming Framework</title>
<link href="https://hdl.handle.net/1721.1/162747" rel="alternate"/>
<author>
<name>Kumar, Aryan</name>
</author>
<id>https://hdl.handle.net/1721.1/162747</id>
<updated>2025-09-19T04:49:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Automatic Conversion of C and C++ Programs to the BuildIt Multi-Stage Programming Framework
Kumar, Aryan
BuildIt allows users to write C++ programs that can execute in multiple stages, where the output of one stage is the program source for the next stage, ending with a final output. This is particularly useful for writing specialized code and generating code for domain-specific languages. While there are other approaches to multi-stage programming, BuildIt has several advantages: it takes a library-based approach (so it requires no modifications to the compiler and is thus highly portable), and it is easy to use, as all the user has to do is change the declared types of variables in their C++ program. The goal of this thesis is to further improve BuildIt’s ease of use by simplifying this step: in particular, by developing a tool that automatically converts existing C and C++ programs to the BuildIt framework. We show how to use Clang tooling in conjunction with modifications to the Clang compiler to perform non-trivial modifications to source, namely type modification, to automatically convert code to its unstaged BuildIt equivalent. As the unstaged BuildIt code can be specialized by staging certain variables, this tool will ultimately make it easier to stage and optimize C/C++ repositories with the BuildIt framework.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Association of GLP-1 Receptor Agonist Use with Kidney and Cardiovascular Outcomes in Stable Kidney Transplant Recipients</title>
<link href="https://hdl.handle.net/1721.1/162746" rel="alternate"/>
<author>
<name>Jung, Emma Yejoo</name>
</author>
<id>https://hdl.handle.net/1721.1/162746</id>
<updated>2025-09-19T04:49:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Association of GLP-1 Receptor Agonist Use with Kidney and Cardiovascular Outcomes in Stable Kidney Transplant Recipients
Jung, Emma Yejoo
Recent surges in the use of glucagon-like peptide-1 receptor agonists (GLP-1RA) have shown promise in reducing cardiovascular events and improving kidney function in patients with type 2 diabetes. Encouraged by these improvements, kidney transplant recipients (KTRs) have started using GLP-1RA. However, their effects in KTRs remain largely unexamined in clinical studies. This thesis uses a large-scale Electronic Health Record (EHR) database to perform a retrospective cohort analysis of the association between GLP-1RA use and kidney and cardiovascular outcomes among stable KTRs. Primary outcomes include all-cause mortality, major adverse kidney events (MAKE), and major adverse cardiac events (MACE). Among stable KTRs, GLP-1RA users show reduced risk for all-cause mortality (adjusted hazard ratio [aHR]: 0.45; 95% confidence interval [CI]: 0.32-0.62) and MAKE (aHR: 0.69; 95% CI: 0.58-0.81), but no significant difference for MACE (aHR: 0.84; 95% CI: 0.67-1.05). In addition, users show increased risk for irritable bowel syndrome (IBS) (aHR: 2.11; 95% CI: 1.07-4.15) and urinary tract infection (UTI) (aHR: 1.53; 95% CI: 1.27-1.85). These results indicate the potential of GLP-1RA to reduce mortality and adverse kidney outcomes in KTRs, while increasing the risk of IBS and UTI.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hardware Acceleration for Real-Time Compression of 3D Gaussians</title>
<link href="https://hdl.handle.net/1721.1/162745" rel="alternate"/>
<author>
<name>Kahler, Kailas B.</name>
</author>
<id>https://hdl.handle.net/1721.1/162745</id>
<updated>2025-09-19T04:49:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hardware Acceleration for Real-Time Compression of 3D&#13;
Gaussians
Kahler, Kailas B.
3D Gaussian Splatting (3DGS) is a technique for novel view synthesis (generating images of a scene from new viewpoints using images captured from other viewpoints) that has gained popularity for its reduced computational overhead, resulting in faster training and rendering times compared to other methods such as Neural Radiance Fields (NeRFs). Its applications outside of strictly novel view synthesis have also been explored, with monocular simultaneous localization and mapping (SLAM) in robotics being an emergent application. However, because of limited on-board battery capacity, the computer hardware used in small robots is much less capable than the high-powered GPUs that the 3DGS algorithm was originally developed on, having less compute as well as less memory capacity and bandwidth. While there has been work developing specialized compute for the rendering pipeline of 3DGS, memory remains an obstacle to deployment. The Gaussian map can occupy from 1 MB to 700 MB of memory, which is too large to store on-chip within micro-robots and large enough that moving Gaussians from memory to compute can dominate power consumption. While there has been prior work on algorithms for compressing Gaussian representations, they are not yet capable of running in real time on the hardware present in these robots, as would be required for SLAM. Thus, this thesis explores the limits of these compression methods on current hardware, resulting in an optimized CUDA implementation with more than 100× the throughput of prior work, achieving real-time operation on workstation-class hardware. After concluding that custom hardware is necessary for further improvement, this thesis also presents a hardware accelerator that approaches real-time compression performance within a reduced power budget, outperforming an NVIDIA Jetson Orin Nano with 64% higher throughput while using 1/16th of the multipliers and drawing 38% of the power when running at 100 MHz on an AMD UltraScale+ FPGA.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Personalization of AI Tutor Based on Knowledge Graphs</title>
<link href="https://hdl.handle.net/1721.1/162744" rel="alternate"/>
<author>
<name>Huang, Sheng</name>
</author>
<id>https://hdl.handle.net/1721.1/162744</id>
<updated>2025-09-19T04:49:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Personalization of AI Tutor Based on Knowledge Graphs
Huang, Sheng
Personalized tutoring, tailored to the specific knowledge and needs of individual students, has been shown to significantly enhance academic performance. Research by Schmidt and Moust, for example, highlights that tutors who engage with students on a personal level are more effective in guiding them toward higher academic achievement [1]. Inspired by this principle, the Axiom group at the MIT Media Lab developed an AI tutor for their Intro to Programming courses. The initial version of the tutor, RAGS, relied on analyzing past conversations between students and the tutor, as well as course content, to generate personalized responses. While this approach showed promise, it faced scalability challenges, such as the need to store an ever-growing volume of conversation history and the risk of exceeding token limits in prompt context windows. Additionally, the model occasionally struggled with over-generalization, particularly when responding to vague questions based solely on historical interactions. To address these limitations, this thesis introduces a new approach: a student knowledge graph. Rather than relying on an expanding archive of past conversations, the knowledge graph uses weighted nodes to represent a student’s understanding of each concept. A weight of -8 indicates subpar understanding, while a weight of 8 signifies mastery. After pre-processing the course data, the graph maintains a fixed size, eliminating the need for additional storage over time. This innovation solves two critical problems:
1. Scalability: By leveraging a fixed-size PostgreSQL database, the student knowledge graph avoids the storage challenges associated with saving endless conversation histories.
2. Improved Personalization: Instead of sifting through old conversations, the tutor uses concept weights to generate more precise and contextually relevant responses, even to vague questions.
Testing and evaluation of the implemented system demonstrate its effectiveness in both scalability and response quality. Over 60% of survey participants reported that the knowledge graph-enhanced tutor provided clearer and more relevant guidance, particularly when building on concepts they already understood. Additionally, over 80% of respondents noted improvements in the tutor’s ability to address weak areas and provide targeted practice, especially when preparing for quizzes or exams.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SPIRAL: Iterative Subgraph Expansion for Knowledge-Graph Based Retrieval-Augmented Generation</title>
<link href="https://hdl.handle.net/1721.1/162743" rel="alternate"/>
<author>
<name>Hadjiivanov, Michael D.</name>
</author>
<id>https://hdl.handle.net/1721.1/162743</id>
<updated>2025-09-19T04:49:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">SPIRAL: Iterative Subgraph Expansion for Knowledge-Graph Based Retrieval-Augmented Generation
Hadjiivanov, Michael D.
Large language models (LLMs) excel at generating fluent answers but are prone to hallucination when the prompt fails to anchor them to verifiable facts. Retrieval-augmented generation (RAG) mitigates this risk, yet existing graph-based retrievers either return bloated neighborhoods or incur prohibitive latency on large knowledge graphs (KGs). We introduce SPIRAL—Supervised Prior + Iterative Reinforcement with Adaptive Labelling—a lightweight two-stage framework that constructs compact, tree-shaped evidence subgraphs. This differs from previous work in its use of a trained, iterative policy network built on top of a prior over triples, delivering improved performance on multi-hop question answering tasks. Stage 1 trains a single-label GLASS-GNN on shortest-path heuristics, producing frozen, question-aware node embeddings at negligible runtime cost with significant local topology awareness around question entities. Stage 2 layers a GLASS policy—which re-labels the partial subgraph at each step—on top of these embeddings and optimizes it with proximal policy optimization. The policy scores only the 1-hop frontier, enabling sub-second inference even on million-edge graphs. On the multi-hop KGQA benchmark WebQSP, SPIRAL attains 0.95 triple recall and 0.97 answer recall while retrieving at most 50 triples—doubling the sampling efficiency of the strongest prior work. Coupled with Llama 3.1-8B, the retrieved trees boost Hit@1 by 2.5% over SubgraphRAG. Ablation studies confirm that adaptive labels are critical for multi-hop reasoning. SPIRAL demonstrates that accurate and concise retrieval is achievable without resorting to massive models or expensive graph crawls, opening the door to real-time, KG-grounded assistants on modest hardware.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Injection of Domain-Specific Knowledge for Enterprise Text-to-SQL</title>
<link href="https://hdl.handle.net/1721.1/162742" rel="alternate"/>
<author>
<name>Choi, Justin J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162742</id>
<updated>2025-12-09T18:27:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Injection of Domain-Specific Knowledge for Enterprise Text-to-SQL
Choi, Justin J.
This work examines the current state of using large language models (LLMs) to solve Text-to-SQL tasks on databases in an enterprise setting. Benchmarks on publicly available datasets do not fully capture the difficulty and complexity of this task in a real-world, enterprise setting. This study examines the critical steps needed to work with enterprise data, as well as the use of knowledge-injection to enhance the performance of LLMs on Text-to-SQL tasks. We begin by evaluating the baseline performance of LLMs on enterprise databases, revealing that a predominant source of failure stems from a lack of domain-specific knowledge. To improve performance, we explore knowledge-injection: the process of incorporating internal and external knowledge. Internal knowledge consists of database-specific information such as join logic, while external knowledge refers to institutional acronyms or group names. We present a hybrid retrieval pipeline that combines embedding-based and text-based search with LLM-guided ranking to supply models with relevant external knowledge during Text-to-SQL generation. We evaluate the impact of knowledge-injection by testing the performance of LLMs on the table retrieval task after being augmented with appropriate external knowledge. We demonstrate that knowledge-injection significantly improves accuracy on table retrieval using BEAVER, an enterprise-level Text-to-SQL benchmark. Our findings highlight the importance of domain-specific knowledge-injection and retrieval augmentation in bringing LLMs closer to deployment in enterprise-grade database systems, as well as common failure modes that occur when executing enterprise Text-to-SQL.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Feasibility of Transaction Scheduling via Hardware Accelerators</title>
<link href="https://hdl.handle.net/1721.1/162741" rel="alternate"/>
<author>
<name>Chomphoochan, Thanadol</name>
</author>
<id>https://hdl.handle.net/1721.1/162741</id>
<updated>2025-09-19T04:49:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating the Feasibility of Transaction Scheduling via Hardware Accelerators
Chomphoochan, Thanadol
As single-thread performance plateaus, modern systems increasingly rely on parallelism to scale throughput. Yet, efficiently managing concurrency—particularly in transactional systems—remains a major bottleneck. This thesis explores the feasibility of accelerating transaction scheduling via hardware, leveraging FPGAs to offload scheduling logic from the CPU. We revisit Puppetmaster, a hardware transaction scheduler, and present a redesigned architecture emphasizing deployability, modularity, and evaluation. We implement both an optimized software baseline and a Bluespec-based hardware design, evaluating their performance across synthetic YCSB-style workloads with varying contention levels. Our hardware prototype demonstrates competitive throughput, achieving over 90% of peak throughput even under high-contention workloads. These results validate the potential of transaction scheduling as a target for hardware acceleration and highlight promising directions for future hybrid hardware-software concurrency-control systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Exploration of Thermodynamic Models of Geological CO₂ Injection</title>
<link href="https://hdl.handle.net/1721.1/162740" rel="alternate"/>
<author>
<name>Edelman, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/162740</id>
<updated>2025-09-19T04:49:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computational Exploration of Thermodynamic Models of&#13;
Geological CO₂ Injection
Edelman, Jonathan
This thesis investigates the behavior of carbon dioxide flow in porous media through high-fidelity computational modeling, with a specific focus on the impact of the Span-Wagner equation of state (EOS). Accurate modeling of CO₂ transport in subsurface environments is essential for applications such as carbon capture and storage (CCS). We model the entire flow from injection, down a vertical pipe, and into a porous reservoir. To this end, we utilize the MOOSE (Multiphysics Object-Oriented Simulation Environment) framework developed by Idaho National Laboratory to perform finite element simulations. A key contribution of this work is the successful coupling of a porous rock domain with a one-dimensional pipe flow simulation in Julia, enabling a broader representation of injection scenarios. The study examines how the thermodynamic accuracy of the Span-Wagner EOS influences flow characteristics in comparison to the ideal gas EOS. Through a series of coupled pipe-reservoir simulations, we assess variations in pressure and density as CO₂ is injected from the pipe into the porous medium. The model can detect phase change conditions, allowing us to predict the maximum mass flux that can be achieved below the liquefaction threshold, as defined by the binodal curve in the CO₂ phase diagram at a given temperature. The results highlight the importance of EOS selection in predicting multiphase flow behavior, especially under conditions relevant to geological storage. Furthermore, we find that the ideal gas EOS underpredicts injection rates under the same conditions. This integrated modeling approach advances the understanding of thermodynamic effects in coupled subsurface flow systems and supports the development of reliable tools for large-scale carbon storage applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clinical Text De-identification Using Large Language Models: Insights from Organ Procurement Data</title>
<link href="https://hdl.handle.net/1721.1/162739" rel="alternate"/>
<author>
<name>Dahleh, Omar</name>
</author>
<id>https://hdl.handle.net/1721.1/162739</id>
<updated>2025-09-19T04:49:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Clinical Text De-identification Using Large Language Models: Insights from Organ Procurement Data
Dahleh, Omar
This thesis presents a novel approach to the de-identification of clinical notes from Organ Procurement Organization (OPO) records, leveraging advanced natural language processing (NLP) methodologies. Specifically, we employ in-context learning using large language models (LLMs) to effectively identify and remove protected health information (PHI), aiming to maintain high data utility post-redaction. Our work systematically evaluates the performance of the LLM-based method against established baseline techniques, including traditional Named Entity Recognition (NER) and rules-based systems. Through a series of experiments, we assess the strengths and limitations of each method regarding precision and recall. This work will contribute to a uniquely extensive dataset, comprising millions of de-identified OPO clinical notes, which will facilitate ethical healthcare research and enhance compliance with contemporary data protection standards. Ultimately, this dataset holds significant potential for improving processes and outcomes within the field of organ donation and procurement.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient ML Inference via Matrix-Vector Approximations</title>
<link href="https://hdl.handle.net/1721.1/162737" rel="alternate"/>
<author>
<name>Li, Daniel D.</name>
</author>
<id>https://hdl.handle.net/1721.1/162737</id>
<updated>2025-09-19T04:49:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Efficient ML Inference via Matrix-Vector Approximations
Li, Daniel D.
Efficient inference is a growing priority in deep learning, where large model sizes and increasing deployment demands pose challenges for latency, memory, and energy usage. This thesis presents a unified framework for evaluating approximation methods that accelerate inference by modifying weight matrices. We model each method as a function f_c(A) that approximates a weight matrix A under a compression rate c, and assess its impact on both matrix–vector accuracy and downstream task performance. We conduct empirical evaluations across two representative models, AlexNet on CIFAR10 and DistilBERT on AG News, comparing quantization, sparsification, and low-rank approximations. Our analysis spans four perspectives: (1) how different methods trade off ℓ₂ error and compression, (2) how weight statistics and input distributions shape error, (3) how well ℓ₂ error predicts classification accuracy, and (4) how idealized compression differs from real memory savings. We find that sparsification offers a strong trade-off between storage and accuracy, particularly because it preserves task-relevant structure in the weights. We also show that ℓ₂ error is not always a reliable proxy for accuracy, especially when input data lie on low-dimensional manifolds. These results suggest that approximation quality must be evaluated not only by global distortion metrics, but also by how the method interacts with model structure and input distributions. Our findings offer practical guidance for deploying efficient deep learning models and shed light on how compression affects performance in real-world settings.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Pedagogical Multimodal System for Mathematical Problem-Solving and Visual Reasoning</title>
<link href="https://hdl.handle.net/1721.1/162736" rel="alternate"/>
<author>
<name>Lee, Jimin</name>
</author>
<id>https://hdl.handle.net/1721.1/162736</id>
<updated>2025-09-19T04:49:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Pedagogical Multimodal System for Mathematical Problem-Solving and Visual Reasoning
Lee, Jimin
Effective reasoning often requires more than text or language. It requires visualizing, drawing, gesturing, and interacting, for both humans and artificial intelligence (AI). Specifically in educational subjects such as geometry and graphs, visual tools like auxiliary annotations and drawings can greatly help students understand abstract theories. This thesis explores how multimodal interaction between humans and AI helps humans engage with the system more naturally and effectively, leading to improved problem-solving in mathematical settings. Recent large multimodal models (LMMs) have the ability to facilitate collaborative reasoning by supporting textual, visual, and interactive inputs, diversifying methods of communication between humans and AI. Utilizing such advancements, this thesis also presents the development of Interactive Sketchpad, a tutoring system that combines language-based explanations with interactive visualizations to enhance learning. It also reviews findings from user studies with Interactive Sketchpad, demonstrating that multimodality contributes to user task comprehension and engagement levels. Together, these contributions can reframe the role of AI in education as a visual and interactive collaborator that supports deeper reasoning rather than simply providing answers. Furthermore, this work demonstrates the potential of multimodal human-AI systems in fostering engagement and scaling personalized, visual learning across domains.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast and Scalable Subgraph Learning</title>
<link href="https://hdl.handle.net/1721.1/162735" rel="alternate"/>
<author>
<name>Liang, Derrick</name>
</author>
<id>https://hdl.handle.net/1721.1/162735</id>
<updated>2025-09-19T04:49:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fast and Scalable Subgraph Learning
Liang, Derrick
Graph Neural Networks (GNNs) are a powerful framework for learning over structured data, enabling predictive modeling across domains such as bioinformatics, recommendation systems, and financial fraud detection. While scalable systems like SALIENT++ have advanced the training of node-level GNN tasks at industrial scale, they do not support an emerging class of workloads: subgraph classification, which is increasingly common in real-world applications. Prior implementations address this gap by modifying both the data pipeline and the model architecture—but at the cost of composability, creating tightly coupled systems that slow further development. This thesis introduces MOSAIC, a lightweight data transformation that reframes subgraph classification as nodewise prediction by augmenting the graph with representative nodes. This approach enables direct compatibility with SALIENT++ and other nodewise systems while decoupling workload format, dataloader design, and model architecture. I demonstrate that MOSAIC enables modular reuse of architectures like GraphSAGE and subgraph-aware components from GLASS, while preserving SALIENT++’s system-level scalability. On the large-scale Elliptic2 dataset, this integration reduces training memory usage by 2.8× and epoch runtime from over 90 minutes to 0.4 seconds—while improving classification performance. I implement MOSAIC as a succinct (&lt;100-line), reusable preprocessing script, enabling integration of the GLASS architecture into SALIENT++ in &lt;10 lines of code, compared to Wang et al.’s tightly coupled 500+ line design. These results highlight the feasibility of scalable, composable experimentation for subgraph learning tasks in high-performance GNN systems.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Scene Editing via Semantically Trained 3D Gaussians</title>
<link href="https://hdl.handle.net/1721.1/162734" rel="alternate"/>
<author>
<name>Lam, Jordan</name>
</author>
<id>https://hdl.handle.net/1721.1/162734</id>
<updated>2025-09-19T04:49:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Dynamic Scene Editing via Semantically Trained 3D&#13;
Guassians
Lam, Jordan
Image-based 3D scene reconstruction continues to be a challenge, as it involves solving both the problem of a sufficient 3D representation and the 3D reconstruction itself. One approach to the rendering problem is 3D Gaussian Splatting, because of its potential to produce fast and realistic renders via a 3D Gaussian representation. With many applications in the entertainment industry, there is motivation to use 3D Gaussian Splatting not only for reconstructing 3D dynamic scenes but also for editing them. However, extending the problem to dynamic 3D scenes proves to be a challenging task, as it involves discerning the correct representation of a 3D scene while maintaining the capability to render in real time. State-of-the-art methods reconstruct dynamic scenes or edit static scenes, but the problem of editing dynamic scenes is still underexplored. This thesis analyzes the feasibility of editing semantically trained Gaussians for dynamic 3D scene editing. By training 3D Gaussians to represent the semantics across the time steps of a dynamic 3D scene, these primitives can be combined with an image editing pipeline to perform real-time, realistic 3D scene editing. Results show that editing segmented 3D Gaussians produces higher-quality and more efficient renders than editing without segmentation. However, when evaluated for mainstream applications, results show the impracticality of this pipeline and draw focus to memory and editing limitations that need to be further researched for future advances in 3D Gaussian Splatting.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralized AI for Methylation Data with Applications to Precision Health</title>
<link href="https://hdl.handle.net/1721.1/162733" rel="alternate"/>
<author>
<name>Jamee, Mehrab S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162733</id>
<updated>2025-09-19T04:49:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decentralized AI for Methylation Data with Applications to&#13;
Precision Health
Jamee, Mehrab S.
Advances in precision health rely on integrating large-scale genomic data to identify biomarkers and predict health outcomes. However, sharing sensitive patient data between institutions like hospitals poses significant privacy and security challenges, limiting collaboration and the development of robust machine learning models. This thesis proposes a decentralized artificial intelligence framework for analyzing DNA methylation data, enabling institutions to collaboratively train models without exchanging sensitive information. By taking advantage of generative deep learning techniques and federated learning paradigms, the framework aims to impute missing biomarkers in fragmented datasets and improve the accuracy of downstream predictive tasks, such as predicting chronological age, mortality, and cancer outcomes. Two intermediate models are implemented and evaluated in this thesis. The first predicts age from DNA methylation data and can be used to evaluate the imputation model. The second is an imputation model that uses a conditional autoencoder architecture to reconstruct missing biomarker data in clinical datasets; it is designed to take advantage of contextual methylation embeddings made available by recently published pretrained epigenomics foundation models. This work seeks to advance the use of decentralized AI in epigenomics, with the ultimate goal of improving personalized healthcare while preserving patient privacy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Smallholder Field Delineation</title>
<link href="https://hdl.handle.net/1721.1/162732" rel="alternate"/>
<author>
<name>Janjigian, Lily T.</name>
</author>
<id>https://hdl.handle.net/1721.1/162732</id>
<updated>2025-09-19T04:49:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Exploring Smallholder Field Delineation
Janjigian, Lily T.
Accurate crop field delineation from satellite imagery is a critical component of agricultural monitoring. However, most existing models are developed and evaluated in large-scale, industrial agricultural regions, where field boundaries are relatively regular and high-quality annotated data is more readily available. In contrast, smallholder regions—where fields are smaller, more irregularly shaped, and often lack precise geospatial labels—remain underrepresented in both data and model performance. This thesis investigates model architectures, loss functions, and learning paradigms for improving segmentation performance in smallholder settings. Using datasets from Austria, India, and Rwanda, we evaluate several model configurations including ResUNet++ with Dice+BCE and Tanimoto+BCE losses, a meta-learned ResUNet++ using Model-Agnostic Meta-Learning (MAML), and SAM2 ViT-H, a large vision transformer released by Meta, evaluated in a zero-shot setting. We introduce a data processing pipeline that converts vector field boundaries from the FTW dataset into high-resolution image–mask pairs suitable for supervised learning. Quantitative and qualitative results reveal that models trained on industrial-scale data perform poorly in smallholder regions without adaptation. SAM2 exhibits strong zero-shot performance, especially on larger fields, while ResUNet++ models trained directly on India perform more consistently across small-to-medium sized fields. MAML yielded underwhelming performance under resource constraints, highlighting the need for further tuning. These findings underscore the importance of geographically diverse, well-aligned training data and support the case for developing globally representative agricultural segmentation datasets.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>You Only Look Twice: An Ensemble Deep Learning Model for Wildfire Detection Using Terrestrial Camera Networks</title>
<link href="https://hdl.handle.net/1721.1/162731" rel="alternate"/>
<author>
<name>Jones, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162731</id>
<updated>2025-09-19T04:49:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">You Only Look Twice: An Ensemble Deep Learning Model&#13;
for Wildfire Detection Using Terrestrial Camera Networks
Jones, John M.
Wildfires represent a growing global threat that requires rapid detection and response to minimize environmental damage, economic losses, and human casualties. In the United States, California stands out as a particularly prominent wildfire hot spot. Recent fire seasons have shattered historical records and been particularly devastating. This work investigates innovative methods for classifying and localizing wildfires through terrestrial cameras positioned on elevated terrain, aimed at improving early detection capabilities and response times while maintaining computational efficiency and reliability for the U.S. Space Force in Southern California. We present YOL2, a novel ensemble approach that combines a fine-tuned ConvNeXt Convolutional Neural Network incorporating a Dynamic Tanh normalization layer with a fine-tuned YOLO11 model for precise localization. Using a comprehensive dataset of 33,636 time-sequenced images from terrestrial cameras across the United States and Europe, our system achieves 98% fire detection accuracy and 55% localization mean average precision [50:95]. The implementation of Dynamic Tanh normalization—applied for the first time in wildfire detection—enhances computational efficiency without sacrificing performance. The images used capture the spread of incipient fires over time, with most containing bounding boxes denoting the approximate location of fire, allowing our system to identify fires quickly while minimizing false positives. Importantly, our spatiotemporal system operates effectively without requiring individual models to rely on multiple time steps as input, enabling modular component replacement and adaptation. The use of pan, tilt, and zoom cameras in concert with our YOLO model provides a more computationally efficient confirmation of fire than alternative methods, showing that extracting better results from less information is possible. Beyond wildfire applications, the YOL2 ensemble methodology has broad implications for remote sensing. This work establishes a foundation for highly efficient visual detection systems applicable across numerous domains requiring rapid and accurate object identification and localization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards transparent representations: on internal structure and external world modeling in LLMs</title>
<link href="https://hdl.handle.net/1721.1/162730" rel="alternate"/>
<author>
<name>Hariharan, Kaivalya</name>
</author>
<id>https://hdl.handle.net/1721.1/162730</id>
<updated>2025-09-19T04:49:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards transparent representations: on internal structure and external world modeling in LLMs
Hariharan, Kaivalya
Large language models (LLMs) generalize far beyond their training distribution, enabling impressive downstream performance in domains vastly different from their pretraining distribution. In this thesis, we develop a data-centric view of machine learning. We suggest that the deep generalization of LLMs is best understood by studying the relationships between the four fundamental components of this data-centric view: pretraining data, test-time inputs, model outputs, and internal structure. Of these, we present two full research studies, characterizing test-time inputs and internal structure. Chapter 1 develops the data-centric view of machine learning and outlines the thesis. Chapter 2 presents Breakpoint, a method for generating difficult coding tasks for models at scale that attempts to disambiguate the factors that make problems difficult at test time. Chapter 3 analyzes the structure of gradient-based jailbreaks (GBJs) in LLMs. We argue that even though GBJs are more out of distribution than even random text, they induce a low-rank, structured change in models. Finally, Chapter 4 discusses the recent rise of reasoning models and proposes some lines of future work in the data-centric view towards developing a more robust understanding of LLMs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biomechanical Validation of Skeletal Tracking Data and Developing Action Recognition Models for Basketball: A Baseline for NBA Officiating Tools</title>
<link href="https://hdl.handle.net/1721.1/162729" rel="alternate"/>
<author>
<name>Hong, Stephen S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162729</id>
<updated>2025-09-19T04:49:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Biomechanical Validation of Skeletal Tracking Data and Developing Action Recognition Models for Basketball: A Baseline for NBA Officiating Tools
Hong, Stephen S.
Optical tracking technology in sports has advanced rapidly in recent years, enabling new opportunities for data-driven analysis and tools to enhance the game. This study presents a framework for processing and analyzing a new skeletal tracking dataset collected from NBA basketball games. The methodology includes biomechanical joint validation, anomaly detection, and region-based consistency analysis to assess the integrity of player motion data. Joint movement anomalies are used to detect tracking errors, while court region and stadium-level evaluations help identify where the optical tracking system may be underperforming. These patterns can guide data providers toward specific areas that require refinement, offering a clearer starting point for improving system accuracy. After cleaning the dataset of 117 NBA games, two action recognition models—a transformer-based model and a temporal graph neural network—are implemented to classify player actions, specifically dribbling, passing, shooting, and rebounding, from sequences of skeletal tracking frames. The objective is to establish a baseline for developing tools to support officiating decisions in the NBA. By leveraging spatiotemporal representations of joint motion, this work improves the reliability of skeletal tracking data and contributes to the advancement of automated decision support in professional sports officiating.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Enhanced Proposals for PINN-Based Neural Sampler Training</title>
<link href="https://hdl.handle.net/1721.1/162728" rel="alternate"/>
<author>
<name>Erives, Ezra</name>
</author>
<id>https://hdl.handle.net/1721.1/162728</id>
<updated>2025-09-19T04:49:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Enhanced Proposals for PINN-Based Neural Sampler Training
Erives, Ezra
Sampling from distributions whose density is known up to a normalizing constant is an important problem with a wide range of applications including Bayesian posterior inference, statistical physics, and structural biology. Annealing-based neural samplers seek to amortize sampling from unnormalized distributions by training neural networks to transport a family of densities interpolating from source to target. A crucial design choice in the training phase of such samplers is the proposal distribution used to generate the locations at which the loss is evaluated. Previous work has obtained such a proposal distribution by combining a partially learned vector field with annealed Langevin dynamics. However, isolated modes and other pathological properties of the annealing path imply that such proposals achieve insufficient exploration and thereby lower post-training performance. In this work, we extend prior approaches and characterize new families of proposals based on controlled Langevin dynamics. In particular, we propose continuously tempered diffusion samplers, which leverage exploration techniques developed in the context of molecular dynamics to improve proposal distributions. Specifically, a family of distributions across different temperatures is introduced to lower energy barriers at higher temperatures and drive exploration at the lower temperature of interest. We additionally explore proposals based on Langevin dynamics involving non-Newtonian kinetic energies. We empirically validate improved sampler performance driven by extended exploration.
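A compact numpy sketch of the kind of tempered Langevin proposal discussed above: one Euler-Maruyama step of overdamped Langevin dynamics at a chosen temperature, where higher temperatures flatten the target and help cross energy barriers. The score function, step size, and temperature schedule are all assumptions:

    import numpy as np

    def langevin_step(x, score, step, temperature, rng):
        # One Euler-Maruyama step of dx = score(x) dt + sqrt(2 T dt) dW;
        # with score = -grad U, the stationary density is proportional
        # to exp(-U / T), so larger T means broader exploration.
        noise = rng.standard_normal(x.shape)
        return x + step * score(x) + np.sqrt(2.0 * temperature * step) * noise

A continuously tempered sampler would sweep the temperature argument across a range of values, harvesting proposal locations at the low temperature of interest.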
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Sketch to Stage: Tools for Prototyping and Exporting Collaborative DMIs on the Web</title>
<link href="https://hdl.handle.net/1721.1/162727" rel="alternate"/>
<author>
<name>Luchko, Yaro</name>
</author>
<id>https://hdl.handle.net/1721.1/162727</id>
<updated>2025-09-19T04:49:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Sketch to Stage: Tools for Prototyping and Exporting Collaborative DMIs on the Web
Luchko, Yaro
This thesis presents tools and ideas for prototyping and exporting collaborative digital music instruments (DMIs) on the web, the primary purpose of which is to lower the barrier to making music and to enable easier collaboration. This is done in the context of the Creativitas website, which has become a tool of the MIT 21M.080 "Introduction to Music Technology" class to learn about music technology and audio on the web, and a tool for FaMLE (the Fabulous MIT Laptop Ensemble) to use in live performances. The website allows creators to execute code within an editor code box and partake in a practice known as live coding, ultimately creating both sound and visuals. Audio is primarily created with the Tone.js interactive web audio framework, and visuals are drawn on a provided canvas using p5.js. This thesis extends the Creativitas website by providing functionality for exporting the written code as a standalone website. The exported standalone websites serve as DMIs, with standard controls such as volume, tempo, and start and stop buttons. Furthermore, we discuss and implement strategies for synchronizing timing and instrument values. This includes state-of-the-art strategies, as well as ideas for creating extendable interfaces that can include more strategies as they are developed. We end with two examples of exported DMIs, which can be effectively used in performances.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Programmable Expressiveness in Non-Social Tasks: A Mixed-Methods Study of Middle School AI Learning</title>
<link href="https://hdl.handle.net/1721.1/162726" rel="alternate"/>
<author>
<name>Lei, Si Liang</name>
</author>
<id>https://hdl.handle.net/1721.1/162726</id>
<updated>2025-09-19T04:49:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Programmable Expressiveness in Non-Social Tasks: A Mixed-Methods Study of Middle School AI Learning
Lei, Si Liang
Background. Programmable expressive features—such as speech, facial expressions, and chatbot-style dialogue—are often promoted as tools to enhance engagement in educational robotics. While prior research shows benefits in socially-oriented tasks like storytelling or group collaboration, it remains unclear how student-controlled expressive blocks affect learning when the task itself is non-social. This study isolates the impact of such features in a context where expressiveness is not instructionally required. Method. We conducted a controlled, two-cohort study with 41 middle school students (ages 10–12) during a one-day AI-and-robotics workshop using the Doodlebot platform. Students in the experimental group had access to optional blocks enabling the robot to speak, emote, and use GPT-based responses. These features were hidden from the control group. All participants completed identical programming tasks (e.g., maze navigation, visual classification) that did not require social interaction. Data sources included pre/post surveys, facilitator notes, and student code. We applied the Mann–Whitney U test [1, 2] and reflexive thematic analysis [3, 4] to examine outcomes. Results. The expressive condition showed no significant gains in programming confidence or peer trust, but performed significantly worse on the post-workshop concept quiz (p = .007, r = .41). Qualitative data revealed that students in this group often used expressive blocks for entertainment rather than learning, leading to distraction, off-task behavior, and increased reliance on adult facilitation. Contributions. This study contributes (i) empirical evidence on the limitations of robot expressiveness in non-social learning contexts, (ii) a mixed-methods protocol for analyzing classroom robot deployments, and (iii) design guidance for aligning robot behavior with pedagogical intent. Implications. Expressiveness in educational robots should be contextually deployed—not assumed beneficial by default. In technical, goal-driven tasks that do not involve social reasoning, unscaffolded expressiveness may introduce cognitive overhead or divert attention. We propose a “dial-a-sociality” model, where robot behavior can be flexibly tuned to match the demands of the learning environment.
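A small scipy sketch of the Mann–Whitney U comparison cited above, together with one common way to recover the effect size r from the two-sided p-value; the helper name and the normal-approximation step are assumptions:

    import numpy as np
    from scipy.stats import mannwhitneyu, norm

    def mwu_with_effect_size(group_a, group_b):
        # Two-sided Mann-Whitney U test on two independent samples.
        stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
        # Effect size r = |z| / sqrt(N), with |z| implied by the p-value.
        n = len(group_a) + len(group_b)
        z = norm.isf(p / 2.0)
        return stat, p, z / np.sqrt(n)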
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-Speed Simulator for Millimeter-Wave Synthetic Aperture Radar</title>
<link href="https://hdl.handle.net/1721.1/162724" rel="alternate"/>
<author>
<name>Kuka, Adrian</name>
</author>
<id>https://hdl.handle.net/1721.1/162724</id>
<updated>2025-09-19T04:49:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">High-Speed Simulator for Millimeter-Wave Synthetic Aperture Radar
Kuka, Adrian
The past few years have witnessed growing interest in using millimeter-wave signals for non-line-of-sight (NLOS) perception tasks, with applications in robotics, augmented reality, and smart homes. However, existing systems suffer from a lack of large mmWave datasets, resulting in limited accuracy and generalizability compared to their line-of-sight, camera-based counterparts. We present the design, implementation, and evaluation of mmSim, a new, high-speed millimeter-wave (mmWave) simulator capable of producing large synthetic datasets to help drive the field of mmWave-based NLOS perception. mmSim introduces two main contributions to improve on the speed of existing mmWave simulators. First, it pre-selects the areas of the object that will produce reflections towards each simulated antenna location, allowing it to minimize future computation. Second, it introduces a coarse-to-fine approach that allows early, less critical steps to operate at lower resolutions, while maintaining the high resolution in later steps required for high-accuracy images. These techniques, combined with other performance optimizations, allow mmSim to achieve a more than 24x improvement in speed over state-of-the-art mmWave simulators.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards AI Safety via Interpretability and Oversight</title>
<link href="https://hdl.handle.net/1721.1/162723" rel="alternate"/>
<author>
<name>Kantamneni, Subhash</name>
</author>
<id>https://hdl.handle.net/1721.1/162723</id>
<updated>2025-09-19T04:49:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards AI Safety via Interpretability and Oversight
Kantamneni, Subhash
In this thesis, we advance AI safety through mechanistic interpretability and oversight methodologies across three key areas: mathematical reasoning in large language models (LLMs), the validity of sparse autoencoders, and scalable oversight. First, we reverse-engineer addition within mid-sized LLMs and discover that LLMs represent numbers as helices. We demonstrate that LLMs perform addition via the manipulation of these helices using a "Clock" algorithm, providing the first representation-level explanation of mathematical reasoning in LLMs, verified through causal interventions on model activations. Next, we rigorously evaluate sparse autoencoders (SAEs), a popular interpretability tool, by testing their effectiveness on the downstream task of probing. We test SAEs under challenging probing conditions, including data scarcity, class imbalance, label noise, and covariate shift. While SAEs occasionally outperform baseline methods, they fail to consistently enhance task performance, underscoring a potentially critical limitation of SAEs. Lastly, we introduce a quantitative framework to evaluate scalable oversight - a promising idea where weaker AI systems supervise stronger ones - as a function of model intelligence. Applying our framework to four oversight games ("Mafia," "Debate," "Backdoor Code," and "Wargames"), we identify clear scaling patterns and extend our findings through a theoretical analysis of Nested Scalable Oversight (NSO), deriving conditions for optimal oversight structures. Together, these studies advance our understanding of AI interpretability and alignment, providing insights and frameworks to progress AI safety.
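A worked numpy sketch of the helix-and-rotation picture described above: a number is embedded with a linear coordinate plus a point on a circle, and the periodic parts of two numbers compose by the usual angle-addition identities, so rotating by a and then by b lands on a + b. The single period used here is an illustrative simplification (the study fits several):

    import numpy as np

    def helix(a, period=10.0):
        theta = 2.0 * np.pi * a / period
        return np.array([a, np.cos(theta), np.sin(theta)])

    def clock_add(ha, hb):
        # Compose the circular parts via the cos/sin sum identities ("Clock" step).
        cos_ab = ha[1] * hb[1] - ha[2] * hb[2]
        sin_ab = ha[1] * hb[2] + ha[2] * hb[1]
        return np.array([ha[0] + hb[0], cos_ab, sin_ab])

    # clock_add(helix(3), helix(4)) matches helix(7) up to floating point.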
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metagradient Descent: Differentiating Large-Scale Training</title>
<link href="https://hdl.handle.net/1721.1/162722" rel="alternate"/>
<author>
<name>Chen, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/162722</id>
<updated>2025-09-19T04:49:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Metagradient Descent: Differentiating Large-Scale Training
Chen, Benjamin
A major challenge in training large-scale machine learning models is configuring the training process to maximize model performance, i.e., finding the best training setup from a vast design space. In this work, we unlock a gradient-based approach to this problem. We first introduce an algorithm for efficiently calculating metagradients -- gradients through model training -- at scale. We then introduce a "smooth model training" framework that enables effective optimization using metagradients. With metagradient descent (MGD), we greatly improve on existing dataset selection methods, outperform accuracy-degrading data poisoning attacks by an order of magnitude, and automatically find competitive learning rate schedules.
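A toy JAX sketch of a metagradient in the sense used above: a short SGD run is unrolled inside a function whose output is the validation loss, so ordinary autodiff yields the gradient of final performance with respect to a training hyperparameter (here a log learning rate; the linear model and step count are assumptions):

    import jax
    import jax.numpy as jnp

    def train_then_evaluate(log_lr, w0, x, y, x_val, y_val, steps=20):
        # Unrolled SGD on a least-squares model; the whole run is differentiable.
        lr = jnp.exp(log_lr)
        loss = lambda w: jnp.mean((x @ w - y) ** 2)
        w = w0
        for _ in range(steps):
            w = w - lr * jax.grad(loss)(w)
        return jnp.mean((x_val @ w - y_val) ** 2)

    metagrad_fn = jax.grad(train_then_evaluate)  # d(val loss) / d(log lr)

Scaling this idea to full model training is the hard part the abstract addresses; naive unrolling as above quickly becomes memory-bound.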
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A simplified approach to calculating personalized estimates for electric vehicle charging delays</title>
<link href="https://hdl.handle.net/1721.1/162721" rel="alternate"/>
<author>
<name>Chen, Helen</name>
</author>
<id>https://hdl.handle.net/1721.1/162721</id>
<updated>2025-09-19T04:49:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A simplified approach to calculating personalized estimates for electric vehicle charging delays
Chen, Helen
In the past decade, electric vehicles (EVs) have gained traction as a cleaner alternative to internal combustion engine vehicles, commonly referred to as gas-powered vehicles. To promote EV adoption, the government has implemented various regulations and incentives to support the transition to cleaner transportation. However, EV adoption in the United States has progressed more slowly than expected, with EVs accounting for less than 10 percent of new vehicle sales in 2023. Recent surveys indicate that a significant barrier is the perceived inconvenience and uncertainty surrounding EV charging, particularly the additional time required to charge during active use, which we call charging delay. Currently, there exist some models for estimating these charging delays, but these models require users to input a significant amount of information, such as their daily driving schedules, locations of charging stations, and exact distances of trips taken each year, which many users may not even remember. These more complex models are likely to overwhelm users, especially those who may be entirely new to EVs. To fill this gap, this thesis introduces a simplified model for estimating personalized annual EV charging delay using a set of easy-to-provide inputs, including typical driving behavior and access to home and work charging. The model logic captures delay from both routine usage, such as weekly driving patterns or typical trips, and occasional, high-energy long-distance trips, which, while not routine, are still important to account for. For weekly trips, the model considers four scenarios based on combinations of home and work charging access to determine driving and charging schedules. For long-distance travel, the model uses data from the 2022 National Household Travel Survey (NHTS) and performs multiple iterations of bootstrap resampling to create synthetic distributions of long-distance trips within a year. Data related to individual routine vehicle usage and charging delay is unavailable, so we are unable to validate the model’s performance through accuracy calculations. Instead, we performed a one-at-a-time sensitivity analysis to better understand how charging delay is affected by different factors. We found that access to private charging, such as home or work charging, improves charging delay robustness for regular weekly trips, with the exception that relying solely on work charging on workdays can cause stepwise increases in non-workday delays. Additionally, long-distance trip delays are not affected by private charging access and follow a stepwise pattern based on vehicle range. In general, the simplified approach presented in this thesis offers a more accessible way for current and prospective EV owners to clearly understand their own expected experience of EV ownership.
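A small numpy sketch of the bootstrap step described above: observed long-distance trip distances are resampled with replacement to form synthetic years of travel, and a per-trip delay function converts each synthetic year into a total annual delay. The function names and the fixed trip count are assumptions:

    import numpy as np

    def synthetic_annual_delays(trip_distances, trips_per_year, delay_fn,
                                n_boot=1000, seed=0):
        # Each bootstrap replicate is one synthetic year of long-distance trips.
        rng = np.random.default_rng(seed)
        totals = []
        for _ in range(n_boot):
            sample = rng.choice(trip_distances, size=trips_per_year, replace=True)
            totals.append(sum(delay_fn(d) for d in sample))
        return np.array(totals)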
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Limits of Temporal Abstractions for Reinforcement Learning with Sparse Rewards</title>
<link href="https://hdl.handle.net/1721.1/162720" rel="alternate"/>
<author>
<name>Li, Zhening</name>
</author>
<id>https://hdl.handle.net/1721.1/162720</id>
<updated>2025-12-05T17:48:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Limits of Temporal Abstractions for Reinforcement Learning with Sparse Rewards
Li, Zhening
Skills are temporal abstractions that are intended to improve reinforcement learning (RL) performance through hierarchical RL. Despite our intuition about the properties of an environment that make skills useful, there has been little theoretical work aimed at characterizing these properties precisely. This work studies the utility of skills in sparse-reward environments with a discrete state space and finite action space. We show, both theoretically and empirically, that RL performance gains from skills are smaller in environments where successful trajectories are less compressible. In environments with a highly incompressible distribution of successful trajectories, using unexpressive skills such as macroactions will provably worsen RL performance. We hope our findings can guide research on automatic skill discovery and help RL practitioners better decide when and how to use skills.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Self-Supervised ECG Learning for Multimodal Clinical Tasks</title>
<link href="https://hdl.handle.net/1721.1/162719" rel="alternate"/>
<author>
<name>Chen, Peilin</name>
</author>
<id>https://hdl.handle.net/1721.1/162719</id>
<updated>2025-09-19T04:49:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Self-Supervised ECG Learning for Multimodal Clinical Tasks
Chen, Peilin
We present a multimodal clinical AI framework that integrates time series, images, and text to support robust diagnostic reasoning across diverse input combinations. We first introduce ECG-JEPA, a self-supervised encoder pretrained on multiple ECG datasets to learn generalizable time series representations. This unimodal pretraining improves ECG classification, achieving a 23-point AUC gain on the underrepresented Ga dataset. We then align and fuse these ECG embeddings with chest X-rays and EHR text using a vision–language model backbone, enabling end-to-end multimodal inference. Our results show that incorporating ECG signals meaningfully improves diagnostic performance, highlighting the value of multitask time series pretraining and modular fusion for clinical AI.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Hierarchical Approach to Quantitative Portfolio Optimization for Technology Development Project Portfolios (OPTIM-H)</title>
<link href="https://hdl.handle.net/1721.1/162718" rel="alternate"/>
<author>
<name>Huang, Roderick W.</name>
</author>
<id>https://hdl.handle.net/1721.1/162718</id>
<updated>2025-09-19T04:49:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Hierarchical Approach to Quantitative Portfolio Optimization for Technology Development Project Portfolios (OPTIM-H)
Huang, Roderick W.
Mean-Variance Portfolio Optimization (MVO) from Modern Portfolio Theory (MPT) has been a long-standing method to guide investment decisions for market-traded assets like stocks and bonds. Recent research shows that portfolio optimization developed using MPT could prove useful in investment decisions for technology projects. Traditionally, empirical data from past projects and statistically driven technology trends are used to predict the risk-return model necessary for MPT. This thesis introduces a new methodology, Optimizing Portfolios in Technologies Investments Methodology with Hierarchy (OPTIM-H), which extends MPT to make investment decisions within a hierarchical organizational structure of technology projects. An integrated dataset was developed to demonstrate this methodology, combining 19,000 data records from Techport and Small Business Innovation Research (SBIR) datasets. The dataset captures investment trends and maturity pathways across 17 taxonomy areas, revealing that most projects begin at Technology Readiness Levels (TRLs) 2–4, with average funding amounts near $300,000. OPTIM-H effectively distinguishes between broader technology groups and their subcategories, showing the impact of community interest on investment decisions. Furthermore, this work investigates k-means clustering as a tool for classifying technology projects for targeted investment, with the analysis identifying seven clusters and achieving a mean utility score of 0.595 with a standard deviation of 0.651.
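For reference, a minimal numpy sketch of the mean-variance step underlying MVO: maximizing expected return minus a risk penalty has a closed-form optimum proportional to the inverse covariance times the mean-return vector. The normalization and the absence of constraints are simplifying assumptions:

    import numpy as np

    def mean_variance_weights(mu, cov, risk_aversion=1.0):
        # Maximize mu'w - (risk_aversion / 2) * w' cov w;
        # the first-order condition gives w = cov^{-1} mu / risk_aversion.
        w = np.linalg.solve(risk_aversion * cov, mu)
        return w / np.sum(w)  # rescale to sum to one (shorting allowed here)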
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Canvas with a Large-Scale Social Annotation Platform</title>
<link href="https://hdl.handle.net/1721.1/162717" rel="alternate"/>
<author>
<name>Heiberger, Henry R.</name>
</author>
<id>https://hdl.handle.net/1721.1/162717</id>
<updated>2025-09-19T04:49:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Integrating Canvas with a Large-Scale Social Annotation Platform
Heiberger, Henry R.
The last decade has seen a growing interest in the use of collaborative annotation systems, educational tools that allow multiple users to asynchronously comment, highlight, and discuss digital content directly on the source material, transforming traditional classroom readings into a more engaging group activity. Originally developed by MIT CSAIL’s Haystack Group in 2012 under the direction of Professor David Karger, Nota Bene (NB) is a particular collaborative annotation tool that allows students to have annotated online discussions in the margins of textbooks, papers, and even webpages [1]. Though various studies have already proven its ability to succeed in a classroom setting, conversations with key stakeholders have revealed that the tool is missing a key feature found in many other popular collaborative annotation solutions: integration with the Canvas learning management system (LMS) [1–3]. Thus, this work sought to integrate the classroom management features that Canvas provides into the NB platform by supporting Canvas account linking, class importation and roster synchronization, and automatic grade uploading. By doing this, we hoped to improve NB’s quality as a classroom tool, enhancing its value to institutions, encouraging its wider adoption across the academic landscape, and aligning with a much broader trend of creating more integrated, efficient, and user-friendly educational technology solutions.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Passive-Scoping as a method for Large Language Model Robustness to Jailbreaks and Adversarial Examples</title>
<link href="https://hdl.handle.net/1721.1/162716" rel="alternate"/>
<author>
<name>Hernandez, Adriano</name>
</author>
<id>https://hdl.handle.net/1721.1/162716</id>
<updated>2025-09-19T04:49:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">On Passive-Scoping as a method for Large Language Model Robustness to Jailbreaks and Adversarial Examples
Hernandez, Adriano
Artificial Intelligence (AI) and large language models (LLMs) present not only a challenge for adversarial robustness, but also the natural emergence of unwanted capabilities. Current approaches to safeguarding AI and LLMs predominantly rely on explicitly restricting known instances of these. However, this places a burden on model developers, because they cannot anticipate all the potential attacks and undesirable capabilities. To solve this problem, we leverage interdisciplinary knowledge. In the field of information security, the principle of least privilege provides guidance on how to defend against unknown threats. In AI, the principle could be implemented by ensuring that developers specify the knowledge and capabilities an AI system should retain, restricting all others by default. We call this application of the principle of least privilege passive scoping. Our thesis makes two claims:
1. We argue that (a) passive scoping mitigates concerns about adversarial robustness and loss of control of AI systems and (b) passive scoping that edits the weights and activations at post-training time is underexplored in the literature.
2. Of possible approaches, our sparse autoencoder (SAE) filters can implement this underexplored type of passive scoping. They increase safety relative to LoRA finetuning and prompt engineering, but leave room for improvement.
The thesis is structured as follows:
1. Chapter 2 elucidates the challenges with adversarial robustness and loss-of-control risk. Chapter 3 puts forward a conceptual argument for the benefits of passive scoping, then analyzes the extent to which passive scoping has been attempted. These two chapters work together to defend claims 1a and 1b.
2. Chapter 4 defines our optimization problem. Chapter 5 defines our experimental methodology and metrics. These two define our success criteria for claim 2. Chapter 6 finalizes our defense of claim 2 based on our results.
3. Chapter 7 explores related work, Chapter 8 engages in a broader discussion, and Chapter 9 summarizes the contributions of this thesis.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explorations in AI and Creative Learning: New Tools to Expand How Young People Imagine, Create, and Tinker with Scratch</title>
<link href="https://hdl.handle.net/1721.1/162715" rel="alternate"/>
<author>
<name>Huang, Alexis</name>
</author>
<id>https://hdl.handle.net/1721.1/162715</id>
<updated>2025-09-19T04:49:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Explorations in AI and Creative Learning: New Tools to Expand How Young People Imagine, Create, and Tinker with Scratch
Huang, Alexis
As generative AI tools become increasingly prevalent in young people’s lives, these technologies have a growing influence over the way that children learn. While much of the early work at the intersection of AI and education has focused on the development of intelligent tutoring systems designed to deliver content more efficiently, this thesis explores how generative AI might be used to support the creative learning process by sparking curiosity, encouraging exploration, and helping young people express themselves creatively. In this thesis, I explore ways of integrating generative AI with Scratch, the world's largest programming community for children, while remaining aligned with the core values of Scratch: creativity, playfulness, and self-expression. I designed three tools that extend the Scratch ecosystem: Scratch Connect, which explores using generative AI to help Scratchers discover projects that inspire them to create while opening the black box of recommendation systems; scrAItch, which investigates how people can iterate with generative AI by using text-based inputs to create and tinker with Scratch projects; and Scratch Spark, which reimagines the new learner experience by using generative AI to help users create personally meaningful “spark projects.” This thesis describes the process of imagining, creating, and reflecting on these tools, including many of the challenges and tensions that we encountered along the way. I discuss observations and feedback from creative workshops with young people, and conclude by reflecting on open questions and opportunities for future work in designing generative AI tools that support creative learning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Hardware Design Choices on Neural Network Accuracy in Analog Inference Accelerators</title>
<link href="https://hdl.handle.net/1721.1/162714" rel="alternate"/>
<author>
<name>Forsythe, Eyan</name>
</author>
<id>https://hdl.handle.net/1721.1/162714</id>
<updated>2025-09-19T04:49:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Effects of Hardware Design Choices on Neural Network Accuracy in Analog Inference Accelerators
Forsythe, Eyan
Analog accelerators can enable energy-efficient and high-throughput deep neural network (DNN) computations by computing in memory. Unfortunately, device and circuit non-idealities in these accelerators, such as noise and quantization, can also lead to low DNN inference accuracy due to the computation errors they introduce. These errors are largely a function of both the choice of DNN workload and different hardware design choices, such as circuit topology and DNN operand encoding. Different hardware design choices can affect the energy, throughput, and area of the system, so it is important to understand how these design choices interact with DNN inference accuracy. However, there is a lack of a systematic understanding of how each of these hardware design decisions affects accuracy and how they interact with other design decisions. To address these issues, we model how hardware design choices can lead to analog errors such as noise and quantization. Then, we explore how these errors affect inference accuracy in analog accelerators and how tradeoffs can be made between inference accuracy, energy efficiency, area, and throughput. We find that analog errors generated from hardware design decisions can generate different amounts of accuracy loss depending on which layer in a DNN is subject to these analog errors. This means the structure of the DNN has a significant impact on how hardware design choices affect DNN inference accuracy, especially with respect to the individual layers of a DNN. We use knowledge of the relationships between device and circuit non-idealities to improve the accuracy of published analog accelerators and analyze the energy and area costs of the increased accuracy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of High-Resolution SAR ADC for Detection of Sub-Cortical Neuron Action Potentials for BMI Applications</title>
<link href="https://hdl.handle.net/1721.1/162713" rel="alternate"/>
<author>
<name>Guobadia, Omozusi E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162713</id>
<updated>2025-09-19T04:49:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design of High-Resolution SAR ADC for Detection of Sub-Cortical Neuron Action Potentials for BMI Applications
Guobadia, Omozusi E.
The advancement of brain-machine interfaces (BMIs) requires neural signal acquisition systems that are capable of resolving both fast, low-amplitude action potentials (APs) and slow, higher-amplitude local field potentials (LFPs) under stringent power and area constraints. This thesis presents the design and simulation of a high-resolution, low-power successive approximation register (SAR) analog-to-digital converter (ADC) tailored for sub-cortical neural signal detection. To optimize dynamic range and reduce power consumption, a novel adaptive zoom-and-tracking architecture is introduced, enabling the ADC to dynamically adjust its reference window based on LFP trends while maintaining high-resolution capture of APs. The proposed system integrates a bootstrapped track-and-hold circuit, a differential capacitive DAC, and a strong-arm comparator in the analog front-end, alongside a digital FIR filter and SAR logic with zoom-range control in the digital domain. Simulations validate the functionality of each subsystem independently and in concert, demonstrating the system’s ability to dynamically isolate APs from LFP-dominated baselines while reducing analog power draw by over 60% compared to fixed-range ADCs. This work offers a promising approach for scalable, energy-efficient neural recording architectures suited to future BMI applications.
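A behavioral Python sketch of the SAR conversion loop at the heart of the architecture above: the converter tests one bit per cycle from MSB to LSB, keeping each bit whenever the DAC estimate does not exceed the sampled input. The zoom-and-tracking extension would adjust the reference window between conversions; that part is omitted here:

    def sar_convert(vin, vref, bits=12):
        # Successive approximation: binary search over DAC codes.
        code = 0
        for i in reversed(range(bits)):
            trial = code + 2 ** i
            dac = vref * trial / 2 ** bits  # DAC output for this trial code
            if vin >= dac:
                code = trial
        return code

    # sar_convert(0.51, 1.0) returns 2088, the 12-bit code just below 0.51 * 4096.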
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transformer-Based Prediction of Coronary Artery Lumen Expansion Post Angioplasty Using Optical Coherence Tomography</title>
<link href="https://hdl.handle.net/1721.1/162712" rel="alternate"/>
<author>
<name>Gupta, Shreya</name>
</author>
<id>https://hdl.handle.net/1721.1/162712</id>
<updated>2025-09-19T04:49:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transformer-Based Prediction of Coronary Artery Lumen Expansion Post Angioplasty Using Optical Coherence Tomography
Gupta, Shreya
Coronary artery disease is the leading cause of mortality globally, resulting in an urgent and critical need to better understand both vessel morphology and the processes of intervention. Angioplasty is an intervention which causes a previously constricted vessel to expand via placement of a stent, and is affected by numerous characteristics of the vessel such as calcium eccentricity and size, wall thickness, and prior lumen size. Being able to accurately assess whether a stent will properly expand allows cardiologists to pursue pre-stenting calcium lesion modification strategies that help avoid dangerous complications of improper stenting. This work introduces a pipeline for post-stenting lumen area prediction from pre-stenting optical coherence tomography (OCT) images. This pipeline includes morphological correction of OCT image segmentations, explainable feature extraction from OCT segmentations, and a predictive transformer network that combines morphological features with injected stent information. The aim is for such a pipeline to be used to support clinical decision making.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Complete Visual and Geometric Object Reconstruction via Autonomous Robotic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/162711" rel="alternate"/>
<author>
<name>Fu, Evelyn</name>
</author>
<id>https://hdl.handle.net/1721.1/162711</id>
<updated>2025-09-19T04:49:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Complete Visual and Geometric Object Reconstruction via Autonomous Robotic Manipulation
Fu, Evelyn
Accurately simulating object dynamics based on real-world perception inputs has wide applications in digital twins and robotic manipulation. Yet doing so requires practitioners to carefully measure and reconstruct the dynamic and geometric properties of the objects, which is time-consuming and requires domain expertise. This project proposes an automatic pipeline that constructs 3D representations from a collection of real objects, which can further be used to generate assets with accurate visual texture and collision geometry for use in simulation. The pipeline is designed to have minimal hardware requirements and to keep physical actuation time short, maximizing data collection on minimal hardware.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model-based Planning for Efficient Task Execution</title>
<link href="https://hdl.handle.net/1721.1/162710" rel="alternate"/>
<author>
<name>Ding, Wenqi</name>
</author>
<id>https://hdl.handle.net/1721.1/162710</id>
<updated>2025-09-19T04:49:30Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Model-based Planning for Efficient Task Execution
Ding, Wenqi
Robotic agents navigating 3D environments must continuously decide their next moves by reasoning about both visual observations and high-level language instructions. However, they plan in a high-dimensional latent space, opaque to human collaborators. Hence, it is difficult for humans to understand the agent’s decision-making process. This lack of interpretability hinders effective collaboration between humans and robots. The key question we are trying to answer in this thesis is: Can we build a unified planning framework that fuses visual and language inputs into a single, interpretable representation, so that humans can interpret robots’ decisions? We propose a model-based planning framework built around pretrained vision-language models (VLMs). We show that VLMs can be used to plan in a unified embedding space, where visual and language representations can be decoded back to human-interpretable forms. Empirical evaluation on vision-language navigation benchmarks demonstrates both improved sample efficiency and transparent decision making, enabling human-in-the-loop planning and more effective human-robot collaboration.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Global Non-Convex Optimization with Integer Variables</title>
<link href="https://hdl.handle.net/1721.1/162709" rel="alternate"/>
<author>
<name>Kriezis, Demetrios C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162709</id>
<updated>2025-09-19T04:49:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Global Non-Convex Optimization with Integer Variables
Kriezis, Demetrios C.
Non-convex optimization refers to the process of solving problems whose objective or constraints are non-convex. Historically, this type of problem has been very difficult to solve to global optimality, with traditional solvers often relying on approximate solutions. Bertsimas et al. [1] introduce a novel approach for solving continuous non-convex optimization problems to provable optimality, called the Relaxation Perspectification Technique - Branch and Bound (RPT-BB). In this thesis, we extend the RPT-BB approach to the binary, mixed-binary, integer, and mixed-integer variable domains. We outline a novel branch-and-bound algorithm that makes use of the Relaxation Perspectification Technique (RPT), as well as binary, integer, and eigenvector cuts. We demonstrate the performance of this approach on two representative non-convex problems, as well as two real-world non-convex optimization problems, and we benchmark its performance against BARON and SCIP, two state-of-the-art optimization solvers for non-convex mixed-integer problems. We observe that our algorithm, despite being more general, is able to outperform the state-of-the-art solvers on many problem instances.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing AI Agents for Automated Software Engineering with Palimpzest</title>
<link href="https://hdl.handle.net/1721.1/162708" rel="alternate"/>
<author>
<name>Li, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/162708</id>
<updated>2025-09-19T04:49:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing AI Agents for Automated Software Engineering with Palimpzest
Li, Jason
The deployment of large language models (LLMs) as autonomous agents is transforming the software development landscape. Increasingly, engineers are using natural language agents to expedite and guide development workflows, while large organizations are investing heavily in building agentic systems for tasks such as code generation and code repair. A key challenge in developing such systems is tuning agent hyperparameters—settings that affect performance such as choice of model, temperature settings, and context window sizes. As system complexity grows, the hyperparameter space expands, complicating optimization under real-world compute and time constraints. In this work, we present Palimpzest [1] as an agentic optimizer able to balance cost and performance objectives by tuning agent hyperparameters. We demonstrate that Palimpzest can tune our agent hyperparameters at 8.5 times lower cost and with 24 times greater time efficiency compared to conventional grid search. By integrating our custom-built Debugger and Code Editor Agents as new operators within Palimpzest, we enhance the system’s ability to resolve real-world GitHub issues. To facilitate hyperparameter selection, we also introduce File Coverage, Report Accuracy, and Patch Similarity alongside the traditional SWE-Bench Score as quality evaluation methods used by Palimpzest’s optimization loop. When evaluated on the SWE-Bench Lite [2] benchmark, our optimized system achieves a 15% score at a significantly lower cost compared to previous approaches.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Gradient Boosting and Generative Models: Hybrid Approach to Address Class Imbalance and Evaluation Gaps in Real-World Systems</title>
<link href="https://hdl.handle.net/1721.1/162707" rel="alternate"/>
<author>
<name>Lau, Mary</name>
</author>
<id>https://hdl.handle.net/1721.1/162707</id>
<updated>2025-09-19T04:49:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Integrating Gradient Boosting and Generative Models: Hybrid Approach to Address Class Imbalance and Evaluation Gaps in Real-World Systems
Lau, Mary
Anomaly detection remains a persistent challenge in machine learning due to extreme class imbalance, the high cost of false negatives, and the need to regulate false positives in real-world settings at scale. This thesis introduces Tail-end FPR Max Recall, a business-aware evaluation framework designed for such constrained environments. Using this framework, we benchmark LightGBM—a gradient boosting method known for its computational efficiency and predictive accuracy—on an imbalanced dataset, comparing its performance against standard academic evaluation criteria. Our results demonstrate that Tail-end FPR Max Recall fills critical gaps left by standard academic criteria, providing a more realistic assessment of model performance that aims to maximize recall while enforcing a false positive rate budget. Beyond benchmarking, we propose two strategies that incorporate deep learning methods to augment the already strong performance of gradient boosting: (1) using generative models to produce synthetic minority-class samples that outperform traditional oversampling techniques, and (2) using neural embeddings to improve feature representation for anomaly detection. Together, these contributions offer a methodology for evaluating and improving anomaly detection pipelines in domains where rare, high-impact events must be detected while meeting strict operational demands.
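One plausible reading of the Tail-end FPR Max Recall metric, sketched with scikit-learn: the score is the largest recall achievable at any operating point whose false positive rate stays within a fixed budget. The budget value and the helper name are assumptions:

    import numpy as np
    from sklearn.metrics import roc_curve

    def tail_end_fpr_max_recall(y_true, scores, fpr_budget=0.001):
        fpr, tpr, _ = roc_curve(y_true, scores)
        within_budget = fpr_budget >= fpr   # operating points inside the budget
        return float(np.max(tpr[within_budget]))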
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>FPGA Based Data Acquisition System for Cryogenic Device Verification</title>
<link href="https://hdl.handle.net/1721.1/162706" rel="alternate"/>
<author>
<name>Kandeh, Stephen</name>
</author>
<id>https://hdl.handle.net/1721.1/162706</id>
<updated>2025-09-19T04:49:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">FPGA Based Data Acquisition System for Cryogenic Device Verification
Kandeh, Stephen
In this work, a system of processors connected to an FPGA is interfaced with a custom analog frontend and used to create a verification environment for cryogenic devices. In particular, this thesis focuses on the technical structure of that system. Current validation efforts often rely on commercially available arbitrary waveform generators (AWGs) and oscilloscopes, which, while highly capable, are often prohibitively expensive and poorly suited for large-scale or parallelized testing environments. As noted in industry reports, scaling such instrumentation introduces significant challenges in cost, calibration, and signal synchronization, making them inefficient for high-resolution or high-speed analyses in multi-channel systems [1]. On the other hand, an FPGA provides the necessary performance to increase parallelism without a proportional increase in cost, greatly improving testing resolution and speed. By augmenting the FPGA with a set of processors, we introduce a level of accessibility and automatability not currently present in commercial products. To be clear, while the board was designed with the testing of nanowires in mind (and is not capable of measuring DC voltages), it can still be combined with separate lab equipment to interact with Josephson Junction-based devices. That said, the flexibility of this system allows for a generalized application to any electronic device that demands a specialized testing procedure involving arbitrary signal processing and generation. The money, time, and energy that this innovation will save on cryogenic electronic validation will significantly improve our progress in developing these technologies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Energy Efficient Real-time Operating Systems on Chip</title>
<link href="https://hdl.handle.net/1721.1/162705" rel="alternate"/>
<author>
<name>Kang, Ezra H.</name>
</author>
<id>https://hdl.handle.net/1721.1/162705</id>
<updated>2025-09-19T04:49:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Energy Efficient Real-time Operating Systems on Chip
Kang, Ezra H.
Autonomous micro-robots are crucial for several tasks, such as search and rescue, no-knowledge mapping, and navigation. Without an external power connection, these robots are constrained by their on-platform energy capacity. The power consumption of actuation systems used in micro-robots is within the same order of magnitude as the power consumption of the compute system. Thus, the remaining factor for enabling these micro-robots is associated with the design of energy-efficient compute systems. Energy usage of compute systems is typically dominated by memory operations, which previous efforts have attempted to mitigate with memory-efficient software and hardware. These efforts are enabled by the software/hardware interface, which is implemented as an Operating System (OS). However, Operating Systems for energy-efficient platforms have not been fully explored. Current approaches utilize full general-purpose Operating Systems such as Linux, which can incur large memory and compute overhead penalties. These overheads not only consume the typically limited memory resources of energy-efficient systems, but also increase the number of memory accesses and CPU cycles, both of which are significant contributors to energy consumption. To address these concerns, we propose the design of a computationally and memory-efficient Real-time Operating System (RTOS). Our RTOS is designed to minimize both memory footprint and compute cycle overhead. It achieves this primarily through direct physical memory access, cycle-efficient task scheduling, and minimal runtime services to avoid unnecessary processing. Additionally, the modular RTOS kernel includes only the components required by an application in the final binary, reducing code size and memory usage without compromising functionality. The design enables the utilization of energy-efficient hardware accelerators and software, allowing for execution of robotics workloads with minimal memory and cycle overhead. When comparing robotics algorithms implemented on our proposed RTOS and baseline OSes, our design was able to achieve a 99% reduction in memory footprint. Additionally, it achieved up to a 47% increase in throughput. Thus, our design demonstrates a direct reduction in memory and CPU cycle overhead, which in turn lowers total system memory and energy consumption. The proposed design was demonstrated and verified on a resource-constrained system-on-chip on the AMD Virtex Ultrascale+ VCU118 FPGA.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty and Generality of Transfer Learning Models in Predicting Signaling History</title>
<link href="https://hdl.handle.net/1721.1/162704" rel="alternate"/>
<author>
<name>Lu, Claire</name>
</author>
<id>https://hdl.handle.net/1721.1/162704</id>
<updated>2025-09-19T04:49:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Uncertainty and Generality of Transfer Learning Models in Predicting Signaling History
Lu, Claire
Proper cell-cell communication is essential for multicellular development, from embryogenesis to stem cell differentiation. To map these networks, we developed IRIS (Intracellular Response to Infer Signaling state), a semi-supervised deep learning method that fits conditional variational autoencoders (CVAE) to single-cell RNA sequencing (scRNA-seq) data. IRIS is able to annotate cellular signaling states of individual cells using only their gene expression. Currently, IRIS has been validated in developmental contexts, including gastrulation, early endoderm organogenesis, and mesoderm lineages in mouse embryos. However, its predictions often show extremely high or extremely low confidence, suggesting a need for methods to prevent overconfidence and better account for uncertainty. To generalize IRIS to broader cell-cell communication problems, we combined engineering and experimental approaches, integrating uncertainty quantification techniques with new biological datasets. We implemented three approaches for estimating uncertainty in IRIS predictions: stochastic sampling, Monte Carlo dropout, and ensemble prediction. These approaches were evaluated on two new endoderm and mesenchyme combinatorial perturbation screens. Across all methods, uncertainty values reliably reflected the varying difficulty of predicting different signaling pathways, driven by both biological complexity and dataset representation. Moreover, higher uncertainty was consistently associated with lower prediction accuracy, confirming uncertainty as a useful proxy for model confidence. All three methods identified similar high-uncertainty cell populations, supporting their consistency and validity. By incorporating uncertainty quantification into IRIS, we provide more robust and interpretable predictions that can guide future experiments and enhance the model’s applicability across diverse biological contexts.
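A small numpy sketch of the common scoring step shared by the three approaches named above: repeated stochastic predictions (from sampling, MC dropout, or ensemble members) are averaged, and the entropy of the mean prediction serves as the uncertainty estimate. The array layout is an assumption:

    import numpy as np

    def predictive_entropy(prob_samples):
        # prob_samples: shape (n_passes, n_classes), each row a probability vector.
        mean_p = prob_samples.mean(axis=0)
        return float(-np.sum(mean_p * np.log(mean_p + 1e-12)))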
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Core Material Evaluation for Magnetic Energy Harvester Applications</title>
<link href="https://hdl.handle.net/1721.1/162703" rel="alternate"/>
<author>
<name>Le, Khang D.</name>
</author>
<id>https://hdl.handle.net/1721.1/162703</id>
<updated>2025-09-19T04:49:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Core Material Evaluation for Magnetic Energy Harvester Applications
Le, Khang D.
Current transformer magnetic energy harvesters (CTMEHs) harvest magnetic energy from an AC current-carrying conductor and convert this energy into usable electrical energy for use by various low-power devices, such as sensors and microcontrollers. The amount of power harvested by CTMEHs is determined by the primary current passing through the conductor; however, variables such as the magnetic core’s dimensions, magnetic properties, and turn count also influence performance. Previous works have focused mainly on analytical or numerical modeling of CTMEH behavior or improving power harvest performance given a specific magnetic core material. Some existing research has compared the effects of different core materials on CTMEH power harvest in a limited fashion, but a comprehensive, comparative study of high-permeability, high-saturation-flux-density CTMEHs has yet to be undertaken. This thesis establishes core material as the primary independent variable, along with primary current and frequency during testing, to isolate the effects of magnetic properties on the amount of power a magnetic core can harvest under different current conditions. The thesis concludes that nanocrystalline material excels at lower-current applications, while silicon steel material offers better performance at higher-current applications across all frequencies when used as CTMEHs, offering system designers enticing material choices depending on the nature of the application.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Eliciting Visualization Attitudes with Repertory Grids</title>
<link href="https://hdl.handle.net/1721.1/162702" rel="alternate"/>
<author>
<name>Hua, Dana</name>
</author>
<id>https://hdl.handle.net/1721.1/162702</id>
<updated>2025-09-19T04:49:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Eliciting Visualization Attitudes with Repertory Grids
Hua, Dana
Research in public data communication typically focuses on improving the processes of encoding and decoding, answering the question of how to design a visualization to best communicate information to an audience. However, by treating visual communications simply as conduits for information, we ignore an important aspect of how people interact with communications. We ignore the attitudes – the thoughts, feelings, and intentions toward action – a person may form from communicative artifacts based on their personal values and experiences. Recent research has demonstrated that—much like natural language—readers of visualizations make social attributions: inferences about the identities and characteristics of an artifact’s makers, modes of distribution, and tools of production. In this thesis, I contribute a method to systematically map the visualization attitudes of an individual and the associated ideologies of their sociocultural group, by adapting the repertory grid technique from clinical psychology to the context of data visualization. I demonstrate the effectiveness of this mixed methods approach by eliciting both the attitudes towards a visualization most salient to an individual, and the design features of the visualization that inform each attitude. This method offers a new way of exploring the content and latent structure of visualization attitudes, which opens new avenues for socioculturally-informed and intervention-driven research in data visualization.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing scheduling for stream structured programming for StreamIt</title>
<link href="https://hdl.handle.net/1721.1/162701" rel="alternate"/>
<author>
<name>Dow, Nicholas Lee</name>
</author>
<id>https://hdl.handle.net/1721.1/162701</id>
<updated>2025-09-19T04:49:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing scheduling for stream structured programming for StreamIt
Dow, Nicholas Lee
As straightforward increases in performance on general-purpose CPUs slow down, the shift to application-specific implementations and hardware has accelerated. This shift towards specialization improves performance, but often at the cost of developer productivity in learning these new tools. StreamIt is a domain-specific language developed to increase the performance of streaming applications while being relatively user-friendly. While designed to be parallelized easily, the scheduling backend of the StreamIt compiler is not adapted to the heterogeneous and distributed nature of new accelerator hardware. This thesis details the design and development of a scheduler interface that enables hardware-customized schedulers to be developed quickly. The scheduler interface allows schedulers to take advantage of the unique compiler optimizations enabled by StreamIt’s structure. Two schedulers, one search-based and another heuristic-based, are built using this interface to schedule StreamIt workloads while optimizing differing metrics such as throughput and latency. Our experiments evaluate the performance of these workloads, and we detail future directions for expanding the interface and the scheduler designs that could take advantage of it.
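As a sketch of what a pluggable scheduler interface of this kind might look like, the snippet below defines an abstract scheduler and one heuristic implementation. The class and method names are hypothetical, not the StreamIt compiler's actual API.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class Filter:
        """One node in a StreamIt-style stream graph."""
        name: str
        work: float                      # estimated work per firing

    class Scheduler(ABC):
        """Pluggable scheduler: maps filters onto hardware resources."""
        @abstractmethod
        def schedule(self, filters, n_cores):
            """Return a dict mapping filter name to core index."""

    class GreedyThroughputScheduler(Scheduler):
        """Heuristic: assign each filter to the currently least-loaded core."""
        def schedule(self, filters, n_cores):
            load = [0.0] * n_cores
            placement = {}
            for f in sorted(filters, key=lambda f: -f.work):  # heaviest first
                core = load.index(min(load))
                placement[f.name] = core
                load[core] += f.work
            return placement

    pipeline = [Filter("src", 1.0), Filter("fir", 8.0), Filter("sink", 0.5)]
    print(GreedyThroughputScheduler().schedule(pipeline, n_cores=2))

A search-based scheduler would implement the same interface, which is what lets metrics like throughput and latency be swapped without touching the compiler's core.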
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Electromagnetic Interference in Unshielded MRI: Implementation, Experimentation, and Future Directions</title>
<link href="https://hdl.handle.net/1721.1/162700" rel="alternate"/>
<author>
<name>Flynn, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162700</id>
<updated>2025-09-19T04:49:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Mitigating Electromagnetic Interference in Unshielded MRI: Implementation, Experimentation, and Future Directions
Flynn, John M.
Portable, low-field MRI broadens access and enables numerous new applications, such as point-of-care imaging. Operating outside an RF-shielded room introduces electromagnetic interference (EMI), further degrading a signal-to-noise ratio (SNR) already diminished by the lower magnetic fields used in portable imaging. Existing methods to reduce EMI perform well in simple noise environments but can struggle with more complex profiles. Relaxing their linear assumptions is hypothesized to yield more robust mitigation algorithms. A system-wide characterization of SNR challenges was carried out on a rebuilt 800G scanner, existing techniques were validated, and new signal processing approaches were explored to drive image quality upward. Various analytical approaches showed promise, including dynamic coils/preamps, averaging methods, calibration, and smoothing methods. Groundwork was laid for learning-based methods throughout the pipeline. This work serves as an important baseline for the numerous experiments necessary for full-system optimization.
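A common baseline for EMI mitigation with external reference coils is a linear least-squares fit, in the spirit of the linear-assumption methods the abstract builds on. The sketch below is a toy version with synthetic data, not the scanner's actual pipeline; the coupling coefficients and signals are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4096

    # Synthetic acquisition: the primary coil sees MR signal plus EMI that is
    # a linear mixture of what two external reference coils pick up.
    mr_signal = np.sin(2 * np.pi * 0.01 * np.arange(n))
    emi_refs = rng.standard_normal((n, 2))              # reference-coil samples
    true_coupling = np.array([0.8, -0.3])
    primary = mr_signal + emi_refs @ true_coupling

    # Fit the coupling by least squares on the reference channels, then
    # subtract the predicted interference from the primary channel.
    coupling, *_ = np.linalg.lstsq(emi_refs, primary, rcond=None)
    cleaned = primary - emi_refs @ coupling

    print("residual EMI power:", np.mean((cleaned - mr_signal) ** 2))

When the true interference is a nonlinear function of the reference signals, this fit degrades, which is exactly the case motivating the relaxed, learning-based approaches above.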
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single-Model Any-Subgroup Equivariance via Symmetric Positional Encodings</title>
<link href="https://hdl.handle.net/1721.1/162698" rel="alternate"/>
<author>
<name>Goel, Abhinav</name>
</author>
<id>https://hdl.handle.net/1721.1/162698</id>
<updated>2025-12-09T18:18:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Single-Model Any-Subgroup Equivariance via Symmetric Positional Encodings
Goel, Abhinav
The inclusion of symmetries as an inductive bias, known as “equivariance”, often improves generalization on geometric data (e.g. grids, sets, and graphs). However, equivariant architectures are usually highly constrained, designed for pre-chosen symmetries, and cannot be applied to datasets with different symmetries. This work constructs a single model that is simultaneously equivariant to several groups, simply by regulating a certain input feature. Starting with a permutation-equivariant base model respecting the full Sₙ symmetry group, we can obtain equivariance to a subgroup G ⊆ Sₙ by using a symmetry-breaking input that is G-symmetric. Under mild conditions, the resulting network is equivariant to exactly G. Finding an input with automorphism group exactly G is computationally hard, however; we overcome this by relaxing exact symmetry breaking to approximate symmetry breaking, leveraging the notion of 2-closure to derive fast algorithms. The method is validated in symmetry selection, multitask, and transfer learning settings, demonstrating that a single network equivariant to multiple permutation subgroups outperforms both separate equivariant models and a single non-equivariant model.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Precision Successive-approximation-register&#13;
Analog-to-digital Converters for Digital Root-mean-square&#13;
Calculation</title>
<link href="https://hdl.handle.net/1721.1/162697" rel="alternate"/>
<author>
<name>Choi, Sun Mee</name>
</author>
<id>https://hdl.handle.net/1721.1/162697</id>
<updated>2025-09-19T04:49:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Application of Precision Successive-approximation-register&#13;
Analog-to-digital Converters for Digital Root-mean-square&#13;
Calculation
Choi, Sun Mee
The advancement of semiconductor manufacturing processes has made powerful microcontrollers available at lower costs, granting system designers the flexibility to select between analog and digital signal processing techniques. Enabled by recent developments in low-power successive approximation register (SAR) analog-to-digital converter (ADC) technology, a digital approach to root-mean-square (RMS) measurement is proposed. The work begins with an explicit accumulation-and-averaging approach, and a set of improvements is then designed to increase measurement accuracy and reliability. Algorithms are compared using the metrics of error, power efficiency, latency, and digital overhead. High-performing and power-efficient digital RMS measurement methods could be valuable for decentralized instrumentation systems such as smart grids and factory automation, where long-lasting handheld and portable solutions are becoming critical.
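A minimal sketch of the explicit accumulate-and-average approach, assuming integer ADC codes and a single square root at the end; the sample count and amplitude are illustrative, not the thesis's test conditions.

    import math

    def rms_accumulate(samples):
        """Explicit accumulate-and-average RMS, as a fixed-point MCU might
        compute it: sum of squares in an integer accumulator, one sqrt."""
        acc = 0
        for s in samples:
            acc += s * s                 # integer ADC codes square exactly
        return math.sqrt(acc / len(samples))

    # 12-bit-range ADC codes for one cycle of a sine of amplitude 1000 counts.
    N = 256
    codes = [round(1000 * math.sin(2 * math.pi * k / N)) for k in range(N)]
    print(rms_accumulate(codes))         # expected near 1000/sqrt(2) = 707.1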
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hosting LLMs on Shared GPUs</title>
<link href="https://hdl.handle.net/1721.1/162696" rel="alternate"/>
<author>
<name>Choi, Kenneth K.</name>
</author>
<id>https://hdl.handle.net/1721.1/162696</id>
<updated>2025-09-19T04:49:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Hosting LLMs on Shared GPUs
Choi, Kenneth K.
Large language models (LLMs) have emerged as powerful tools for a wide array of applications. Serving multiple LLMs on shared GPUs has increasingly gained attention as single providers need to support multiple applications (summarization, chat, code generation), different model versions (A/B testing), and various types of customers. However, multi-model serving is particularly challenging, as static memory partitioning can lead to severe under-utilization, fragmentation, and latency spikes, while dynamic loading of model weights can cause unacceptable downtime due to high model loading overheads. To address these issues, we introduce hierarchical paging, a novel key-value (KV) cache management strategy, and we implement it within the vLLM serving engine. Hierarchical paging organizes GPU memory into a two-level hierarchy: large contiguous memory blocks allocated to individual models, which are then subdivided into smaller blocks that are allocated to different requests issued to that model. Our design enables dynamic memory sharing across models, improving model throughput and overcoming key problems of existing approaches. We detail our implementation and present end-to-end experiments that showcase these throughput improvements under different workloads. We include further evaluations on the runtime overheads of our hierarchical paging implementation, which show that the overheads are insignificant. Most importantly, we demonstrate that hierarchical paging is easy to implement, optimizing for implementation effort and maintainability.
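To convey the two-level idea, here is a toy allocator in Python. The class and method names are hypothetical, and the sketch omits eviction, reclamation, and everything else the real vLLM implementation must handle.

    class HierarchicalKVAllocator:
        """Two-level KV-cache paging: GPU memory is carved into large blocks
        owned by models; each large block is subdivided into request pages."""

        def __init__(self, n_large_blocks, pages_per_block):
            self.free_large = list(range(n_large_blocks))
            self.pages_per_block = pages_per_block
            self.model_blocks = {}       # model -> list of owned large blocks
            self.free_pages = {}         # model -> [(block, page), ...]

        def grow_model(self, model):
            """Give a model one more large block (first level)."""
            block = self.free_large.pop()
            self.model_blocks.setdefault(model, []).append(block)
            pages = [(block, p) for p in range(self.pages_per_block)]
            self.free_pages.setdefault(model, []).extend(pages)

        def alloc_page(self, model):
            """Allocate one request page inside the model's blocks (second level)."""
            if not self.free_pages.get(model):
                self.grow_model(model)   # dynamic sharing across models
            return self.free_pages[model].pop()

    alloc = HierarchicalKVAllocator(n_large_blocks=4, pages_per_block=2)
    print(alloc.alloc_page("chat"), alloc.alloc_page("chat"), alloc.alloc_page("code"))

Because models claim large blocks only on demand, memory flows between models as load shifts, which is the property static partitioning lacks.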
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Topology-Guided Diffusion Process for Synthetic Tabular Data Generation</title>
<link href="https://hdl.handle.net/1721.1/162695" rel="alternate"/>
<author>
<name>Cheng, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/162695</id>
<updated>2025-09-19T04:49:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Topology-Guided Diffusion Process for Synthetic Tabular Data Generation
Cheng, Emily
Synthesizing realistic tabular data is crucial for many analytical applications, including policy evaluation related to household energy use. However, detailed household-level consumption data, necessary for such evaluation, are scarce at fine geographic scales, as public surveys like the U.S. Residential Energy Consumption Survey (RECS) provide too few observations. We address this gap by developing a topology-guided diffusion-based generative model that produces realistic synthetic household data; our approach handles two key challenges in this setting: (1) mixed continuous and discrete features and (2) strong hierarchical dependencies among variables. To handle categorical features, we build upon recent advancements in discrete diffusion, particularly TabDDPM [1] and TabDiff [2], which discretize the diffusion process through noise transition matrices, effectively extending diffusion methods to discrete tabular domains. To address hierarchical dependence, we include (i) a structure-aware noise schedule that injects noise from the leaves to the root along an approximate Chow–Liu tree constructed from the variables and (ii) a masked self-attention denoiser that aligns with the same graphical structure. Extensive experiments show that our structured diffusion model outperforms the baseline TabDiff on data with tree-like dependencies, thanks to the inductive bias from our structure-aware noise schedule. On data that only approximately follows a tree, such as the RECS dataset, our model maintains competitive performance, slightly outperforming standard diffusion methods. These results highlight the potential for future work to further optimize the tradeoff between structural approximation and estimation accuracy, and to extend the approach beyond the energy domain.
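As an illustration of the tree-construction step, the sketch below estimates pairwise mutual information on toy binary data and extracts an approximate Chow-Liu tree as a maximum-weight spanning tree (weights are negated so SciPy's minimum-spanning-tree routine applies). The data generator and plug-in estimator are illustrative, not the thesis's.

    import numpy as np
    from itertools import combinations
    from scipy.sparse.csgraph import minimum_spanning_tree

    def mutual_info(x, y):
        """Plug-in mutual information between two discrete columns."""
        joint = {}
        for a, b in zip(x, y):
            joint[(a, b)] = joint.get((a, b), 0) + 1
        n = len(x)
        px = {a: np.mean(x == a) for a in set(x)}
        py = {b: np.mean(y == b) for b in set(y)}
        mi = 0.0
        for (a, b), c in joint.items():
            p = c / n
            mi += p * np.log(p / (px[a] * py[b]))
        return mi

    rng = np.random.default_rng(0)
    root = rng.integers(0, 2, 500)                      # tree-structured toy data
    child1 = (root + rng.integers(0, 2, 500) * (rng.random(500) > 0.8)) % 2
    child2 = (root + rng.integers(0, 2, 500) * (rng.random(500) > 0.8)) % 2
    data = np.stack([root, child1, child2], axis=1)

    d = data.shape[1]
    w = np.zeros((d, d))
    for i, j in combinations(range(d), 2):
        w[i, j] = -mutual_info(data[:, i], data[:, j])  # negate for max-weight tree
    tree = minimum_spanning_tree(w)
    print("Chow-Liu edges:", list(zip(*tree.nonzero())))

The recovered edges connect each child to the root, and a leaves-to-root noise schedule would then be read off this tree.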
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Dynamic Treatment Regimes: Collaborative Search&#13;
and LLM-Driven Decision Trees</title>
<link href="https://hdl.handle.net/1721.1/162694" rel="alternate"/>
<author>
<name>Gregory, Cale</name>
</author>
<id>https://hdl.handle.net/1721.1/162694</id>
<updated>2025-09-19T04:49:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">On Dynamic Treatment Regimes: Collaborative Search&#13;
and LLM-Driven Decision Trees
Gregory, Cale
This thesis evaluates the validity of current dynamic treatment regime algorithms and presents a novel data structure for extracting treatment decisions from unstructured clinical notes. The main contribution is the Clinical Decision Tree (CDT), which uses large language models (LLMs) to extract key decisions in chronic disease treatment. This addresses the main pain points of dynamic treatment regimes: low interpretability and the reliance of traditional machine learning methods on poorly collected data. The work contains extensive experiments on mortality prediction, time series forecasting, and synthetic patient modeling. Experiments show that vital-based representations do not capture enough meaningful data about a patient to accurately predict and evaluate new treatment methods. Using latent embeddings and vector search, experiments show that patients’ collected vitals fail to differentiate the outcomes of related patients. Conversely, clinical notes contain complex and substantial information about clinical decision making, and LLMs enable valuable knowledge extraction from this unstructured data. Experimental results and expert evaluation indicate that CDTs can extract and distill interpretable treatment decisions. Thus, CDTs are a valuable tool that can be refined to increase confidence in treatment decisions and to identify rare and uncommon medical practices.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>"Eliminating the Friction": An AI-powered Assistant for StarLogo Nova</title>
<link href="https://hdl.handle.net/1721.1/162693" rel="alternate"/>
<author>
<name>Han, Aileen</name>
</author>
<id>https://hdl.handle.net/1721.1/162693</id>
<updated>2025-09-19T04:49:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">"Eliminating the Friction": An AI-powered Assistant for StarLogo Nova
Han, Aileen
Agent-based modeling is a technique that allows students to reason about and create models of real-life phenomena. However, the programmatic implementations of this technique, such as StarLogo Nova, often introduce “friction”; students may get stuck on the syntactical details of the implementation before being able to engage in the mechanistic thinking behind their models. In order to shift students’ focus towards the goal of understanding the systems they are building, we set out to create an AI-powered assistant for StarLogo Nova that can explain and debug students’ code. After identifying and experimenting with various parameters of AI models in an attempt to improve their performance, we were able to build the StarLogo Turtle Helper, an easily accessible assistant integrated into the platform that can produce accurate responses to StarLogo-related questions. Through this process, we discovered two key properties of these models: first, the method through which these models use provided documentation (called retrieval-augmented generation, or RAG) is quite rudimentary, so any background knowledge should be included in the prompt or the model’s system instructions instead. Second, these models perform best if they are designed to only serve one purpose, so creating multiple models and chaining them together may be the best way to achieve more complex functionality.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Progression of Metabolic Dysfunction-associated Steatotic Liver Disease</title>
<link href="https://hdl.handle.net/1721.1/162692" rel="alternate"/>
<author>
<name>Li, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/162692</id>
<updated>2025-09-19T04:49:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Predicting Progression of Metabolic Dysfunction-associated Steatotic Liver Disease
Li, Jonathan
This work focuses on the progression from metabolic dysfunction-associated fatty liver to metabolic dysfunction-associated steatohepatitis, a more serious prognosis that can lead to liver failure and death. Additional adverse progressed outcomes include hepatic failure, fibrosis, cirrhosis, and malignant neoplasm of the liver and intrahepatic bile ducts. We explore the possibility of using different machine learning techniques, including logistic regression, XGBoost, random forest, and decision trees, to predict the likelihood of progression. We use data from Massachusetts General Brigham to train our models, incorporating demographics, physical measurements, lab results, and doctor notes. Our best model was an XGBoost classifier with an AUROC of 0.800, with random forest at a similar performance of 0.786. However, all of our models had low AUPRC and sensitivity, indicating both overfitting and an imbalanced dataset.
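A minimal sketch of the modeling-and-evaluation loop on a synthetic imbalanced table, showing why AUPRC is reported alongside AUROC; stock scikit-learn models stand in here for the thesis's XGBoost pipeline, and the data are invented.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, average_precision_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the clinical table: heavily imbalanced labels
    # mimic the rarity of progression events noted in the abstract.
    X, y = make_classification(n_samples=5000, n_features=30, weights=[0.95],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                        ("random forest", RandomForestClassifier(random_state=0))]:
        p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
        print(f"{name:14s} AUROC={roc_auc_score(y_te, p):.3f} "
              f"AUPRC={average_precision_score(y_te, p):.3f}")

On imbalanced data like this, AUROC can look strong while AUPRC stays low, which matches the caveat in the abstract.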
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Traceability via OTrace Concepts and Implementation</title>
<link href="https://hdl.handle.net/1721.1/162691" rel="alternate"/>
<author>
<name>Farooq, Ashar</name>
</author>
<id>https://hdl.handle.net/1721.1/162691</id>
<updated>2025-09-19T04:49:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data Traceability via OTrace Concepts and Implementation
Farooq, Ashar
Financial transactions are commonplace in the modern world. Every day, consumers make purchases on e-commerce sites and use third-party financial services, for example to predict their credit score, to obtain customized budget recommendations, or to find the loan that best suits them. These services often need financial information from the consumer, and it is not always clear to the consumer how that information is used. In other words, consumer data are being used without their knowledge and consent. The proposed solution, a traceability protocol called OTrace, aims to let consumers know where their data is and what is being done with it. This work aims to bolster OTrace into a protocol that consumers can actually use as a service, and that financial institutions can trust to solve the problem of consumers not knowing which third-party financial services hold their data. Specifically, it develops a more general specification for traceable and accountable data sharing that layers OTrace on top of an OAuth layer, complemented by a model deployment example. New OTrace API endpoints corresponding to the new specification, an entirely new OTrace Web implementation, and accompanying analysis aim to move data traceability, data privacy, open banking, and the broader financial world forward. A model deployment of an OTrace service on top of an OAuth protocol demonstrates the system in use by various parties, and the approach can ultimately scale up to address unintended data usage and the lack of transparency about where consumer data resides.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Equivariant Autoregressive Models for Molecular Generation</title>
<link href="https://hdl.handle.net/1721.1/162690" rel="alternate"/>
<author>
<name>Kim, Song Eun</name>
</author>
<id>https://hdl.handle.net/1721.1/162690</id>
<updated>2025-09-19T04:48:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Equivariant Autoregressive Models for Molecular Generation
Kim, Song Eun
In-silico generation of diverse molecular structures has emerged as a promising method to navigate the complex chemical landscape, with direct applications to inverse material design and drug discovery. However, 3D molecular structure generation comes with several unique challenges; generated structures must be invariant under rotations and translations in 3D space, and must satisfy basic chemical bonding rules. Recently, E(3)-equivariant neural networks that utilize higher-order rotationally-equivariant features have shown improved performance on a wide range of atomistic tasks, including structure generation. Previously, we developed Symphony, an E(3)-equivariant autoregressive generative model for 3D structures of small molecules. At each sampling iteration, a single focus atom is selected, which is then used to decide on the next atom’s position within its neighborhood. Symphony built on previous autoregressive models by using message-passing with higher-order equivariant features, allowing a novel representation of probability distributions via spherical harmonic signals. Symphony’s performance approached that of state-of-the-art diffusion models while remaining relatively lightweight. However, it continued to face challenges in error accumulation and determining bond lengths, and it was only evaluated against small organic molecules. Here, we expand on Symphony’s capabilities and make it more compatible with larger atomic structures. We add improvements to the embedders, split the radial and angular components when predicting atom positions, and increase the radial cutoff for atomic neighborhoods considered during prediction. We also increase Symphony’s training and inference speeds through a new implementation in PyTorch, making inference nearly 4x faster than before. In addition, we demonstrate its effectiveness across a variety of tasks, including small molecule and protein backbone generation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vigilis: Leveraging Language Models for Fraud Detection in Mobile Communications and Financial Transactions</title>
<link href="https://hdl.handle.net/1721.1/162689" rel="alternate"/>
<author>
<name>Das, Gaurab</name>
</author>
<id>https://hdl.handle.net/1721.1/162689</id>
<updated>2025-12-09T18:09:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Vigilis: Leveraging Language Models for Fraud Detection in Mobile Communications and Financial Transactions
Das, Gaurab
Although advances in security have strengthened defenses in digital financial systems, attackers increasingly rely on social engineering to achieve their goals. These attacks are difficult to detect and prevent with existing security measures. To address this, we propose Vigilis, a fraud-protection application that employs advanced language models to counter such attacks in calls, texts, and payments. We first collect and make available a corpus of fraudulent calls from the Internet and train lightweight transformer-based models that achieve fraud detection accuracies of up to 94% and 87% on transcript and audio modalities, respectively. We integrate these models into a real-time call system within Vigilis that operates entirely on-device, enabling accurate fraud detection in an efficient and privacy-preserving manner. We then extend Vigilis to incorporate context-aware transaction authentication, where the underlying social context behind a transaction is determined from calls, texts, and browsing history and used to infer the transaction’s validity. By uniquely incorporating social concepts into traditional cybersecurity techniques, we attempt to counter and mitigate issues related to social engineering attacks in financial fraud.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GDSVD: Scalable k-SVD via Gradient Descent</title>
<link href="https://hdl.handle.net/1721.1/162688" rel="alternate"/>
<author>
<name>Gan, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/162688</id>
<updated>2025-09-19T04:48:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">GDSVD: Scalable k-SVD via Gradient Descent
Gan, Emily
We show that gradient descent with a simple, universal rule for step-size selection provably finds the k-SVD, i.e., the k ≥ 1 largest singular values and corresponding vectors, of any matrix, despite nonconvexity. There has been substantial progress toward this in the past few years: existing results establish such guarantees for the exact-parameterized and over-parameterized settings, with an oracle-provided choice of step size. But guarantees for the generic setting, with a step-size selection that does not require oracle-provided information, have remained a challenge. We overcome this challenge and establish that gradient descent with an appealingly simple adaptive step size (akin to preconditioning) and random initialization enjoys global linear convergence in the generic setting. Our convergence analysis reveals that the gradient method has an attracting region, within which it behaves like Heron’s method (a.k.a. the Babylonian method). Empirically, we validate the theoretical results. The emergence of a modern compute infrastructure for iterative optimization, coupled with this work, is likely to provide a means of solving k-SVD for very large matrices.
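A toy version of the approach: gradient descent on the factored objective with a conservative norm-based adaptive step size standing in for the paper's exact rule (which is not reproduced here).

    import numpy as np

    def gd_ksvd(A, k, iters=3000):
        """Gradient descent on 0.5*||A - U V^T||_F^2 from small random init,
        with a norm-based adaptive step (a stand-in for the paper's rule)."""
        rng = np.random.default_rng(0)
        m, n = A.shape
        U = 0.1 * rng.standard_normal((m, k))
        V = 0.1 * rng.standard_normal((n, k))
        for _ in range(iters):
            R = U @ V.T - A                  # residual
            eta = 0.1 / (1.0 + np.linalg.norm(U) ** 2 + np.linalg.norm(V) ** 2)
            U, V = U - eta * (R @ V), V - eta * (R.T @ U)
        return U, V

    A = np.random.default_rng(1).standard_normal((40, 30))
    U, V = gd_ksvd(A, k=3)
    print(np.round(np.linalg.svd(A, compute_uv=False)[:3], 2))          # top-3 of A
    print(np.round(np.linalg.svd(U @ V.T, compute_uv=False)[:3], 2))    # recovered

With random initialization the iterates grow toward the dominant subspace and the step size shrinks as the factor norms grow, so the recovered singular values match the top three of A.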
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of dispersion metrics for estimating transcriptional noise in single-cell RNA-seq data and applications to cardiomyocyte biology</title>
<link href="https://hdl.handle.net/1721.1/162687" rel="alternate"/>
<author>
<name>Chen, Tina T.</name>
</author>
<id>https://hdl.handle.net/1721.1/162687</id>
<updated>2025-09-19T04:48:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Comparison of dispersion metrics for estimating transcriptional noise in single-cell RNA-seq data and applications to cardiomyocyte biology
Chen, Tina T.
Transcription is a dynamic process with a multitude of characteristics, including transcript level, burst frequency, amplitude, and variability. Single-cell RNA sequencing data analysis often focuses on comparing transcription levels. However, these analyses capture only a portion of the wealth of information conveyed by transcription. The quantification and analysis of transcriptional variability offers an opportunity to study transcription and gene regulation from a new angle. Transcriptional variability has already been implicated in a number of biological processes, including immune system development and aging. Yet, the most appropriate method for measuring transcriptional variability in single-cell data has remained relatively unclear. Here, we simulated single-cell data with varying dispersion and dataset size to assess the relative responsiveness of the Gini index, variance-to-mean ratio, variance, and Shannon entropy to variability in single-cell counts. We found that the variance-to-mean ratio scales approximately linearly with increasing dispersion, and that it is scale-invariant. The Gini index displayed paradoxical behavior, and Shannon entropy was not scale-invariant. Thus, we applied the variance-to-mean ratio to measure transcriptional variability in two publicly available datasets studying congenital heart defects in mouse models. We first found that change in transcriptional variability does not correlate with gene characteristics such as transcript level and evolutionary gene age. We also found that using change in transcriptional variability to focus GSEA and TF motif enrichment analyses revealed both genes with known involvement in cardiomyopathy and new genes and pathways as potential targets for future study. Notably, many of the genes and pathways identified through transcriptional variability analysis were not found by differential expression analysis, suggesting that transcriptional variability can provide additional biologically relevant information beyond what is observed from studying mean expression alone.
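The following sketch computes the compared dispersion metrics on simulated negative-binomial counts at two dispersion levels with equal means; the distributions and parameters are illustrative, not the study's simulation design.

    import numpy as np

    def gini(x):
        """Gini index of a nonnegative count vector."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        cum = np.cumsum(x)
        return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

    def shannon_entropy(x):
        p = np.asarray(x, dtype=float)
        p = p / p.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    rng = np.random.default_rng(0)
    # Negative-binomial counts at two dispersion levels, same mean of 5.
    low  = rng.negative_binomial(n=20, p=20 / (20 + 5.0), size=1000)
    high = rng.negative_binomial(n=2,  p=2 / (2 + 5.0),  size=1000)

    for name, counts in [("low dispersion", low), ("high dispersion", high)]:
        vmr = counts.var() / counts.mean()
        print(f"{name}: VMR={vmr:.2f}  var={counts.var():.2f}  "
              f"Gini={gini(counts):.3f}  H={shannon_entropy(counts):.2f}")

Because the means are matched, the variance-to-mean ratio cleanly separates the two dispersion levels, which is the scaling behavior reported above.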
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expanding Annotation to Mixed-Media Types in a Large-Scale Social Annotation Platform</title>
<link href="https://hdl.handle.net/1721.1/162686" rel="alternate"/>
<author>
<name>Heiberger, Harry G.</name>
</author>
<id>https://hdl.handle.net/1721.1/162686</id>
<updated>2025-09-19T04:48:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Expanding Annotation to Mixed-Media Types in a Large-Scale Social Annotation Platform
Heiberger, Harry G.
In recent years, social annotation systems have become a popular and effective tool for hosting collaborative discussions of assigned readings. One such tool created by our lab is NB. Over the last twelve years, hundreds of instructors have incorporated NB within their classes, with over 50,000 students leaving millions of annotations [1]. While feedback for NB has mostly been positive, one major limitation is its difficulty in annotating documents with nested media types. As multimodal forms of learning beyond plain text become increasingly common in educational assignments, the ability to annotate beyond simple text documents would greatly increase the utility of NB in the modern classroom. This work seeks to remedy this issue by expanding the types of documents NB can successfully annotate, specifically focusing on three mixed-media challenges: independently moving text components, image annotation, and video annotation. We explore the design space of possible implementation strategies for these features and discuss the specific design decisions that were made when adding them to NB. We hope that by increasing the types of documents NB can annotate, we will better fulfill its goal of enhancing student engagement and learning.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pareto Task Inference Analysis of Single-Cell RNA Sequencing of Human Placenta Reveals Biological Insights into Adverse Pregnancy Outcomes</title>
<link href="https://hdl.handle.net/1721.1/162685" rel="alternate"/>
<author>
<name>Eppinger, Aria R.</name>
</author>
<id>https://hdl.handle.net/1721.1/162685</id>
<updated>2025-09-19T04:48:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Pareto Task Inference Analysis of Single-Cell RNASequencing of Human Placenta Reveals Biological Insightsinto Adverse Pregnancy Outcomes
Eppinger, Aria R.
Adverse pregnancy outcomes (APOs), such as preeclampsia, fetal growth restriction, and preterm birth, occur in 10-15% of pregnancies. There is limited knowledge of how the cellular states in the placenta and decidua tissues are altered in women with particular APOs or may contribute to APOs. Single-cell RNA sequencing (scRNAseq) approaches have characterized cellular populations and interactions at the maternal-fetal interface using traditional dimensionality-reducing methods such as UMAP-based clustering. However, these techniques may generate limited representations of nuanced cellular functions and biological relationships among and within cell clusters. Pareto Task Inference (ParTI), a dimensionality reduction technique that fits data to an n-dimensional polygon or polytope, models how cells optimize among multiple biological functions and transition between states. We applied ParTI to assess its ability to identify nuanced cellular states and intercellular relationships and to highlight biological mechanisms underlying specific APOs. We analyzed scRNAseq data from 50 whole placental homogenates collected from healthy pregnancies and those complicated by fetal growth restriction (FGR), preterm preeclampsia (PrePET), spontaneous preterm birth (PTB), term preeclampsia or gestational hypertension (TermPET/GHTN), or type 1 diabetes (DM1). ParTI was applied to the dataset with 1) all main cell lineages (B-cells, trophoblasts, stromal, endothelial, Hofbauer, T-NK, maternal myeloid cells) and 2) syncytiotrophoblasts (SCTs), a sublineage of trophoblasts. Marker gene and gene set enrichment analyses for the ParTI polytope vertices, called archetypes, were performed to assess the biological states associated with the archetypes. We demonstrated that the ParTI polytope can separate both broad cell lineages and sublineages, suggesting that iteratively applying ParTI can serve as an alternative clustering approach when cell-lineage marker genes are previously known. Additionally, ParTI applied to SCTs separated healthy controls from pregnancies complicated by specific APOs. Gene set enrichment analysis of the cells proximal to the archetypes suggests biological differences in SCTs with specific APOs compared to the controls. Thus, ParTI can identify biological mechanisms underlying specific APOs and be applied to additional datasets to uncover biological relationships among and within cell-type clusters.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Recursion with Iteration: Enabling LLVM Loop Optimizations for Recursive Data Structure Traversal</title>
<link href="https://hdl.handle.net/1721.1/162684" rel="alternate"/>
<author>
<name>Cuevas, Elie E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162684</id>
<updated>2025-09-19T04:49:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modeling Recursion with Iteration: Enabling LLVM Loop Optimizations for Recursive Data Structure Traversal
Cuevas, Elie E.
Recursive algorithms are a natural and expressive way to traverse complex data structures, but they often miss opportunities for optimization in modern compiler infrastructures like LLVM. This thesis explores a novel technique that temporarily transforms recursive traversals into synthetic loop-like structures, enabling existing loop-specific optimizations to apply, before transforming them back. By extending Clang’s semantic analysis and implementing a custom LLVM transformation pass, recursive traversals are initially structured into synthetic loops that can benefit from existing loop analyses and optimizations. After these optimizations are applied, the transformation restores the original recursive semantics, preserving program behavior while incorporating performance gains. Evaluation across custom microbenchmarks shows that while general recursive traversals suffer a modest overhead, workloads designed to benefit from specific loop-focused optimizations achieve up to a 30% performance improvement. This demonstrates that even though the approach requires temporarily "misrepresenting" code to the compiler, selective exposure of recursive patterns to loop-based optimization infrastructure is practical and effective. This work establishes a proof-of-concept for compiler transformations that bridge recursion and iteration, paving the way for future systems that better optimize real-world recursive code without sacrificing clarity or maintainability.
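The transformation rests on the classic equivalence between a recursive traversal and an explicit-stack loop, illustrated here in Python; the thesis operates on C/C++ via Clang and LLVM, so this is only the conceptual shape, not the actual pass.

    from dataclasses import dataclass

    @dataclass
    class Node:
        value: int
        children: tuple = ()

    def total_recursive(node):
        """Natural recursive traversal (what the programmer writes)."""
        return node.value + sum(total_recursive(c) for c in node.children)

    def total_iterative(node):
        """Equivalent explicit-stack loop (the shape loop passes understand)."""
        acc, stack = 0, [node]
        while stack:
            n = stack.pop()
            acc += n.value            # loop body: one 'iteration' per node
            stack.extend(n.children)
        return acc

    tree = Node(1, (Node(2), Node(3, (Node(4),))))
    assert total_recursive(tree) == total_iterative(tree) == 10

Once the traversal is in the loop shape, analyses that reason about trip counts and loop bodies can apply before the recursive form is restored.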
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Grounding Time Series in Language: Interpretable Reasoning with Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/162683" rel="alternate"/>
<author>
<name>Chen, Lily</name>
</author>
<id>https://hdl.handle.net/1721.1/162683</id>
<updated>2025-12-09T18:21:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Grounding Time Series in Language: Interpretable Reasoning with Large Language Models
Chen, Lily
Can large language models (LLMs) classify time-series data by reasoning like a domain expert, if given the right language? We propose a method that expresses statistical time-series features in natural language, enabling LLMs to perform classification with structured, interpretable reasoning. By grounding low-level signal descriptors in semantic context, our approach reframes time-series classification as a language-based reasoning task. We evaluate this method across 23 diverse univariate datasets spanning biomedical, sensor, and human activity domains. Despite requiring no fine-tuning, it achieves competitive accuracy compared to traditional and foundation model baselines. Our method also enables models to generate expert-style justifications, providing interpretable insights into their decision-making process. We present one of the first large-scale analyses of LLM reasoning over statistical time-series features, examining calibration, explanation structure, and reasoning behavior. This work highlights the potential of language-native interfaces for interpretable and trustworthy time-series classification.
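A minimal sketch of the feature-to-language step, assuming a handful of summary statistics and a hypothetical prompt format; the thesis's actual feature set and phrasing are not reproduced here.

    import numpy as np

    def describe_series(x, name="signal"):
        """Render basic statistical features as natural language for a prompt."""
        x = np.asarray(x, dtype=float)
        trend = "rising" if np.diff(x).mean() > 0 else "falling or flat"
        feats = {
            "mean": x.mean(), "std": x.std(),
            "min": x.min(), "max": x.max(),
            "zero crossings": int(np.sum(np.abs(np.diff(np.sign(x - x.mean()))) > 0)),
        }
        parts = ", ".join(f"{k} {v:.2f}" if isinstance(v, float) else f"{v} {k}"
                          for k, v in feats.items())
        return f"The {name} has {parts}. Overall the {name} is {trend}."

    t = np.linspace(0, 4 * np.pi, 200)
    prompt = describe_series(np.sin(t) + 0.1 * t, name="sensor reading")
    print(prompt)   # this text would be placed in the LLM classification prompt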
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>National crop field delineation for the United States</title>
<link href="https://hdl.handle.net/1721.1/162681" rel="alternate"/>
<author>
<name>Chen, Zitong</name>
</author>
<id>https://hdl.handle.net/1721.1/162681</id>
<updated>2025-09-19T04:48:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">National crop field delineation for the United States
Chen, Zitong
Comprehensive and accurate crop field boundary maps are crucial for digital agriculture, land management, and environmental monitoring. However, no high-quality field boundary dataset is publicly available in the United States. This thesis addresses this gap by creating a new, large dataset and training a deep learning model capable of mapping field boundaries. We built a dataset of over 15,000 image-mask pairs using high-resolution National Agriculture Imagery Program (NAIP) satellite imagery and curated field boundary labels. This dataset covers a variety of leading agricultural states and includes images taken at different scales to capture a wide variety of field sizes and layouts. We used this dataset to train an adapted ResUNet++ neural network model designed to segment crop fields. The trained model achieved around 0.8 for pixel-level accuracy, showing it can generally identify field areas well. However, its performance in matching predicted individual field instances with the ground truth instances (measured by mean instance Intersection over Union, or mIoU) was around 0.5. This lower instance score was largely due to the post-processing step, which converts the model’s probability predictions into separate field instances. Despite this, the field polygons produced by our approach are visually coherent with satellite field images and can be readily used with geospatial tools like Google Earth Engine. Our work provides a practical starting point for future research on mapping fields across the contiguous U.S. Potential directions for improvements may involve developing sharper boundary predictions, exploring direct instance segmentation models, refining post-processing methods, and expanding the dataset to include more challenging areas.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Implementation of an Analog High Power Broadband Self Interference Canceller for In Band Full Duplex</title>
<link href="https://hdl.handle.net/1721.1/162679" rel="alternate"/>
<author>
<name>Hanly, Bianca Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/162679</id>
<updated>2025-09-19T04:48:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Design and Implementation of an Analog High Power Broadband Self Interference Canceller for In Band Full Duplex
Hanly, Bianca Marie
A self-interference canceler (SIC) is the principal component that enables simultaneous transmit and receive (STAR) in radio signal broadcasting. Previous research and designs by other groups have resulted in systems that either operate at high powers or are capable of cancellation over a wide bandwidth. This work seeks to build upon previous research to design an analog SIC capable of both high-power (∼100 W) and wide-instantaneous-bandwidth (∼1 GHz) cancellation. The system is designed as a vector modulator using off-the-shelf hybrid couplers and switches, with a custom variable attenuator built from PIN diodes in a Waugh attenuator architecture. The system was fabricated on a four-layer PCB and measured with a network analyzer. Simulated results for the variable attenuator and the overall vector modulator are presented.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementation of Semantic SLAM on a Mobile&#13;
Manipulator System</title>
<link href="https://hdl.handle.net/1721.1/162678" rel="alternate"/>
<author>
<name>Francis, Zachary R.</name>
</author>
<id>https://hdl.handle.net/1721.1/162678</id>
<updated>2025-09-19T04:48:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Implementation of Semantic SLAM on a Mobile&#13;
Manipulator System
Francis, Zachary R.
In the field of robotics, the development of household robots capable of performing everyday tasks continues to be a major area of research and practical interest. Many domestic chores—such as picking up and moving objects from one location to another—have been successfully performed by stationary robotic manipulators paired with visual perception systems. However, accomplishing more complex, varied, and spatially distributed tasks in real-world home environments requires a mobile platform with a more human-like form factor. These tasks demand greater flexibility, spatial awareness, and interaction capabilities than fixed systems can typically provide. This work focuses on the RBY1 robot from Rainbow Robotics, a humanoid platform designed to support advanced manipulation and mobility. A range of tools and modules were developed to enhance its functionality, including software for semantic perception, task execution, and environment interaction. This thesis provides a technical overview of these tools, highlighting their roles in collecting new datasets that can be used for semantic SLAM research. In the future, these tools can enable the robot to operate more effectively in domestic settings, towards the ultimate goal of enabling more capable home-assistive robots.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Introductory Computer Science Students’&#13;
Programming Process When Using a Generative AI Tutor&#13;
(PyTutor)</title>
<link href="https://hdl.handle.net/1721.1/162677" rel="alternate"/>
<author>
<name>Cunningham, Caroline K.</name>
</author>
<id>https://hdl.handle.net/1721.1/162677</id>
<updated>2025-09-19T04:48:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Improving Introductory Computer Science Students’&#13;
Programming Process When Using a Generative AI Tutor&#13;
(PyTutor)
Cunningham, Caroline K.
This thesis examined students’ programming process while using PyTutor, a generative AI tutor for introductory computer science students. The research questions were: (1) How does the process of test case creation, with or without PyTutor’s Test Case Runner, impact students’ programming process while using PyTutor? (2) How can prompt engineering of PyTutor’s system prompt be leveraged to improve AI Chat response quality with respect to: (a) reducing the amount of code revealed in the answer, (b) improving the conciseness of responses, and (c) having the AI chat give the student test cases as a tool to understand code correctness? (3) How do PyTutor’s responses from the updated prompt affect the programming process for computer science students? A key finding from a focus group in the first stage (n=9), apart from test cases, was that the majority of participants who asked questions of PyTutor received at least three lines of code, which is not ideal for PyTutor’s pedagogical purpose. This discovery motivated the next phase of the thesis, prompt engineering PyTutor, which resulted in an updated prompt. Responses from both the updated prompt and the original prompt were scored using an evaluation rubric. For the rubric’s "Students thinking through problem" category, the distribution of points for responses from the updated prompt was statistically significantly greater than the distribution for responses from the original prompt. Finally, participants were asked to solve a programming problem using either PyTutor with the updated prompt (n=10) or PyTutor with the original prompt (n=2). Across the focus groups from the first and final stages, I found that fewer participants who used PyTutor with the updated prompt received at least three lines of code. Furthermore, participants who used PyTutor with the updated prompt required a greater number of messages before first receiving three lines of code. Additionally, all four participants who received at least three lines of code from PyTutor with the updated prompt asked mostly high-level questions. As participant feedback suggested that PyTutor’s responses to high-level questions could be repetitive, this data highlights a new direction: improving PyTutor’s responses to high-level questions to benefit students’ programming process.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A set-theoretic approach to state estimation.</title>
<link href="https://hdl.handle.net/1721.1/162614" rel="alternate"/>
<author>
<name>Hnyilicza, Esteban.</name>
</author>
<id>https://hdl.handle.net/1721.1/162614</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1969-01-01T00:00:00Z</published>
<summary type="text">A set-theoretic approach to state estimation.
Hnyilicza, Esteban.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1969; Bibliography: leaves 112-113.
</summary>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High Precision Binary Trait Association on Phylogenetic Trees</title>
<link href="https://hdl.handle.net/1721.1/162565" rel="alternate"/>
<author>
<name>Balogun, Ishaq O.</name>
</author>
<id>https://hdl.handle.net/1721.1/162565</id>
<updated>2025-08-28T03:07:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">High Precision Binary Trait Association on PhylogeneticTrees
Balogun, Ishaq O.
Understanding how genetic variation drives microbial phenotypes is fundamental to advancing microbiology, particularly in pathogenicity, drug resistance, and host adaptation. Traditional genome-wide association study (GWAS) methods fail to account for shared evolutionary history, confounding association analyses. Microbial GWAS approaches emerged to address this, but modern methods often lack the statistical power to detect associations while controlling false discoveries, and face computational limits at scale. Here, we present SimPhyNI (Simulation-based Phylogenetic iNteraction Inference), a computational framework for detecting binary trait-trait associations in microbial populations. &#13;
&#13;
SimPhyNI uses stochastic simulations of trait evolution on phylogenetic trees to detect positive and negative associations with high precision and recall. Benchmarking on large synthetic datasets, SimPhyNI achieved a precision-recall AUC (PR AUC) of 0.987 and 0.975 for positive and negative interactions, respectively, indicating near-perfect discrimination of true from neutral associations. Competing methods showed substantially lower performance, especially for negative associations. We further applied SimPhyNI to empirical datasets, recovering known biology and generating plausible hypotheses for novel mechanisms. &#13;
&#13;
Though tested here on binary traits, SimPhyNI’s design supports future extension to multi-state and continuous traits using generalized models. Its high recall also makes it well-suited for constructing gene interaction networks and identifying co-evolving trait modules. By combining evolutionary modeling with scalable statistics, SimPhyNI advances our ability to uncover the genetic interactions that drive microbial function, ecology, and disease.
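The core simulation-based test can be sketched as follows. This toy omits the phylogeny entirely (SimPhyNI's defining ingredient) and simulates independent traits only to show how a simulated null distribution yields a p-value for an observed co-occurrence count.

    import numpy as np

    rng = np.random.default_rng(0)
    n_taxa, n_sims = 200, 2000

    # Observed presence/absence of two traits across the tips of a tree;
    # trait B is constructed to co-occur with trait A (positive interaction).
    trait_a = rng.random(n_taxa) > 0.5
    trait_b = np.where(trait_a, rng.random(n_taxa) > 0.3, rng.random(n_taxa) > 0.8)
    observed = np.sum(trait_a * trait_b)

    # Null: the two traits arise independently, preserving each trait's
    # overall prevalence (phylogenetic structure omitted in this toy).
    pa, pb = trait_a.mean(), trait_b.mean()
    null = [np.sum((rng.random(n_taxa) > 1 - pa) * (rng.random(n_taxa) > 1 - pb))
            for _ in range(n_sims)]
    p_value = (1 + np.sum(np.asarray(null) >= observed)) / (n_sims + 1)
    print(f"observed co-occurrence {observed}, p = {p_value:.4f}")

In the real framework the null traits evolve along the fitted tree, so shared ancestry alone cannot produce a significant score.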
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Foundations for Building an Innovation-Centric Product Development Framework for Medical Devices</title>
<link href="https://hdl.handle.net/1721.1/162564" rel="alternate"/>
<author>
<name>Rajan, Neena E.</name>
</author>
<id>https://hdl.handle.net/1721.1/162564</id>
<updated>2025-08-28T03:07:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Foundations for Building an Innovation-Centric Product Development Framework for Medical Devices
Rajan, Neena E.
The medical device industry, governed by a tight regulatory landscape, often relies heavily on structured Product Development Processes (PDPs) to bring innovative solutions to market. These structured processes create significant challenges when integrating technological innovations that emerge in the later stages of the development cycle. This study explores the complexities of this "innovation paradox" within large United States-based medical device corporations, examining how the rigidity of traditional PDP models affects the incorporation of innovative changes into in-flight projects. Drawing upon insights from a comprehensive literature review and a quantitative analysis utilizing a Monte Carlo simulation, this research highlights the impact of integrating an innovative change on the overall project timeline and cost. The simulation results show that introducing innovative changes to the PDP typically extends project timelines and increases total net present costs, with both effects depending on the timing of the change and its technological maturity. Introducing changes in later project phases significantly increases both duration and cost compared to earlier phases. Changes with lower technological maturity led to greater duration and cost escalations, especially when introduced late in the development cycle. To balance regulatory requirements and PDP agility, large medical device companies can adopt hybrid PDP models, establish dedicated innovation assessment teams, create flexible product designs, and focus on value-driven innovations that meet patient and market needs.
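A minimal Monte Carlo sketch in the spirit of the analysis: phase durations are drawn from triangular distributions, a change injected at a given phase inflates downstream phases by a maturity-dependent rework penalty, and costs are discounted monthly. All distributions, penalties, and rates below are illustrative assumptions, not the study's inputs.

    import random

    random.seed(0)

    PHASES = ["concept", "design", "verification", "launch"]
    BASE_MONTHS = {"concept": 6, "design": 12, "verification": 9, "launch": 3}

    def simulate(change_phase=None, maturity=1.0, rate=0.12, monthly_cost=1.0):
        """One draw of project duration (months) and discounted cost."""
        months, cost, t = 0.0, 0.0, 0.0
        for phase in PHASES:
            d = random.triangular(0.8, 1.6, 1.0) * BASE_MONTHS[phase]
            if change_phase is not None and PHASES.index(phase) >= PHASES.index(change_phase):
                d *= 1.0 + (1.0 - maturity) + 0.25      # rework penalty (assumed)
            for _ in range(round(d)):
                cost += monthly_cost / (1 + rate / 12) ** t
                t += 1
            months += d
        return months, cost

    def mean(runs):
        ms, cs = zip(*runs)
        return sum(ms) / len(ms), sum(cs) / len(cs)

    base = mean([simulate() for _ in range(5000)])
    late = mean([simulate("verification", maturity=0.4) for _ in range(5000)])
    print(f"baseline: {base[0]:.1f} mo, net present cost {base[1]:.1f}")
    print(f"late low-maturity change: {late[0]:.1f} mo, net present cost {late[1]:.1f}")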
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Savaal: A system for automatically generating high-quality questions from unseen documents</title>
<link href="https://hdl.handle.net/1721.1/162563" rel="alternate"/>
<author>
<name>Chandler, Joseph A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162563</id>
<updated>2025-08-28T03:07:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Savaal: A system for automatically generating high-quality questions from unseen documents
Chandler, Joseph A.
Assessing human understanding through exams and quizzes is fundamental to learning and advancement in both educational and professional settings. However, current solutions to automate the generation of challenging questions from educational materials and documents are insufficient, resulting in superficial or often irrelevant questions. While LLMs have been shown to excel in tasks like question answering, their use for question generation is underexplored for general domains and at scale. This work presents Savaal, a scalable question-generation system that generates higher-order questions from documents, as well as a real-world system implementation for general use. Savaal accomplishes three goals: (i) scalability, generating hundreds of questions from any document; (ii) depth of understanding, synthesizing higher-order concepts to test learners’ understanding of the material; and (iii) domain independence, generalizing broadly to any field. Rather than naively providing the entire document in context to an LLM, Savaal breaks down the process of generating questions into a three-stage pipeline. We demonstrate that Savaal outperforms the direct prompting baseline as evaluated by 76 human experts on 71 documents across conference papers and PhD dissertations. We additionally contribute a general system for serving Savaal in real-world scenarios. We demonstrate that our system is scalable, enabling fault-tolerant and horizontal scaling of each individual component in response to fluctuations in usage. Moreover, our architecture enables interactive usage and collaboration in groups, reflecting real-world organizations like classrooms or enterprises. We hope that the system enables scalable question generation for educational and corporate use-cases.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Objective Exploration of Refueling Architecture for Sustainable Crewed and Cargo Space Transportation</title>
<link href="https://hdl.handle.net/1721.1/162562" rel="alternate"/>
<author>
<name>Terakado, Daiki</name>
</author>
<id>https://hdl.handle.net/1721.1/162562</id>
<updated>2025-08-28T03:07:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multi-Objective Exploration of Refueling Architecture for Sustainable Crewed and Cargo Space Transportation
Terakado, Daiki
This thesis presents a new integrated framework for evaluating in-space refueling architectures, focusing on their application to human space missions such as Artemis. The framework tightly couples vehicle sizing with a boil-off control model, allowing the evaluation of various combinations of propellant types, refueling locations, and boil-off controls. The model captures the dynamic interdependence among the components of the refueling system, the transport vehicle, the refueler, and the depot, using an iterative approach to ensure consistent mass estimates across configurations.&#13;
&#13;
The framework is applied to analyze human landing system (HLS) architectures with refueling in cis-lunar space. The key findings highlight the mass-savings benefits of cryocoolers, the benefits of the high Isp of LOX/LH2, the benefits of NRHO refueling given acceptable ΔV requirements, and the positive and negative effects of reusability on mass and mission time. Furthermore, the study indicates that the number of required refueling events is more sensitive to payload and refueler capacity than to boil-off losses.&#13;
&#13;
To extend the framework toward long-term, scalable transportation solutions, the thesis compiles a comprehensive set of figures of merit (FoMs) and discusses future model extensions, including risks, ISRU, and electric propulsion. Limitations such as the lack of reusable-configuration flexibility and insufficient support for Mars mission parameters are identified as areas for future development. This work provides a foundational framework for the exploration of refueling architectures and solid next steps toward designing sustainable and scalable human space transportation systems.
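The iterative mass-closure idea can be sketched with the rocket equation: structural mass grows with the propellant load and a boil-off margin is added on top, so the propellant estimate must be iterated to a fixed point. All masses, the tank fraction, and the boil-off fractions below are illustrative, not the thesis's values.

    import math

    G0 = 9.80665                                      # m/s^2

    def sized_propellant(dry_base, payload, dv, isp, tank_frac=0.08,
                         boiloff_frac=0.0):
        """Fixed-point sizing: tanks grow with propellant, boil-off margin
        is added on top, so the estimate is iterated to self-consistency
        (a toy version of the coupled sizing loop)."""
        prop = 0.0
        for _ in range(50):
            dry = dry_base + tank_frac * prop         # structure scales with prop
            needed = (dry + payload) * (math.exp(dv / (isp * G0)) - 1.0)
            prop = needed * (1.0 + boiloff_frac)      # cover losses before burn
        return prop

    # LOX/LH2-class Isp; delta-v and masses are illustrative only.
    for bo in (0.0, 0.05, 0.15):
        p = sized_propellant(dry_base=15e3, payload=10e3, dv=2600, isp=450,
                             boiloff_frac=bo)
        print(f"boil-off {bo:4.0%}: propellant {p / 1e3:6.1f} t")

The loop converges quickly because the tank-mass feedback factor is well below one, and the printed spread shows how boil-off margin compounds through the structure.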
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Embedded Software-Defined Radio Architectures for 6G Cellular Networks</title>
<link href="https://hdl.handle.net/1721.1/162561" rel="alternate"/>
<author>
<name>Urbonas, Jonas</name>
</author>
<id>https://hdl.handle.net/1721.1/162561</id>
<updated>2025-08-28T03:07:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Embedded Software-Defined Radio Architectures for 6G Cellular Networks
Urbonas, Jonas
Over the past decades, the widespread adoption of wireless communication technologies in the industrial, scientific, medical, defense, and commercial sectors has resulted in substantial advancements in digital radio technologies. Each new generation of cellular technology, beginning with 1G, has introduced novel use-case scenarios that have challenged the performance of the prevailing digital radio architectures. The newly proposed scenarios for 5G-Advanced and for the upcoming 6G cellular networks, due to be standardized by 2030, are no exception. The emerging 6G network components, such as space-air-ground integrated cell-less networks and the artificial intelligence-native network architecture, drive the demand for flexible and fully reconfigurable radio units supporting multi-GHz instantaneous signal bandwidths, frequency-agile radio architectures covering multi-octave frequency ranges, and highly sensitive receivers.&#13;
&#13;
To support these requirements, software-defined radios (SDR) are becoming an essential building block of next-generation radio networks. This thesis presents a review of software-defined radio technology, examines its history, proposes the requirements of SDR units for 6G cellular networks, and presents a quantitative performance analysis of over 2 million distinct SDR architectures that could be used in 6G communication networks. It does so by defining the key system architectural decisions and their options, including data converter, filter, mixer, and amplifier technologies. It also examines different radio transmitter and receiver architectural topologies, including baseband sampling, IF sampling, direct RF sampling, and fully digital RFSoC, and constructs a multi-attribute utility (MAU) to quantify system performance. The MAU is used to build a tradespace of SDR architectures, enabling the identification of the Pareto frontier. Analysis of SDR system architectures on the Pareto frontier reveals that the performance of direct RF sampling SDR architectures is highly competitive with industry-standard IF sampling. The tradespace is also used to analyze the sensitivity of system performance to individual architectural decisions via a main-effect analysis, allowing quantification of the connectivity and sensitivity of available architectural decisions. Sensitivity analysis reveals that system performance is highly sensitive to receiver architectural decisions, particularly analog-to-digital converters, indicating the need for continued advances in this technology to produce high-performance SDR systems.
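A toy version of the tradespace step: score each architecture with an additive multi-attribute utility and keep the cost-utility points that no other point dominates. The attribute scores and weights are placeholders, not the thesis's values.

    WEIGHTS = {"bandwidth": 0.40, "sensitivity": 0.35, "agility": 0.25}

    # Toy tradespace: normalized attribute scores (0-1) plus a cost column.
    ARCHS = {
        "baseband sampling": dict(bandwidth=0.30, sensitivity=0.90, agility=0.40, cost=0.30),
        "IF sampling":       dict(bandwidth=0.60, sensitivity=0.80, agility=0.60, cost=0.50),
        "direct RF":         dict(bandwidth=0.90, sensitivity=0.70, agility=0.90, cost=0.80),
        "RFSoC":             dict(bandwidth=0.95, sensitivity=0.60, agility=0.95, cost=0.90),
    }

    def mau(a):
        """Additive multi-attribute utility over weighted, normalized attributes."""
        return sum(w * a[k] for k, w in WEIGHTS.items())

    points = {name: (a["cost"], mau(a)) for name, a in ARCHS.items()}

    # A point is dominated if another point is no more costly and no less useful.
    frontier = [name for name, (c, u) in points.items()
                if not any(c >= c2 and u2 >= u and (c2, u2) != (c, u)
                           for c2, u2 in points.values())]
    print("utilities:", {n: round(u, 3) for n, (c, u) in points.items()})
    print("Pareto frontier:", frontier)

Even with these invented numbers, direct RF sampling lands on the frontier alongside IF sampling, the same qualitative shape the analysis reports at scale.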
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Design of Architected Lattices for Construction Applications</title>
<link href="https://hdl.handle.net/1721.1/162560" rel="alternate"/>
<author>
<name>Leamon, Sophie</name>
</author>
<id>https://hdl.handle.net/1721.1/162560</id>
<updated>2025-08-28T03:07:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computational Design of Architected Lattices for Construction Applications
Leamon, Sophie
Architected lattices have been utilized in aerospace and research applications for their modularity, scalability, reconfigurability, and high strength-to-weight properties. However, voxels have yet to find widespread integration in the residential or commercial construction industry because of the industry’s distinct system needs. This study identifies the pain points unique to the construction industry that have slowed or prevented the adoption of new practices, highlighting the industry’s reliance on known materials and methods, and on transparency of the design process, as major hurdles to the adoption of innovation. This study presents a computational approach to designing architected lattices that seeks to address these core issues by making building with architected lattice structures agnostic to material and manufacturing methodology. Three open-source computational approaches to architectural design are proposed: 1) integration of support structures for additively manufactured structures; 2) parametric design of voxels from 2D material, their manufacturing molds, and optional alignment features; and 3) generation of two-dimensional cut files for assembly with 3D-printable joinery. These files are computationally designed and arranged for instantaneous production to demystify the lattice architectural design process, establish a pathway for utilizing all available materials in lattice construction, reduce the overhead costs of experimentation with lattice structures, and eliminate barriers to the fabrication process by enabling accessible manufacturing methods.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Strategic Intent and Competitive Dynamics of China’s Satellite Communications Constellations</title>
<link href="https://hdl.handle.net/1721.1/162559" rel="alternate"/>
<author>
<name>Delkowski, Michal.</name>
</author>
<id>https://hdl.handle.net/1721.1/162559</id>
<updated>2025-08-28T03:07:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evaluating the Strategic Intent and Competitive Dynamics of China’s Satellite Communications Constellations
Delkowski, Michal.
This thesis examines the strategic, technical, and economic feasibility of China’s two flagship low Earth orbit (LEO) satellite megaconstellation programs, Guowang and Qianfan, in the context of the rapidly evolving global satellite communication (Satcom) market. Against the backdrop of SpaceX’s Starlink dominance and intensifying geopolitical competition, China’s efforts represent not only a telecommunications infrastructure push but also a broader assertion of technological sovereignty and global influence. This study uses a scenario-based analysis that integrates system throughput analysis and financial forecasting. Three deployment scenarios (base, optimistic, and pessimistic) are analyzed, accounting for satellite production rates, launch capabilities, and regional adoption patterns, particularly across Belt and Road Initiative (BRI) markets. The study also evaluates "system-of-systems" integration with China’s military objectives and spectrum coordination challenges. Key findings reveal that Guowang becomes marginally viable only in the optimistic scenario, assuming deployment of at least 9,000 satellites, reduced satellite unit costs (targeting ~$300,000 per satellite), expanded gateway infrastructure, and realization of these targets by 2035, while remaining unviable in the base and pessimistic cases. Qianfan faces greater commercial risk, achieving viability only with early adoption in BRI countries and government dual-use contracts; in the pessimistic case it incurs an NPV loss exceeding $76B. Resource allocation problem (RAP) modeling suggests that projected throughput may saturate early without major gateway expansion. Both constellations require China to scale reusable rockets and sustain a combined annual launch rate exceeding 1,000 satellites by the early 2030s. Neither constellation meets China’s 2030 rural broadband targets under base-case conditions: over 40% of the 336M unconnected citizens remain underserved without terminal subsidies. Ultimately, China’s LEO Satcom strategy depends not on satellite count alone but on coordinated progress in launch economics, affordability, dual-use policy, and international partnerships.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of CPG budgets in Retailer-led marketing&#13;
programs</title>
<link href="https://hdl.handle.net/1721.1/162558" rel="alternate"/>
<author>
<name>Gandhi, Abhinav</name>
</author>
<id>https://hdl.handle.net/1721.1/162558</id>
<updated>2025-08-28T03:07:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimization of CPG budgets in Retailer-led marketing&#13;
programs
Gandhi, Abhinav
Grocery retailers and Consumer Packaged Goods (CPG) companies have a symbiotic relationship. Retailers need CPGs to supply the products, and CPGs need retailers’ customers to grow their brands. Since shelf space is limited, CPGs offer trade and marketing funds to prominently feature their brands.&#13;
As part of loyalty programs, retailers offer coupons to customers that are often funded by CPGs. In return, CPGs expect a return on their investment (ROI). Since budgets are limited and are also expected to be fully utilized, it becomes a challenge for the retailer to find the right size of mailer that balances cost and relevance to customers. This thesis explores how knapsack problems can be used in a non-adaptive setting to help maximize the reach of print and email campaigns.&#13;
Drawing inspiration from the existing literature, multiple simulations were set up to evaluate budget-constrained allocation and compare two approaches: the multiple-choice knapsack (MCK) and a greedy algorithm. To account for uncertainty in redemption, the newsvendor model was also explored to assess whether deliberate over-allocation can improve budget utilization and increase reach. The preliminary findings are promising and provide a setting for further research.
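As an illustrative aside (not the thesis code; coupon values, costs, and the budget are invented), the greedy baseline can be sketched as:

    # Greedy baseline: rank coupons by expected value per dollar of budget
    # and allocate until the budget is exhausted.
    def greedy_allocate(coupons, budget):
        ranked = sorted(coupons, key=lambda c: c["value"] / c["cost"], reverse=True)
        chosen, spent = [], 0.0
        for c in ranked:
            if budget >= spent + c["cost"]:
                chosen.append(c)
                spent += c["cost"]
        return chosen, spent

    coupons = [{"value": v, "cost": c} for v, c in [(5, 2), (8, 4), (3, 1), (6, 5)]]
    print(greedy_allocate(coupons, budget=7.0))   # picks the three best-ratio coupons

The multiple-choice knapsack variant adds the constraint that at most one coupon is chosen per customer-offer group, which greedy ranking alone does not enforce.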
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploration of design strategies and optimization for efficient mass timber structures as a function of column position</title>
<link href="https://hdl.handle.net/1721.1/162557" rel="alternate"/>
<author>
<name>Gerken, Christoph</name>
</author>
<id>https://hdl.handle.net/1721.1/162557</id>
<updated>2025-08-28T03:07:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Exploration of design strategies and optimization for efficient mass timber structures as a function of column position
Gerken, Christoph
The building sector is responsible for a large share of global carbon emissions. As the load-bearing structure is particularly material-intensive, a decisive shift can be achieved by improving its design and decreasing its volume. This thesis examines how structural mass timber floor systems can be designed in an efficient, low-waste manner through a design-oriented approach that is immediately applicable within the context of conventional construction techniques and building practices. Reducing material in timber structures has economic and ecological benefits: reduced timber demand entails significant cost savings and decreased building weight, which considerably cuts embodied carbon.&#13;
Since common floor systems act mainly in bending, this work focuses on reducing moment forces in standard setups comprised of timber slabs, beams, and columns. In principle, bending forces in beams and slabs can be reduced by moving the supports inwards, leading to overhanging structural elements. The original method presented in this thesis shows how this approach applies to conventional mass timber floor systems. This work provides an understanding of how informed column positioning can take advantage of this behavior and allows for material and embodied carbon reduction through design. The consequent architectural implications of the resulting irregular column grid are explored in a floor plan design suggestion.&#13;
Material demand and embodied carbon are evaluated as a function of column position through finite element analysis and optimization within a computational model. By consulting a mass timber manufacturer’s catalogue to assign appropriate products to structural members, this approach enables material reduction in the design process rather than in production. Bypassing slow-changing, inert fabrication procedures, the method can be applied immediately.&#13;
This work identifies the optimal support position for reducing bending forces in beams and slabs to be at 41% of the distance from the element’s edge to its midspan. Furthermore, this research finds that the impact of ideal column position on material efficiency depends on the required minimum effective spans. While negligible in the absence of constraints, informed column positioning can reduce timber demand by 20% and embodied carbon by 16% under a minimum effective span requirement of 6 m – a common span in timber construction – in a building of 30x30 m and five floors. Building dimensions are found to have an insignificant impact on these results.&#13;
This thesis illustrates the potential for architects and engineers to enhance structural efficiency of mass timber floor systems merely by deviating from the usual, regular column grid and taking advantage of straightforward structural principles through design.
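The 41% result can be reproduced with a short numerical check under idealized assumptions (uniform load, symmetric overhangs, and moment equalization as the design criterion; this sketch is illustrative, not the thesis’s finite element model):

    # For a uniformly loaded beam of length L with symmetric overhangs a,
    # the support moment is w*a**2/2 and the midspan moment is
    # w*(L - 2*a)**2/8 - w*a**2/2; the governing moment is minimized
    # where the two are equal.
    L, w = 1.0, 1.0
    best_a, best_m = 0.0, float("inf")
    for i in range(1, 500):
        a = 0.5 * L * i / 500                  # support offset from the edge
        m_support = w * a**2 / 2
        m_midspan = abs(w * (L - 2*a)**2 / 8 - w * a**2 / 2)
        m = max(m_support, m_midspan)
        if best_m > m:
            best_a, best_m = a, m
    print(best_a / (0.5 * L))                  # about 0.414 of edge-to-midspan

The closed-form optimum is a = L / (2 + 2*sqrt(2)), roughly 0.207L, i.e. about 41% of the distance from the edge to midspan.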
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning for the Condition Assessment of&#13;
 Concrete Bridges</title>
<link href="https://hdl.handle.net/1721.1/162556" rel="alternate"/>
<author>
<name>Fayad, Fred</name>
</author>
<id>https://hdl.handle.net/1721.1/162556</id>
<updated>2025-08-28T03:07:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Machine Learning for the Condition Assessment of&#13;
 Concrete Bridges
Fayad, Fred
The assessment of concrete bridge conditions is critical for ensuring structural integrity and public safety. Traditional inspection methods, which rely heavily on visual inspections and manual assessments, are time-consuming, subjective, and prone to human error. With the increasing number of aging bridges worldwide, there is a growing need for more efficient and accurate methods to assess bridge health. This thesis aims to explore the application of machine learning techniques for automating the bridge condition assessment process and improving the accuracy and reliability of bridge evaluations.&#13;
 This study investigates the development and implementation of a model consisting of two machine learning algorithms to predict the condition of concrete bridges based on data collected from various public sources. The first algorithm appraises the structural health of a bridge based on its bridge rating, and the second assesses the condition of a bridge after a specific failure mechanism. Specifically, this work uses classification algorithms such as Random Forest (RF), XGBoost, and Neural Networks (NN) in both algorithms.&#13;
 The results of this study demonstrate that machine learning models can perform well in predicting bridge conditions: the overall model achieved a testing accuracy of 79%. This research contributes to the field of civil engineering by showcasing the potential of machine learning in infrastructure management. By automating the assessment process, the proposed models can help reduce the time and cost of inspections while providing more accurate data to guide maintenance planning and bridge rehabilitation efforts. Future work will focus on further optimizing the models, incorporating additional data sources, and deploying the system for real-time bridge monitoring.
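As an illustrative aside (synthetic features and labels below; the thesis trains on real public bridge data), the classification step for one of the algorithms might look like:

    # Train a Random Forest classifier on placeholder bridge features.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((1000, 6))        # e.g. age, traffic, span, material, region, climate
    y = rng.integers(0, 4, 1000)     # synthetic condition-rating classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(accuracy_score(y_te, model.predict(X_te)))

With random labels this sketch hovers near chance accuracy; the point is only the pipeline shape, not the reported 79%.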
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Comparison of Theoretical and Actual Coumarin Exudation Under Iron Limitation to Understand Root Exudation Mechanics</title>
<link href="https://hdl.handle.net/1721.1/162555" rel="alternate"/>
<author>
<name>Van Note, Lana</name>
</author>
<id>https://hdl.handle.net/1721.1/162555</id>
<updated>2025-08-28T03:08:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Comparison of Theoretical and Actual Coumarin Exudation Under Iron Limitation to Understand Root Exudation Mechanics
Van Note, Lana
Nutrient cycling is an important component of plants’ immune systems, largely driven by the exudation of environmentally influential metabolites from roots. Root exudation may be driven by multiple distinct mass-transport mechanisms, both active and passive; the latter is not well studied despite being labelled a significant driver of low molecular weight metabolite exudation. This research investigates the generally accepted assumption that low molecular weight metabolites, including iron-fixing coumarins (scopoletin, fraxetin, etc.), are primarily exuded passively, while high molecular weight metabolites follow an active exudation pathway. Scopoletin and scopolin exudation from Arabidopsis thaliana under low-iron and replete conditions is quantified to determine whether the hypothesized passive diffusion mechanism is a significant contributor to coumarin exudation. LC-MS analysis suggests that passive diffusion of scopoletin and scopolin from roots plays a significant role in total coumarin exudation. Further research should investigate the implications of passive coumarin exudation for long-term iron storage and soil health, as well as the relationship between coumarin production and exudation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Offshore floating solar with compressed air storage as a&#13;
baseload power plant for a data center</title>
<link href="https://hdl.handle.net/1721.1/162554" rel="alternate"/>
<author>
<name>Athanasopoulos, Panagiotis Rafail</name>
</author>
<id>https://hdl.handle.net/1721.1/162554</id>
<updated>2025-08-28T03:08:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Offshore floating solar with compressed air storage as a&#13;
baseload power plant for a data center
Athanasopoulos, Panagiotis Rafail
This thesis presents the conceptual design, technical modeling, and economic analysis of a novel offshore floating solar energy system integrated with Compressed Air Energy Storage (CAES) for reliable baseload power delivery to coastal data centers. The system architecture is modular, consisting of multiple “powercells,” each comprising a 5×5 photovoltaic (PV) array mounted above a matrix of submerged compressed air storage cylinders anchored below the floating platform, addressing the energy resilience and spatial constraints of coastal computing infrastructure. This scalable configuration enables distributed energy collection and localized storage, tailored to meet site-specific demands. Detailed thermodynamic modeling of both charging and discharging cycles is conducted, with analytical solutions validated against a full numerical implementation. Results show that under realistic operating assumptions, the temperature inside the storage vessels remains nearly isothermal due to the long charging duration and large heat exchange surface, enabling a simplified energy balance model.&#13;
&#13;
A techno-economic analysis evaluates both structural steel requirements and photovoltaic investment, benchmarked against market data from 2024. Key metrics such as structural cost per unit energy ($/kWh) and per rated power output ($/kW) are derived. The hybrid system is found to be economically competitive with lithium-ion (Li-ion) battery alternatives, offering extended lifespan (20–30 years), lower material costs, and enhanced sustainability through avoidance of critical minerals. Environmental and mooring considerations for offshore deployment are also addressed, demonstrating the feasibility of integrating energy generation, storage, and maritime infrastructure. This work advances the development of resilient, decarbonized energy systems aligned with global renewable energy targets and the rising demand for sustainable data center operations.
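As a back-of-envelope aside (pressure, volume, and the isothermal idealization below are assumed, not the design values), the near-isothermal storage behavior simplifies the recoverable-energy estimate:

    # Ideal isothermal expansion work from storage pressure p1 back to
    # ambient p0: W = p1 * V * ln(p1 / p0). Assumed, illustrative numbers.
    import math
    p0 = 101_325.0              # ambient pressure, Pa
    p1 = 30 * p0                # assumed storage pressure (~30 bar)
    V = 10.0                    # assumed storage volume per cylinder, m^3
    W = p1 * V * math.log(p1 / p0)
    print(W / 3.6e6, "kWh per cylinder (ideal upper bound)")

Real recoverable work is lower once expansion against the atmosphere and machine efficiencies are included.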
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Destructive Behaviors in Naval Shipyards: A STAMP and System Dynamics Analysis</title>
<link href="https://hdl.handle.net/1721.1/162552" rel="alternate"/>
<author>
<name>Brower, Braden C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162552</id>
<updated>2025-08-28T03:08:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Destructive Behaviors in Naval Shipyards: A STAMP and System Dynamics Analysis
Brower, Braden C.
United States Navy Refueling and Complex Overhauls (RCOHs), and other extended maintenance availabilities, present uniquely demanding environments where Sailors face elevated risks for destructive behaviors, including suicide and substance abuse. Prolonged exposure to harsh industrial conditions, significantly degraded Quality of Service, demanding workloads, and critical manning shortfalls create cumulative stress distinct from operational duty. These destructive behaviors severely impact personnel’s well-being, erode force readiness through attrition and morale issues, and indicate systemic contributing factors as highlighted by recent investigations into carrier suicides during shipyard periods.&#13;
&#13;
This thesis utilizes Causal Analysis based on Systems Theory (CAST), grounded in systems thinking, to analyze the USS George Washington RCOH events and identify the underlying safety control structure flaws that contributed to this hazardous environment. Insights from the CAST analysis were then integrated with a qualitative System Dynamics model to better understand the feedback loops and dynamic interactions driving system behavior, particularly revealing a capability trap dynamic exacerbated by resource constraints and personnel pressures.&#13;
&#13;
The analysis identified critical, interacting systemic flaws across multiple organizational levels that contributed to the accident: (a) inadequate strategic resourcing and manning prioritization for RCOH personnel support, (b) deficient planning, risk management, and oversight processes that were ineffective at protecting Sailor well-being amidst budget and schedule pressures, (c) ineffective feedback mechanisms that prevented critical information from reaching decision-makers, and (d) reliance on flawed assumptions regarding the RCOH environment, Sailor resilience, and standard process adequacy. Based on these findings, the thesis provides actionable, systemically focused recommendations aimed at strengthening the Navy's safety control structure by improving decision makers’ mental models, enhancing feedback and oversight, enforcing well-being constraints, and fostering organizational learning. Combined, these recommendations empower leaders to proactively manage risks, reduce destructive behaviors, and ensure a safer, more resilient environment during future RCOHs.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Community Risk Preparedness for Flooding Emergencies: A System Dynamics Approach for the U.S. Army Corps of Engineers</title>
<link href="https://hdl.handle.net/1721.1/162551" rel="alternate"/>
<author>
<name>Hoyt, Thomas S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162551</id>
<updated>2025-08-28T03:08:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enhancing Community Risk Preparedness for Flooding Emergencies: A System Dynamics Approach for the U.S. Army Corps of Engineers
Hoyt, Thomas S.
Flooding events pose a significant and growing threat to communities in the United States, particularly as climate change alters weather patterns and sea levels continue to rise. This thesis examines how the U.S. Army Corps of Engineers (USACE) can enhance community preparedness for flood emergencies through improved risk communication strategies. Focusing on the New England District as a representative case, it integrates data from the Federal Emergency Management Agency’s (FEMA) National Household Survey and the National Flood Insurance Program (NFIP) claims archive to develop and calibrate a System Dynamics model of flood risk perception and preparedness.&#13;
The model built in this thesis incorporates key variables and captures the feedback loops that influence community preparedness over time. Scenario testing demonstrates that monthly to quarterly engagements by USACE help sustain risk awareness and reduce flood-related damage, whereas less frequent engagement demonstrates minimal improvement above the baseline. By contrast, barriers to action, such as complex procedures or limited access to information, can substantially slow the adoption of preparedness measures. High levels of trust in authorities further amplify the effectiveness of risk communication and foster community engagement.&#13;
This model quantifies the importance of frequent engagement, low barriers to action, and trust-building initiatives in reducing flood impact. Through calibration against historical claims and survey data, the model provides a robust framework that can guide USACE and partner agencies in refining their own flood risk communication strategies to bolster community resilience.
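The engagement-frequency effect can be illustrated with a toy stock-and-flow sketch (all parameters invented; the calibrated model in the thesis is far richer):

    # Risk awareness decays over time and is boosted by periodic outreach.
    dt, months = 1.0, 120
    decay = 0.10                   # fractional awareness loss per month (assumed)
    boost, interval = 0.3, 3       # outreach effect and cadence in months (assumed)
    awareness = 0.2
    for t in range(months):
        awareness += dt * (-decay * awareness)
        if t % interval == 0:
            awareness = min(1.0, awareness + boost)
    print(round(awareness, 3))

Lengthening the interval (e.g. annual outreach, interval = 12) lets awareness decay back toward baseline between engagements, mirroring the scenario-testing result.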
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural engineering model of irregular and efficient concrete beams: Application to topology optimized shapes and integrated textile reinforcement</title>
<link href="https://hdl.handle.net/1721.1/162550" rel="alternate"/>
<author>
<name>Stribos, Sophia</name>
</author>
<id>https://hdl.handle.net/1721.1/162550</id>
<updated>2025-08-28T03:08:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Structural engineering model of irregular and efficient concrete beams: Application to topology optimized shapes and integrated textile reinforcement
Stribos, Sophia
Concrete remains one of the most widely used construction materials due to its strength, durability, and availability. However, it is responsible for a large share of global carbon emissions: within the roughly 40% of global emissions attributed to the building sector, the production of cement, a key component in concrete, alone accounts for 5-8%. As the construction industry seeks innovations toward sustainable practices, alternative beam designs that improve material efficiency and introduce nontraditional reinforcement systems are emerging as promising options. However, accurate structural models capable of predicting and validating the performance of these innovative beams are often lacking, limiting their implementation in the industry, primarily due to safety and code-compliance concerns.&#13;
This thesis bridges this gap by developing and validating a structural engineering model to predict the shear and flexural capacities and the deflection of irregular, efficiently shaped concrete beams, including those with alternative reinforcement and formwork. The model discretizes a 3D beam geometry into 2D sections to perform a geometric and structural cross-sectional analysis along the beam’s length. The structural engineering model is applied to two case studies: a topology-optimized steel-reinforced concrete beam and an integrated knit textile reinforced concrete beam, using experimentally measured material properties and beam testing data. The predicted engineering model results are compared against experimental data to validate the model’s accuracy.&#13;
While the model accurately captured the behavior of the topology-optimized steel-reinforced beam, it slightly overestimated the strength of the knit-textile reinforced beam. For the topology-optimized beam, the engineering model closely matched the measured flexural capacity and gave slightly conservative estimates of shear and deflection, owing to the nature of the design equations. For the integrated knit textile beam, however, the model showed a minor overprediction of flexural capacity and deflection. Discrepancies were linked to inaccurate material properties, experimental imperfections, and prestressing effects. To further establish accuracy and reliability, additional beam analyses using this model are needed.&#13;
This research advances structural design by offering a tool for predicting the capacity and serviceability of irregular, efficiently shaped concrete beams, including those with alternative reinforcement. This thesis enables designers to validate and optimize their innovative beam designs and support their ideas as sustainable solutions within the concrete construction industry.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Decarbonization Pathways of Japan</title>
<link href="https://hdl.handle.net/1721.1/162549" rel="alternate"/>
<author>
<name>Suto, Sadami</name>
</author>
<id>https://hdl.handle.net/1721.1/162549</id>
<updated>2025-08-28T03:08:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Assessment of Decarbonization Pathways of Japan
Suto, Sadami
Developing realistic pathways for decarbonization is crucial for the success of climate change mitigation actions. To evaluate Japan’s pathways toward achieving carbon neutrality, this study enhances the MIT Economic Projection and Policy Analysis (EPPA) model and analyzes a suite of policy scenarios that combine domestic mitigation measures, such as the emissions targets from Japan’s updated Nationally Determined Contribution (NDC), power mix goals, and the availability of carbon capture and storage (CCS), with international emissions trading. The impacts on CO₂ emissions, GDP, consumption, carbon prices, and sectoral output in Japan between 2030 and 2050 are assessed.&#13;
&#13;
Under the baseline scenario, emissions remain flat over time at about 1,000 MtCO₂e, far exceeding the carbon neutrality goal. Even when Japan’s 2030 and 2040 NDC targets for CO₂ and the power mix are fully achieved, residual emissions of 100 – 200 MtCO₂e remain, which points to a need for carbon offsets. Relying on domestic-only measures is costly for Japan. In high-ambition domestic-only scenarios without CCS, carbon prices soar to over $46,000/tCO₂ by 2050, leading to GDP losses exceeding $1.5 trillion (23% of GDP) and significant contractions in key sectors of the economy.&#13;
&#13;
In contrast, scenarios incorporating international emissions trading enable Japan to achieve comparable total emissions reductions by partially relying on imported carbon credits. This mechanism significantly lowers marginal abatement costs, allowing carbon prices to stabilize at $20 –$30/tCO₂ and reducing GDP losses to about $100 billion (1.6% of GDP) by 2050.&#13;
&#13;
Scenarios that emphasize domestic reductions while flexibly using international credits emerge as manageable pathways. These scenarios achieve domestic emissions reductions of 40 – 60% by 2050, with carbon prices ranging from $140 to $340/tCO₂ and GDP losses contained between $150 and $290 billion (2.3% and 4.3% of GDP). Importantly, these scenarios incorporate the deployment of CCS, which plays a critical role in reducing marginal costs and enabling deeper abatement in hard-to-decarbonize sectors. Most industrial sectors maintain stable output, while carbon-intensive sectors undergo gradual structural transitions.&#13;
&#13;
Overall, these findings suggest that Japan can achieve carbon neutrality through an integrated strategy that combines strengthened domestic action, technological deployment, and international cooperation. This study provides a robust quantitative foundation for designing feasible, equitable, and cost-effective climate policies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact-Induced Bridge Failures: Analyzing Structural Vulnerabilities and Optimizing Pier Designs for Enhanced Resilience</title>
<link href="https://hdl.handle.net/1721.1/162548" rel="alternate"/>
<author>
<name>Ren, Daisy</name>
</author>
<id>https://hdl.handle.net/1721.1/162548</id>
<updated>2025-08-28T03:08:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Impact-Induced Bridge Failures: Analyzing Structural Vulnerabilities and Optimizing Pier Designs for Enhanced Resilience
Ren, Daisy
Due to the rise in global traffic in recent years, bridge failures due to impact effects are becoming an increasing concern, especially for aging infrastructure. Following the recent collapse of the Francis Scott Key Bridge, issues regarding bridge vulnerabilities and design deficiencies arose, highlighting the need for better design codes and protection for bridge piers. This study addresses these issues by examining bridges' impact-related structural failure mechanisms and by developing a comprehensive optimization framework to enhance the resilience of structures to dynamic impact forces in three phases: (i) statistical analysis of bridge failure data from the Multidisciplinary Center for Earthquake Engineering Research (MCEER), focusing on the frequency, bridge types, and bridge material trends associated with different bridge failures across the United States; (ii) development of a compliance-based truss optimization in MATLAB, applied to 2D representations of pier structures for different truss configurations (2x3, 3x4, 3x5) under stress, load, and volume constraints to simulate large-magnitude impact conditions; and (iii) design and validation of optimization results through mathematical calculations of compliance and strain energy to ensure consistency between numerical results and structural mechanics principles. Both fail-safe and shape optimization strategies are employed and compared across all truss configurations, revealing distinct design methodologies between maximum and minimum compliance optimizations and the trade-offs between stiffness and energy dissipation. Maximum compliance designs demonstrate increased redundancy and strain energy capacity, while minimum compliance designs show increased efficiency but are more prone to brittle failure. A final study using volume constraints further examines material distribution under realistic impact loads and highlights the importance of distributed load paths and deformation capacity in structural performance. This work provides a design framework for energy-absorbing pier geometries and aims to offer insight into improving current design standards for piers to account for extreme events and to help guide retrofitting efforts that could prevent future failures.
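For reference, the compliance used in such formulations is the work done by the applied load, C = fᵀu with Ku = f; lower compliance means a stiffer structure. A toy numeric check (illustrative 2-DOF system, not the thesis's MATLAB truss model):

    import numpy as np

    K = np.array([[ 4.0, -1.0],    # toy stiffness matrix
                  [-1.0,  2.0]])
    f = np.array([0.0, 10.0])      # impact load vector
    u = np.linalg.solve(K, f)      # nodal displacements from K u = f
    C = f @ u                      # compliance (twice the stored strain energy)
    print(C, 0.5 * C)              # compliance and strain energy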
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computing Economic Equilibria and Their Applications to&#13;
Market Games</title>
<link href="https://hdl.handle.net/1721.1/162547" rel="alternate"/>
<author>
<name>Bruce, Samuel G.</name>
</author>
<id>https://hdl.handle.net/1721.1/162547</id>
<updated>2025-08-28T03:08:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computing Economic Equilibria and Their Applications to&#13;
Market Games
Bruce, Samuel G.
The emergence of new technologies such as e-payments, tokenized assets, distributed ledgers, smart contracts, and encryption has created new opportunities for improving access and equity in financial institutions. These new tools can be used to build better infrastructure and improve economic efficiency, especially in previously underdeveloped countries. Using these tools in various applications, however, requires an intimate link between economics and computer science to ensure an implementation that is both computationally efficient and improves social welfare. There has been significant research in computer science concerning the computation of economic equilibria, specifically Nash equilibria and correlated equilibria. These algorithms, however, have not been used in many financial applications. Further, while research exists on various methods of computing correlated equilibria, little work has evaluated the quality of these equilibria in terms of economic efficiency in specific mechanisms. This work provides a sweeping view of the existing literature on equilibrium computation as well as an analysis of the economic and algorithmic tradeoffs of different approaches. The discussion begins with simple 2-player, finite-action games, then moves to more complex machine-learning-based methods for equilibrium computation in difficult settings. One of these methods is then extended to a limit-order market game explicitly described by Dubey [1] and implemented, with small modifications, by SPEEDEX [2]. This limit-order game offers a continuous, vector-valued action space with complex payoff functions, causing tension with many of the equilibrium computation algorithms explored previously. This paper identifies these tensions, then offers modifications to algorithms which allow tractable, welfare-improving approximate coarse correlated equilibrium computation. Finally, there is a discussion of future work aimed at generalizing the developed framework. The code corresponding to the equilibria computation will be released publicly in this repository [3].
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing SigmaOS for Efficient Orchestration of Fault-Tolerant, Burst-Parallel Workloads</title>
<link href="https://hdl.handle.net/1721.1/162546" rel="alternate"/>
<author>
<name>Chang, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/162546</id>
<updated>2025-08-28T03:08:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimizing SigmaOS for Efficient Orchestration of Fault-Tolerant, Burst-Parallel Workloads
Chang, Ryan
SigmaOS is a multi-tenant cloud operating system designed for efficient orchestration of fault-tolerant, burst-parallel workloads. It provides users with isolated cloud environments called realms, where resources are accessed through a Unix-like filesystem interface, and supports applications built from procs—lightweight, rapidly spawnable programs that can be either short-lived for bursty tasks or long-running and stateful for persistent services. However, the current prototype exhibits performance bottlenecks that hinder its scalability for larger, more demanding applications. This thesis addresses these limitations by introducing two key optimizations: (1) a rearchitected watch API, enhancing its efficiency and scalability for monitoring directory changes, which is crucial for inter-proc coordination and event notification, and (2) a new ft/task server, providing a robust and high-performance mechanism for managing fault-tolerant bags of tasks, essential for applications like MapReduce. Through these enhancements, this work demonstrates significant improvements in SigmaOS’s performance on the MapReduce benchmark, showcasing improved scaling for larger cluster deployments, larger inputs, and more granular tasks. These optimizations are crucial steps toward enabling SigmaOS to realize its vision as a scalable and performant platform for complex cloud workloads.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Modeling and Real-Time Optimal Control of Continuous Manufacturing Processes</title>
<link href="https://hdl.handle.net/1721.1/162543" rel="alternate"/>
<author>
<name>Gomez, Samuel John</name>
</author>
<id>https://hdl.handle.net/1721.1/162543</id>
<updated>2025-08-28T03:07:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data-Driven Modeling and Real-Time Optimal Control of Continuous Manufacturing Processes
Gomez, Samuel John
When faced with complex disturbances, continuous manufacturing processes require robust control and adaptability to maintain product quality and operational efficiency. Although advanced control strategies such as the linear quadratic regulator, model predictive control, and adaptive control have demonstrated strong performance, many industrial processes still rely predominantly on classical proportional-integral-derivative (PID) controllers because of their simplicity, ease of implementation, and adequate performance.&#13;
&#13;
This thesis investigates the effectiveness of data-driven modeling techniques in capturing system dynamics more accurately than traditional physics-based models. It further examines using a high-fidelity digital twin, constructed from experimental data via linear system identification and nonlinear deep learning (NARX) approaches, to optimize PID controller parameters through simulation-based gradient descent methods.&#13;
&#13;
A comprehensive experimental platform was developed to collect synchronized sensor and video data from a roll-to-roll continuous manufacturing system, specifically targeting disturbance scenarios that cause process interruptions. The digital twin created from these data was validated against physical experiments and shown to outperform conventional physics-based models when predicting the system’s dynamic response to disturbance inputs.&#13;
&#13;
Optimal control of the system was explored by implementing a virtual PID controller that closely replicates the physical controller. Optimal gain settings were identified through simulation and applied to the physical manufacturing process. The experimental results showed a significant reduction in the mean squared error and the maximum web deviation. These results demonstrate the substantial potential of digital twin-driven, data-centric control approaches in enhancing resilience, efficiency, and adaptability in manufacturing processes. This research also lays the foundation for the future development of real-time, adaptive, and autonomous control strategies in industrial applications.
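A minimal sketch of the simulation-based tuning idea (a toy first-order plant stands in for the digital twin; gains, step sizes, and the plant itself are illustrative):

    # Tune PID gains by finite-difference gradient descent on simulated error.
    def simulate_mse(gains, steps=500, dt=0.01, setpoint=1.0):
        kp, ki, kd = gains
        y, integ, prev_err, mse = 0.0, 0.0, setpoint, 0.0
        for _ in range(steps):
            err = setpoint - y
            integ += err * dt
            deriv = (err - prev_err) / dt
            u = kp * err + ki * integ + kd * deriv
            y += dt * (-y + u)             # toy first-order plant: dy/dt = -y + u
            prev_err = err
            mse += err * err / steps
        return mse

    gains = [1.0, 0.1, 0.01]
    for _ in range(200):
        grad = []
        for i in range(3):
            bumped = list(gains)
            bumped[i] += 1e-4
            grad.append((simulate_mse(bumped) - simulate_mse(gains)) / 1e-4)
        gains = [g - 0.05 * d for g, d in zip(gains, grad)]
    print(gains, simulate_mse(gains))

In the thesis, the simulated plant is the learned (system-identified or NARX) digital twin rather than this toy model.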
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Power and Progress in Japan: The Past, Present, and Future of Japan as a Tech Powerhouse</title>
<link href="https://hdl.handle.net/1721.1/162542" rel="alternate"/>
<author>
<name>Maruyama, Shun</name>
</author>
<id>https://hdl.handle.net/1721.1/162542</id>
<updated>2025-08-28T03:08:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Power and Progress in Japan: The Past, Present, and Future of Japan as a Tech Powerhouse
Maruyama, Shun
This paper analyzes Japan’s economic and technological history since the Meiji Restoration through the framework of Power and Progress proposed by Acemoglu and Johnson (2023), focusing on the concepts of direction of technology and productivity bandwagon. A historical review reveals that technological progress and the distribution of its benefits were not determined solely by market mechanisms or technological inevitability, but were shaped by the power dynamics among governments, companies, workers, and others. Periods when workers held strong bargaining power and inclusive social institutions were in place saw the emergence of a virtuous cycle, in which the direction of technology moved toward broad-based innovation and the productivity bandwagon functioned effectively. Conversely, after the collapse of the bubble economy, a shift in the power balance in favor of companies led to a rise in short-term cost-cutting, resulting in a divergence from inclusiveness and innovation in the direction of technology, as well as a breakdown of the productivity bandwagon. This ultimately undermined Japan’s ability to leverage the strengths of its production system and led to a decline in technological capabilities. Currently, a new wave of technological innovation centered on AI is emerging. However, its impact remains heavily dependent on existing employment practices and corporate behavior models, making a short-term shift in direction unlikely. In the medium-to-long term, however, societal will and collective action may create an opportunity to rebuild a virtuous cycle. This paper proposes action guidelines for companies, workers, and the government, and argues that realizing true prosperity from technological progress requires reassessing existing power structures and actively choosing new pathways as a society.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bluespec Language Server: Adapting Rust Analyzer for Bluespec SystemVerilog</title>
<link href="https://hdl.handle.net/1721.1/162541" rel="alternate"/>
<author>
<name>Chan, Martin</name>
</author>
<id>https://hdl.handle.net/1721.1/162541</id>
<updated>2025-08-28T03:08:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Bluespec Language Server: Adapting Rust Analyzer for Bluespec SystemVerilog
Chan, Martin
The Language Server Protocol (LSP) and popularity of VS Code have facilitated the current ubiquity of smart code editing features like hover or goto-definition. These features are powered by language servers, which are programs that perform compiler-like functions at keystroke latency on potentially incomplete code. Mainstream languages like Rust or Python have the large userbases to motivate the creation of bespoke language servers like Rust Analyzer or Pylance. However, smaller languages like Bluespec SystemVerilog, used in computer architecture classes at MIT, often need to make do without a language server. As students come to expect smart code editing features, they may miss the convenience when working with languages like Bluespec. In this thesis, we present a Bluespec Language Server forked from Rust Analyzer. This involved adapting the Rust Analyzer parser, HIR, and other internals to work for Bluespec SystemVerilog. The resulting artifact supports the full suite of typical smart editing features for classroom-grade Bluespec projects and continues to mostly work for industrial-grade projects. We discuss the many changes and challenges required to adapt this language server to work for a different language than it was designed for. Further, to address the current gap in the literature covering language server implementation, we include thorough discussion of the overall system architecture and several important subsystems with significant overlap with Rust Analyzer's internals. Finally, we conclude with a discussion of current limitations of our language server and directions for future work.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Microelectromechanical-Cantilever Hydrogen Sensor with Palladium-Driven Bending and Piezoresistive Readout</title>
<link href="https://hdl.handle.net/1721.1/162540" rel="alternate"/>
<author>
<name>Andrade, Marco A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162540</id>
<updated>2025-08-28T03:07:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">A Microelectromechanical-Cantilever Hydrogen Sensor with Palladium-Driven Bending and Piezoresistive Readout
Andrade, Marco A.
Hydrogen gas (H₂) is considered a promising source of environmentally friendly and sustainable energy, with benefits for global decarbonization. However, given the flammable and explosive nature of H₂, highly sensitive and selective detection systems with fast response are needed to enable leakage monitoring and ensure safe deployment and use. To address this need, we propose a microelectromechanical (MEMS) platform for H₂ sensing with the aim of achieving sub-1-ppm sensitivity. Our platform employs a MEMS structure with H₂-responsive palladium (Pd) features. Once exposed to H₂, the Pd lattice expands as hydrogen diffuses into it. This results in the structural deflection of a mechanically mobile feature, in particular a cantilever. This deflection is measured using piezoresistors embedded in the cantilever by a spin-on glass doping process. Piezoresistors enable rapid, high-accuracy detection and quantification of H₂, as shown in this thesis through a combination of modeling, sensor development, fabrication, and basic experimental characterization. In this thesis, we have successfully developed a fabrication plan; demonstrated the two key aspects of our fabrication, namely beam release and piezoresistor fabrication; shown beam bending driven by absorption of hydrogen by palladium; and shown that our piezoresistors respond to beam bending. Our physical results match theoretical predictions for a beam of size 100 µm by 20 µm and a resistor with resistance 115 kΩ fabricated on SOI chips. This beam could be used to detect H₂ below 1 ppm.
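For scale (the gauge factor and strain below are assumed; only the resistance comes from the text above), the piezoresistive readout converts bending strain into a resistance change via delta_R/R = GF * strain:

    GF = 100.0        # assumed gauge factor for doped silicon piezoresistors
    strain = 1e-6     # assumed surface strain from Pd expansion at trace H2
    R = 115e3         # resistor value reported above, ohms
    delta_R = GF * strain * R
    print(delta_R, "ohm change")    # 11.5 ohms, readily measurable in a bridge circuit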
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Emergent Dynamics in AI-Enhanced Entrepreneurship&#13;
Education: A System of Systems Study of the Orbit Tool</title>
<link href="https://hdl.handle.net/1721.1/162539" rel="alternate"/>
<author>
<name>Dale, William</name>
</author>
<id>https://hdl.handle.net/1721.1/162539</id>
<updated>2025-08-28T03:07:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Emergent Dynamics in AI-Enhanced Entrepreneurship&#13;
Education: A System of Systems Study of the Orbit Tool
Dale, William
The convergence of artificial intelligence and entrepreneurship education has opened a novel frontier in pedagogical innovation. The deployment of Orbit—a bespoke generative AI tool—within MIT’s 15.390 entrepreneurship course, which follows the structured Disciplined Entrepreneurship framework, is examined through a System-of-Systems perspective. This approach reveals how the tool functions not as an isolated feature but as an integrated element within a multifaceted educational ecosystem. Drawing on quantitative usage data across three consecutive academic semesters (Spring 2024-Spring 2025) complemented by course evaluation metrics, our mixed-methods approach reveals the multidimensional impact of AI-enhanced entrepreneurial education. The findings demonstrate that Orbit, particularly in its refined v2 iteration, functions as a powerful External Enabler that significantly reduces both the opacity and agency-intensity inherent in complex entrepreneurial frameworks. This enabling function manifested through measurable increases in student adoption, idea generation, and iterative engagement with critical DE steps. Beyond efficiency gains, we identify a substantive Transformation of Learning where students developed distinctly different engagement patterns—characterized by increased iteration, greater willingness to tackle complex entrepreneurial challenges, and enhanced overall course experiences. This transformation appears to deepen rather than merely accelerate learning, as evidenced by improved course evaluations alongside increased time investment in coursework. However, our analysis reveals that this transformation operates within the constraints of what we term AI’s "Jagged Frontier"—an uneven landscape of capabilities leading to differentiated impacts across DE tasks and student segments. The evolution from Orbit v1 to v2 underscores how thoughtful system design and curriculum integration critically influence the effectiveness of educational AI tools. This research contributes a nuanced understanding of how specialized AI tools can enhance entrepreneurship education while highlighting that their benefits depend on deliberate design choices, strategic pedagogical integration, and recognition of current technological limitations. The SoS framework proves instrumental in capturing these emergent dynamics, offering valuable insights for educational technologists, entrepreneurship educators, and institutions navigating the AI-enhanced learning landscape.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Design and Evaluation of Spectrum Management Architectures for Co-Primary Sharing in the 37 GHz Band</title>
<link href="https://hdl.handle.net/1721.1/162538" rel="alternate"/>
<author>
<name>Alsehali, Mohammed S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162538</id>
<updated>2025-08-28T03:07:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">System Design and Evaluation of Spectrum Management Architectures for Co-Primary Sharing in the 37 GHz Band
Alsehali, Mohammed S.
This thesis presents a system design framework for evaluating spectrum management architectures enabling co-primary access in the 37 GHz band. Motivated by increasing demand for mid-band and mmWave spectrum, and recent policy directions for federal-commercial sharing, this research investigates the trade-offs between utilization efficiency, coordination overhead, and interference performance across thousands of feasible spectrum management systems.&#13;
&#13;
Using a morphological matrix, eight key architectural decisions were defined, including coordination topology, licensing mechanism, frequency planning, sensing mode, and access priority. A parametric event-driven simulation model was developed in Python to evaluate 2,808 valid architectures under low, medium, and high spectrum demand scenarios. The performance metrics, Spectrum Utilization Efficiency (SUE), Coordination Index (Cindex), and Blocking Probability (BP), were used to generate multi-dimensional tradespaces and identify Pareto-optimal solutions.&#13;
&#13;
Results indicate that semi-dynamic spectrum management systems with decentralized or hybrid coordination topologies consistently dominate the Pareto frontier across all demand levels. Compared to fully dynamic systems, semi-dynamic designs achieve 80–90% of the utilization efficiency at less than 50% of the coordination cost.&#13;
&#13;
The results validate key hypotheses about performance trade-offs and offer actionable insights for regulators and system designers. This thesis recommends semi-dynamic, co-primary frameworks for initial 37 GHz implementation and proposes future research directions, including agent-based modeling, economic behavior integration, and more accurate physics modeling.
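The morphological enumeration itself is mechanical; a toy sketch (invented decision names and options, not the eight actual decisions) is:

    # Enumerate all combinations of a small morphological matrix.
    from itertools import product

    decisions = {
        "topology": ["centralized", "decentralized", "hybrid"],
        "licensing": ["exclusive", "co-primary", "opportunistic"],
        "sensing": ["none", "periodic", "continuous"],
    }
    architectures = [dict(zip(decisions, combo)) for combo in product(*decisions.values())]
    print(len(architectures))    # 27 for this toy matrix; 2,808 in the study

Each enumerated architecture is then scored by the event-driven simulator on SUE, Cindex, and BP.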
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting Innovation in Healthcare: A Case Study Review in Digital Pathology</title>
<link href="https://hdl.handle.net/1721.1/162537" rel="alternate"/>
<author>
<name>Jezewska, Martyna</name>
</author>
<id>https://hdl.handle.net/1721.1/162537</id>
<updated>2025-08-28T03:07:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Architecting Innovation in Healthcare: A Case Study Review in Digital Pathology
Jezewska, Martyna
The Mayo Clinic, a renowned non-profit organization, has long been at the forefront of healthcare innovation. This thesis explores the implementation of digital pathology within the Mayo Clinic, focusing on its potential to enhance diagnostic accuracy, increase efficiency, enable remote collaboration, and ultimately improve patient care. By leveraging the Architecting Innovative Enterprise Strategy (ARIES) framework, this research provides a comprehensive analysis of the socio-technical aspects of digital pathology implementation. The study begins with a literature review on innovation and its application in healthcare,&#13;
followed by an in-depth case study of the Mayo Clinic's journey with digital pathology. Key findings highlight the importance of organizational design, stakeholder engagement, and continuous improvement in successfully integrating digital pathology into existing healthcare systems. The research concludes with recommendations for future innovations and insights on how healthcare institutions can better prepare for and adapt to disruptive technologies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of Renewable Energy Siting Decisions&#13;
through Vertical Axis Wind Turbine Integration</title>
<link href="https://hdl.handle.net/1721.1/162536" rel="alternate"/>
<author>
<name>Suresh, Nithyaharini</name>
</author>
<id>https://hdl.handle.net/1721.1/162536</id>
<updated>2025-08-28T03:07:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Optimization of Renewable Energy Siting Decisions&#13;
through Vertical Axis Wind Turbine Integration
Suresh, Nithyaharini
The rapid increase in wind energy deployment is critical to achieving net-zero carbon emissions in the United States. However, conventional Horizontal Axis Wind Turbines (HAWTs) face deployment constraints due to their large spatial requirements, stemming both from their size and from the turbine spacing needed to accommodate wake interference. Their large footprint makes them impractical to deploy in densely populated and restricted areas, such as military zones and urban regions. This results in the underutilization of available wind resources, limiting wind energy’s full potential. To overcome these constraints, Vertical Axis Wind Turbines (VAWTs) offer a spatially compact alternative, enabling deployment in space-constrained areas. This study investigates the feasibility of VAWTs as a complementary wind technology by integrating them into a renewable energy siting optimization framework. The framework considers HAWTs, Solar Photovoltaics (PV), battery storage, and other resources within the New England region, assuming a 100% decarbonized power system, and minimizes total system costs to assess VAWTs under varying capital expenditures and land-use restrictions. A novel feature of this study is the use of a land availability cutoff and land restriction cases, introduced to realistically mimic the real-world land-use constraints that influence wind turbine siting. The land availability cutoff defines the minimum usable area within a parcel for it to be considered for HAWT and solar PV deployment, given their larger spatial footprint. Parcels below this cutoff are excluded from those technologies and considered only for VAWTs, representing constrained regions. This methodology offers a more granular treatment of spatial constraints for renewable energy siting and allows for a realistic assessment of VAWT feasibility. Results indicate that, at current commercial costs, VAWTs are less competitive with HAWTs and solar PV, primarily due to the technology’s early stage of development and its significantly higher CAPEX, approximately ten times that of HAWTs. Even with hypothetical utility-scale costs, where VAWT costs fall within the range of $1,300–$1,500/kW, the model still preferentially selects HAWTs due to their higher capacity factors. However, when the model applies land-use restriction cases that treat VAWTs differently from HAWTs and solar PV, VAWTs become significantly more viable: their share of the energy mix increases by 2.61% to 10.32% under favorable conditions. At high levels of per-parcel land availability, specifically when more than 70% of the land identified as technically suitable remains available for any deployment, high-quality sites with favorable wind resources and high capacity factors continue to support HAWTs as the dominant technology, given their lower Levelized Cost of Energy (LCOE). However, when the land availability cutoff rises above 70%, reducing siting opportunities for HAWTs and solar PV, reliance shifts toward VAWTs, amplifying the impact of their higher LCOE on overall system costs and making cost differentials between technologies more critical.
These findings emphasize that while CAPEX reductions are critical to scaling VAWTs and improving their competitiveness, land-use policies and spatial constraints are the primary determinants of deployment feasibility. The study highlights the need for targeted policy intervention, flexible siting policies, and continued research to optimize VAWT deployment strategies, ultimately enhancing wind energy integration in land-constrained regions within New England and maximizing wind resource potential.
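For intuition on the LCOE comparisons above, a simple levelized-cost sketch (all inputs assumed, not the study’s values):

    # LCOE = (annualized CAPEX + annual operating cost) / annual energy.
    def lcoe(capex_per_kw, opex_per_kw_yr, capacity_factor, rate=0.07, years=25):
        crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)   # capital recovery factor
        annual_kwh_per_kw = capacity_factor * 8760
        return (capex_per_kw * crf + opex_per_kw_yr) / annual_kwh_per_kw

    print(lcoe(1400, 40, 0.38))   # a HAWT-like case, $/kWh
    print(lcoe(1400, 40, 0.22))   # a lower-capacity-factor VAWT-like case

Even at equal CAPEX, the lower capacity factor raises the VAWT-like LCOE roughly in inverse proportion, which is why the model keeps preferring HAWTs wherever siting allows.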
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Targeted Codon Optimization and Translation with Deep Learning</title>
<link href="https://hdl.handle.net/1721.1/162535" rel="alternate"/>
<author>
<name>Chemparathy, Anugrah</name>
</author>
<id>https://hdl.handle.net/1721.1/162535</id>
<updated>2025-08-28T03:07:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computational Targeted Codon Optimization and Translation with Deep Learning
Chemparathy, Anugrah
Codon optimization—the task of recoding a protein’s underlying DNA sequence to maximize expression in a target organism—is a complicated biological optimization problem. Each gene brings a dynamic combination of local and long-range dependencies along with globally imposed constraints specific to the organism. While most existing tools for systematic codon optimization are restricted to optimizing under the constraint of a fixed amino acid sequence, recent architectural advancements in deep learning have made it possible to introduce partial modifications to the amino acid sequence without affecting protein function during the codon optimization process. Such approaches greatly increase the search space of feasible sequences, potentially opening up pathways to previously unconsidered DNA sequences with significantly greater expression rates. In this thesis, we seek to understand and improve the inverse-folding codon optimization model CodonMPNN, the behavior and performance of which have not yet been fully evaluated. We present a detailed empirical evaluation of CodonMPNN, characterizing its performance across reconstruction and translation tasks and demonstrating that it captures higher-order codon usage patterns. We provide evidence that CodonMPNN’s training has successfully captured nontrivial aspects of the codon distribution for 1000 unique organisms, and we better characterize the tasks that CodonMPNN’s non-synonymous nature may be suited to solve. Then, through a combination of improved pretraining and a new inference-time evolutionary algorithm, we modestly improve the base performance of CodonMPNN over its original publication. Together, these contributions yield a measurable improvement in CodonMPNN’s practical performance and provide actionable guidance for its application in constrained codon design. More broadly, this work highlights the importance of application-aware evaluation when deploying machine learning models in synthetic biology and motivates the design of future architectures that are better aligned with real-world usage constraints.
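As a hedged sketch of what an inference-time evolutionary search over synonymous codon choices could look like: the scorer below is a GC-content stand-in (a real run would query CodonMPNN or an expression model), and the codon table is abbreviated; none of this is the thesis's actual algorithm.

import random

SYNONYMS = {"L": ["CTG", "CTC", "TTA"], "S": ["TCT", "AGC"], "K": ["AAA", "AAG"]}

def random_coding(protein):
    return [random.choice(SYNONYMS[aa]) for aa in protein]

def mutate(codons, protein):
    """Swap one position for a synonymous codon, preserving the protein."""
    i = random.randrange(len(codons))
    child = list(codons)
    child[i] = random.choice(SYNONYMS[protein[i]])
    return child

def score(codons):
    seq = "".join(codons)                   # stand-in objective: GC fraction
    return sum(c in "GC" for c in seq) / len(seq)

def evolve(protein, generations=200, population=16):
    pool = [random_coding(protein) for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=score, reverse=True)
        survivors = pool[: population // 2]             # truncation selection
        pool = survivors + [mutate(random.choice(survivors), protein)
                            for _ in survivors]
    return max(pool, key=score)

print("".join(evolve("LSKL")))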
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The First Signs of Vision</title>
<link href="https://hdl.handle.net/1721.1/162534" rel="alternate"/>
<author>
<name>Chang, Cathy</name>
</author>
<id>https://hdl.handle.net/1721.1/162534</id>
<updated>2025-08-28T03:07:26Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The First Signs of Vision
Chang, Cathy
There has been a great deal of research on the evolution of eyes through the lens of biology; however, there has been a distinct lack of research on simulating what animals saw as their eyes evolved. This project aims to create interactive simulations of the evolution of animal vision from the Cambrian Explosion to the present day through the use of extended reality (XR) environments. Our goal is to communicate and educate about the evolutionary timescale, helping our audience understand 1) the history of vision and intelligence and 2) how vision came to be and why it is the way it is. In addition, we want to bridge the gap between technology and vision research to help people better understand and visualize this evolutionary process. We have also collaborated with the Museum of Science and the MIT Museum to display this work at events at their venues.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Steerability of Generative Models: Towards Bicycles for the Mind</title>
<link href="https://hdl.handle.net/1721.1/162533" rel="alternate"/>
<author>
<name>Bentley, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/162533</id>
<updated>2025-08-28T03:07:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Steerability of Generative Models: Towards Bicycles&#13;
for the Mind
Bentley, Sarah
Generative models have rapidly advanced in their ability to produce diverse, high-quality outputs. Yet their practical utility often falls short: users frequently struggle to guide models toward desired outputs, even when the model is capable of producing those outputs. This thesis argues that unlocking the full potential of generative AI requires not only improving what models can produce (producibility), but also how effectively users can guide them toward producible outputs (steerability). In short, how can we make the entire producible sets of generative models easily accessible to humans? Our contributions are fourfold. First, we formally define steerability and introduce a framework for evaluating it independently of producibility. Second, we instantiate this framework through benchmarks on the steerability of text-to-image and language models. We find that not only is steerability poor, but steering doesn’t reliably improve with more attempts. Third, we propose a framework for designing and optimizing steering mechanisms – tools that help users articulate and achieve their goals with models – and introduce Reinforcement Learning for Human Steering (RLHS) to systematically optimize these mechanisms. Finally, we instantiate this framework in a new steering mechanism for image generation that enables users to steer via images rather than text prompts. This mechanism achieves over 2x improvement over traditional text-based prompting on our benchmark. Our mathematical framework provides a generalizable path forward for measuring and improving the steerability of generative models, while our implementations of that framework empirically demonstrate its utility and viability. Overall, we define a new axis – steerability – upon which we can vastly improve generative models not only as tools for automation, but as bicycles for the mind.
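One hypothetical way to operationalize the steerability measurements described above is a best-of-k distance to a goal output; the toy Python sketch below uses a 1-D stand-in for output space and is ours, not the thesis's benchmark.

def best_of_k(goal, outputs, d, k):
    """Smallest distance to the goal among the first k steering attempts."""
    return min(d(goal, out) for out in outputs[:k])

goal = 0.0                                  # the user's target output
outputs = [0.9, 0.7, 0.72, 0.71]            # outputs from successive attempts
d = lambda a, b: abs(a - b)                 # placeholder distance metric
print([best_of_k(goal, outputs, d, k) for k in (1, 2, 3, 4)])
# -> [0.9, 0.7, 0.7, 0.7]: extra attempts stop helping, i.e. poor steerability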
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safety Analysis and Design Improvement for Semi-Automatic Train Operation (STO) in High-Speed Rail Using STPA</title>
<link href="https://hdl.handle.net/1721.1/162531" rel="alternate"/>
<author>
<name>Suzuki, Wataru</name>
</author>
<id>https://hdl.handle.net/1721.1/162531</id>
<updated>2025-08-28T03:07:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Safety Analysis and Design Improvement for Semi-Automatic Train Operation (STO) in High-Speed Rail Using STPA
Suzuki, Wataru
In Japan, the Tokaido Shinkansen, a major high-speed rail corridor, plans to introduce Grade of Automation 2 (GoA2) through Semi-Automatic Train Operation (STO). While partial automation promises advantages such as reduced driver workload and enhanced efficiency, it also creates new risks due to increasingly complex interactions among automated control systems, human operators, and physical infrastructure.&#13;
This thesis aims to systematically identify and address potential hazards arising from STO in high-speed rail. By using the Tokaido Shinkansen’s announced plan as a model case, the research seeks to uncover scenarios in which normal, non-failed system behaviors can still lead to unsafe outcomes, and to propose design solutions that mitigate those risks early in development. To achieve this, the study applies Systems-Theoretic Process Analysis (STPA). Rather than isolating hardware and function failures, STPA models the entire system as a hierarchical control structure, examining each controller’s possible unsafe actions and their feedback pathways. &#13;
The analysis reveals hazard scenarios that traditional failure-based methods might overlook. Examples include cases where a passenger is not detected between the train and platform doors at departure, or where verbal and signal instructions conflict and delay the driver’s response. These scenarios can happen even without any component failure. Drawing on these insights, the thesis recommends a variety of design improvements, such as adding monitoring functions for subsystems, modifying instruction interfaces, and strengthening the software logic of automation systems.&#13;
These findings demonstrate the value of conducting a holistic safety analysis using STPA at the conceptual design stage, before late-stage changes become more expensive. Moreover, this research provides a comprehensive, system-level railway hazard analysis, and the proposed measures can be broadly applicable to high-speed rail systems with automation.
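For readers unfamiliar with STPA, the enumeration step can be sketched as follows; the four UCA types are the standard ones from the STPA literature, while the controller and control actions below are illustrative, not the thesis's full control structure.

UCA_TYPES = [
    "not provided when needed",
    "provided when it causes a hazard",
    "provided too early, too late, or out of order",
    "stopped too soon or applied too long",
]

def enumerate_ucas(controller, control_actions):
    """List every (action, unsafe-type) pair to review against the hazards."""
    return [
        f"{controller}: '{action}' {uca_type}"
        for action in control_actions
        for uca_type in UCA_TYPES
    ]

for uca in enumerate_ucas("ATO system", ["close platform doors", "start departure"]):
    print(uca)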
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biomechanical Golf Swing Analysis using Markerless Three-Dimensional Skeletal Tracking through Truncation-Robust Heatmaps</title>
<link href="https://hdl.handle.net/1721.1/162530" rel="alternate"/>
<author>
<name>Taylor, Benjamin F.</name>
</author>
<id>https://hdl.handle.net/1721.1/162530</id>
<updated>2025-08-28T03:07:44Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Biomechanical Golf Swing Analysis using Markerless Three-Dimensional Skeletal Tracking through Truncation-Robust Heatmaps
Taylor, Benjamin F.
The efficient generation and transfer of energy in the golf swing has long been a subject of biomechanical interest, with a particular focus on the concept of the kinematic sequence, which is the coordinated segmental rotation of the pelvis, torso, arms, and club.  While previous studies have modeled aspects of this sequence using high-end laboratory setups or proprietary systems, few have provided open, quantifiable, and time-resolved measurements of angular kinematics across the full swing cycle.  This thesis seeks to address this gap by implementing a markerless temporal skeletal tracking approach built on the open-source MeTRAbs computer vision framework to model and measure joint angles and angular velocities throughout the golf swing.  Using two-dimensional video footage of right-handed golfers performing driver swings, the MeTRAbs pose estimation model and supplemental cross-frame temporal motion sequencing code were used to reconstruct three-dimensional joint trajectories and compute rotational kinematics of key body segments.&#13;
This study demonstrates the feasibility of using markerless pose estimation to extract golf swing signatures and angular velocity profiles without requiring expensive or inaccessible motion capture equipment. Preliminary analysis suggests that joint coordination patterns and temporal characteristics of body segment angular velocities may reveal quantifiable insights into the kinematic sequence, laying the groundwork for further research and instructional applications. Ultimately, this thesis contributes a replicable and cost-effective framework for analyzing golf swing biomechanics using open-source tools and computer vision.
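A minimal, hypothetical sketch (ours, not the thesis's pipeline) of the kind of rotational-kinematics computation described above, using finite differences over per-frame 3-D joint positions, e.g. the left/right hip points defining the pelvis line; the frame rate and toy trajectory are assumptions.

import numpy as np

def segment_angle_xy(p_left, p_right):
    """Rotation of the segment about the vertical axis, in radians."""
    v = p_right - p_left
    return np.arctan2(v[1], v[0])          # heading of the joint line in-plane

def angular_velocity(points_left, points_right, fps=240.0):
    angles = np.array([segment_angle_xy(l, r)
                       for l, r in zip(points_left, points_right)])
    angles = np.unwrap(angles)             # remove +/- pi wraparound mid-swing
    return np.gradient(angles) * fps       # finite differences -> rad/s

# Toy trajectory: a "pelvis" rotating at a constant 0.1 rad per frame.
left = np.array([[np.cos(t), np.sin(t), 0.0] for t in np.linspace(0, 0.4, 5)])
right = -left
print(angular_velocity(left, right))       # constant ~24 rad/s at 240 fps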
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Opportunities in Advanced Wireless Integrated Circuits</title>
<link href="https://hdl.handle.net/1721.1/162529" rel="alternate"/>
<author>
<name>Fareed, Mo</name>
</author>
<id>https://hdl.handle.net/1721.1/162529</id>
<updated>2025-08-28T03:07:37Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Opportunities in Advanced Wireless Integrated Circuits
Fareed, Mo
The continued evolution of wireless communications, novel compact radars, and power electronics has driven demand for high-performance semiconductor materials capable of operating at higher power densities and faster switching speeds with improved efficiency. Gallium Nitride (GaN) has emerged as a leading candidate due to its superior electrical properties compared to traditional silicon (Si), silicon carbide (SiC), and gallium arsenide (GaAs). GaN’s high power density, thermal stability, and high-frequency operation make it an ideal candidate for applications in 5G/6G infrastructure, satellite communications, defense radar, electric vehicles, and power electronics. However, widespread commercial adoption of GaN faces significant barriers, including high production costs, supply chain constraints, and integration challenges within existing silicon-based fabrication processes.&#13;
&#13;
This thesis explores the opportunities and challenges associated with GaN-based integrated circuits (ICs) in the context of advanced wireless systems by utilizing Dr. Eugene Fitzgerald’s innovation framework – Technology, Markets, and Implementation (TMI). A comparative analysis of monolithic vs. board-level GaN integration is conducted. The research highlights that scaling GaN wafer production to approximately 10,000 wafers per year (200 mm wafers) is necessary to achieve cost-effective monolithic integration, yet current defense-driven demand is insufficient to drive economies of scale. Instead, commercial applications—such as telecommunications, power electronics, and consumer RF devices—are the target markets that can take advantage of monolithic integration at high volume. &#13;
&#13;
The findings indicate that while defense applications have led non-monolithic GaN adoption (that is, discrete GaN transistor adoption), they cannot sustain large-scale production alone due to small volume. The semiconductor industry must navigate manufacturing bottlenecks, cost reduction strategies, and foundry availability to ensure GaN’s transition from a niche, high-cost technology to a commercially viable solution. By mapping the TMI intersections and addressing economic and technical barriers, this thesis provides strategic insights into how GaN technology can achieve scalable production, unlock new market opportunities, and shape the future of advanced wireless integrated circuits.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stock-constrained design of pseudo-standard walls from studs off-cuts</title>
<link href="https://hdl.handle.net/1721.1/162528" rel="alternate"/>
<author>
<name>Fontaine, Anouk</name>
</author>
<id>https://hdl.handle.net/1721.1/162528</id>
<updated>2025-08-28T03:07:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Stock-constrained design of pseudo-standard walls&#13;
from studs off-cuts
Fontaine, Anouk
The AEC industry is responsible for 40% of global greenhouse gas emissions and 38% of EU waste, much of which is landfilled. This waste represents an immense stock of resources that could be used in place of new materials. Many ongoing research projects have explored ways of reusing irregular components in construction, from whole steel trusses to single elements, triangulated subparts, or even irregular wood offcuts, in order to mitigate intensive recycling and deconstruction processes. However, this research has focused on general methodologies or one-off prototypes. This paper introduces a systematic approach to repurposing discarded steel and timber studs - components that account for up to 10% of waste on local sites (Parigi, 2021) - into modular, steel-frame, load-bearing walls, providing a way to build new structures for the growing global demand for housing and infrastructure while minimizing new emissions through the use of waste elements. Through a top-down, stock-constrained design approach, geometry optimization via a matching algorithm is combined with topology optimization to generate and evaluate configurations that minimize new emissions and maximize structural efficiency. A human-scale prototype built from the available inventory further assesses costs, architectural and structural flexibility, construction feasibility, robotic efficiency, and embodied emissions, offering a promising pathway for sustainable construction through effective waste reuse. This approach addresses the existing waste stock alongside the growing demand for infrastructure, and minimizes embodied emissions through structurally efficient resource use, pushing forward a systematic implementation of reuse in common construction practices.
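As a hedged illustration of the matching step, a greedy assignment of off-cuts to required stud lengths might look like the following Python sketch; the lengths (mm) are illustrative, and the thesis's actual algorithm, which is coupled with topology optimization, may differ.

def match_offcuts(required_lengths, stock):
    stock = sorted(stock)                         # available off-cuts
    assignment, new_material = {}, []
    for need in sorted(required_lengths, reverse=True):
        fit = next((s for s in stock if s >= need), None)
        if fit is None:
            new_material.append(need)             # nothing fits: order new stock
        else:
            stock.remove(fit)                     # consume the tightest fit
            assignment[need] = fit
    return assignment, new_material

pairs, new = match_offcuts([2400, 1800, 900], [2500, 1000, 950])
print(pairs, new)                                 # {2400: 2500, 900: 950} [1800]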
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ensuring Security of Supply while Decarbonizing Islanded Heavy Industrial Electricity Systems</title>
<link href="https://hdl.handle.net/1721.1/162527" rel="alternate"/>
<author>
<name>Kumar, Prashant</name>
</author>
<id>https://hdl.handle.net/1721.1/162527</id>
<updated>2025-08-28T03:07:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ensuring Security of Supply while Decarbonizing Islanded Heavy Industrial Electricity Systems
Kumar, Prashant
Electricity is set to become the central pillar of both energy production and consumption in the global effort to achieve net-zero emissions. As key sectors—transportation, chemicals, and heavy industry—seek to decarbonize by electrifying their operations, industrialized nations face mounting strain on their electricity systems. This strain is further compounded by the rising demand for electricity driven by data centers and artificial intelligence applications, heralding a future of potentially unrelenting load growth.&#13;
In such a context, it becomes not merely prudent but essential to approach decisions regarding investment and operation in the electricity sector with analytical rigor. Advanced capacity expansion models provide the tools for this task. In this thesis, the GenX model is employed to study Taiwan’s electricity system—an islanded, industrially intensive grid—evaluating the evolution of its capacity mix, generation profile, prices, emissions, and overall costs.&#13;
Our findings suggest that a reliable path to decarbonization lies in a considered combination of natural gas-fired generation with carbon capture, utilization, and storage (CCUS), renewable sources such as solar and wind, and energy storage systems. Furthermore, this study finds that the integration of nuclear and geothermal technologies significantly improves the cost-effectiveness of achieving decarbonization targets.&#13;
This thesis also addresses the “missing money” problem endemic to energy-only electricity markets, examining how the introduction of a capacity market influences both investment and operational outcomes. We find that the efficacy of capacity markets is highly sensitive to the design parameters of the demand curve and the capacity credit values of the resources. For islanded systems such as Taiwan’s, a pragmatic approach to ensuring security of supply may involve retaining existing natural gas infrastructure as a strategic reserve, paired with a capacity market design that avoids excessive conservatism, leveraging the absence of policy interactions and competition with neighboring electricity markets, as observed in Europe.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Grid Enhancing Technologies: Optimization and Benefit for Distribution Grids</title>
<link href="https://hdl.handle.net/1721.1/162526" rel="alternate"/>
<author>
<name>Anastos, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/162526</id>
<updated>2025-08-28T03:06:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Grid Enhancing Technologies: Optimization and Benefit for Distribution Grids
Anastos, Daniel
One of the largest existential challenges the US and other countries face is climate change, and perhaps no system is more crucial to combating climate change than the grid. Ever more requirements have been placed on the transmission and distribution grids to play a larger role than they have in the past: consider AI, EVs, residential solar, electrification of heat, decarbonization of buildings, rising energy rates, and aging infrastructure. Improving the grid is a necessity for decarbonization and innovation. However, utilities, usually but not always backed by state regulation, use traditional techniques to expand grid capacity and increase resiliency rather than investing in modern grid technology that would more quickly allow for future innovation and decarbonization. These technologies, or techniques, are broadly called grid enhancing technologies, or GETs. There are rational reasons why GETs are not used more often. Utilities are, correctly, highly risk-averse because they must safely and adequately supply power directly to people. Utilizing new technologies, even proven ones, can be a risk that utilities are unwilling, or not allowed, to take given their role and responsibility. But these risks are largely avoided with the technologies discussed in this paper, and one could argue that these technologies could not only make the grid cheaper to expand but also make it more resilient. This paper explores how a particular grid section can increase its solar penetration by avoiding traditional hosting capacity limitations, using not cutting-edge GETs but GETs that are largely tested and proven. Traditionally, at some limit, the utility will stop allowing solar in an area due to various grid constraints. This paper explores how a utility may resolve these constraints with new methods that avoid large grid-expansion CAPEX and utilize new technologies or techniques. Among the techniques explored here are commercial-scale energy storage support at substations, PV curtailment, and volt-var optimization control.
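As a hypothetical illustration of the volt-var optimization technique mentioned above, a smart-inverter volt-var curve can be sketched as follows; the deadband, slope, and limits are illustrative assumptions, not a utility's actual settings.

def volt_var_setpoint(v_pu, deadband=0.01, slope=10.0, q_max=0.44):
    """Reactive power command (p.u. of rating) as a function of voltage (p.u.)."""
    if abs(v_pu - 1.0) <= deadband:
        return 0.0                                   # inside deadband: no action
    edge = deadband if v_pu > 1.0 else -deadband
    q = -slope * (v_pu - 1.0 - edge)                 # high voltage -> absorb vars
    return max(-q_max, min(q_max, q))                # saturate at inverter limit

for v in (0.95, 0.99, 1.00, 1.02, 1.05):
    print(v, round(volt_var_setpoint(v), 3))         # inject below, absorb above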
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geothermal Energy Planning Considerations for Military Operational Energy Demands</title>
<link href="https://hdl.handle.net/1721.1/162525" rel="alternate"/>
<author>
<name>Seckfort, Cody L.</name>
</author>
<id>https://hdl.handle.net/1721.1/162525</id>
<updated>2025-08-28T03:06:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Geothermal Energy Planning Considerations for Military Operational Energy Demands
Seckfort, Cody L.
Contingency locations are temporary military bases that are often established in austere or contested environments. These locations rely heavily on diesel fuel for electrical power, which creates logistical vulnerabilities and increases the risk to personnel conducting fuel resupply missions. While the Department of Defense has made progress in adopting renewable energy technologies, many of these systems remain too large, inefficient, or underdeveloped for widespread use in operational environments. Geothermal energy presents a promising but underexplored alternative for generating reliable, on-site electrical power without the need for continuous fuel resupply.&#13;
This thesis evaluates the feasibility of geothermal energy systems for military operational energy demands and introduces a modified power planning process that incorporates geothermal considerations. The research focuses on closed-loop geothermal systems, utilizing an example system called the “Mil-Loop”, which is designed to minimize the system surface footprint and support remote installations. The planning process integrates existing geothermal tools, including GEOMAP/TEST for resource estimation and GEOPHIRES for system modeling and performance analysis. The Mil-Loop System Model incorporates each step of the planning process to produce a site-specific power system profile. &#13;
A case study using site-specific data from Bagram Airfield was used to assess the performance of a hybrid geothermal-diesel power system. The results suggest that geothermal system integration could reduce diesel fuel consumption by up to 42.9 percent over a 40-year site lifecycle. A sensitivity analysis indicates that geothermal system power output, drilling time, and installation costs are the most critical parameters affecting system viability. Advances in drilling technology and heat extraction have the potential to reduce installation costs and timelines, making geothermal more competitive with diesel generation. This thesis also identifies a gap in military energy planning resources, specifically the lack of frameworks that include geothermal options for operational environments. It recommends that the DoD begin integrating geothermal technologies into its energy planning strategies and develop modular systems that can be deployed in contested or resource-constrained areas. &#13;
While this research is limited by simplified power demand modeling and generalized tool assumptions, it offers a practical framework for evaluating geothermal viability in future defense applications. This study demonstrates that geothermal energy systems, particularly closed-loop configurations, can serve as a viable and strategically beneficial power source for military operations. When paired with targeted technology development and thoughtful integration into planning processes, geothermal systems can reduce logistical burdens, improve energy resilience, and enhance mission success in operational environments.
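A hedged back-of-the-envelope of the hybrid geothermal-diesel trade described above might look like the following; all inputs are placeholders, not the Bagram case-study data or the Mil-Loop System Model.

def diesel_displaced_liters(load_kw, geo_kw, sfc_l_per_kwh=0.30, years=40):
    served_kw = min(load_kw, geo_kw)       # baseload served by geothermal
    hours = years * 8760                   # lifecycle operating hours
    return served_kw * hours * sfc_l_per_kwh

liters = diesel_displaced_liters(load_kw=2000, geo_kw=900)
share = min(2000, 900) / 2000
print(f"{liters:.2e} L of diesel avoided, {share:.0%} of electrical energy")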
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Levelized Cost of Fuel (LCOF) studies for microreactors using TRISO fuel in hydride and beryllium-based composite moderators in open and closed fuel cycles</title>
<link href="https://hdl.handle.net/1721.1/162524" rel="alternate"/>
<author>
<name>Balla, Sai Prasad</name>
</author>
<id>https://hdl.handle.net/1721.1/162524</id>
<updated>2025-08-28T03:07:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Levelized Cost of Fuel (LCOF) studies for microreactors using TRISO&#13;
fuel in hydride and beryllium-based composite moderators in open&#13;
and closed fuel cycles
Balla, Sai Prasad
This study provides a comprehensive techno-economic evaluation of a specific class of nuclear batteries—high-temperature gas-cooled 10 MW_th microreactors (HTGRs) with TRISO fuel in prismatic- and pebble-bed cores—using four composite moderator concepts (MgO–Be, MgO–BeO, MgO–YH, MgO–ZrH). These options are compared against a prismatic graphite benchmark, under both once-through and continuous-recycle fuel cycles.&#13;
&#13;
In once-through prismatic systems, hydride-based moderators can reduce overall fuel-cycle costs by up to about 20% relative to graphite, whereas beryllium-based moderators may remain 40–50% costlier due to higher raw material expenses. Shifting from prismatic blocks to pebble beds decreases moderator usage and increases burnup, thus making advanced moderator options more competitive. &#13;
&#13;
Adopting a continuous-recycle strategy replaces enrichment with reprocessing and can further lower fuel-cycle costs by roughly 30%. Coupling a sodium-cooled fast reactor (SFR) to supply transuranics further reduces the cost: SFR driver fabrication and reprocessing can account for the bulk of total costs, rendering microreactor-level variations comparatively minor. Meanwhile, pebble-bed designs promise ultra-high burnups and extended residence times, which could yield significant economic gains, contingent on demonstrated long-term TRISO fuel integrity.&#13;
&#13;
Waste handling also factors into the analysis. Deconsolidation—removing the inert moderator before disposal—can shrink spent-fuel volumes by more than 90%, easing repository demands. Continued R&amp;D into advanced additive manufacturing, high-burnup TRISO performance, and streamlined waste management will be crucial for capitalizing on these potential cost advantages.
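For reference, a levelized cost of fuel of the kind compared above is conventionally the ratio of discounted fuel-cycle expenditures to discounted generation; in generic form (our paraphrase, not necessarily the thesis's exact formulation):

\mathrm{LCOF} \;=\; \frac{\sum_{t=0}^{T} C_t \,(1+r)^{-t}}{\sum_{t=0}^{T} E_t \,(1+r)^{-t}}

where C_t collects the fuel-cycle costs incurred in year t (fabrication, enrichment or reprocessing, moderator, and waste handling), E_t is the energy generated in year t, and r is the discount rate.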
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Dexterous Manipulation Enabled by Learning at Scale in Simulation</title>
<link href="https://hdl.handle.net/1721.1/162523" rel="alternate"/>
<author>
<name>Bhatia, Jagdeep Singh</name>
</author>
<id>https://hdl.handle.net/1721.1/162523</id>
<updated>2025-08-28T03:07:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Robust Dexterous Manipulation Enabled by Learning at Scale inSimulation
Bhatia, Jagdeep Singh
Robots with robust bimanual dexterity have the potential to transform industries such as manufacturing and healthcare by performing complex tasks at human-level proficiency. While end-to-end learning methods have shown promise in achieving this goal, scaling these approaches remains challenging. Existing paradigms suffer from the high costs associated with collecting large-scale, high-quality demonstrations on physical systems and face performance saturation due to reliance on offline data. We propose a task-agnostic pipeline that leverages robotics simulation to overcome these limitations. In particular, we introduce DART, a cost-effective, augmented-reality robot teleoperation platform for scalable data collection. We demonstrate through a user study that it enables twice the throughput of existing systems. We also present a learning algorithm that integrates real-world demonstrations with reinforcement learning to surpass performance plateaus. Finally, we design a method that zero-shot transfers policies trained in simulation to real robots using only RGB input. Together, these contributions provide a practical and scalable path toward general-purpose dexterous robot manipulation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Image Registration and Gantry Tracking System of Clytia hemisphaerica</title>
<link href="https://hdl.handle.net/1721.1/162522" rel="alternate"/>
<author>
<name>Bunch, Bradley</name>
</author>
<id>https://hdl.handle.net/1721.1/162522</id>
<updated>2025-08-28T03:08:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Image Registration and Gantry Tracking System of Clytia hemisphaerica
Bunch, Bradley
Understanding nervous system function and evolution requires detailed behavioral analysis of model organisms such as the jellyfish Clytia hemisphaerica. However, its size and rapid, free-swimming nature pose significant tracking challenges. This work presents an XY gantry tracking platform developed to overcome these hurdles and enable high-resolution behavioral monitoring. Separately, to prepare for downstream neural analysis, we developed an automated neuron segmentation pipeline tailored for image registration. Together, the tracking system and the analysis-preparation pipeline provide powerful, distinct tools for high-throughput behavioral quantification and facilitate future studies linking behavior to underlying neural dynamics in Clytia hemisphaerica.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Twin Technology Applied to Automotive Diagnostics</title>
<link href="https://hdl.handle.net/1721.1/162521" rel="alternate"/>
<author>
<name>Mwarage, Jessy Mbagara</name>
</author>
<id>https://hdl.handle.net/1721.1/162521</id>
<updated>2025-08-28T03:07:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Digital Twin Technology Applied to Automotive Diagnostics
Mwarage, Jessy Mbagara
There is currently considerable interest in Digital Twin (DT) technology. Organizations oriented around physical products are increasingly looking for ways to stay ahead of the technological innovation curve so as not to be disrupted by more agile entrants. The promise of a technology like DT is therefore alluring as a means of maintaining a competitive edge. This thesis explores the potential benefits of DT technology alongside the challenges that might be faced in implementing one. To this end, a problem statement is formulated in the field of automotive diagnostics, a key value-addition field for automotive companies seeking to better manage the diagnosis and repair of their automobiles in the field or in the manufacturing environment. The problem is further concretized with a study of user-driven use cases and needs in a real automotive company. From these needs, a set of requirements is formulated to guide the architecture and design of a DT demonstration. The process of architecting and designing the DT is documented, including a deep dive into the modeling approaches considered, the solution space for the architecture, and the detailed design and implementation of a DT demonstration from a selected architectural concept. The DT demonstration is then operated under controlled conditions to showcase some of its capabilities. Finally, a reflection on the effectiveness of the demonstration and the lessons learned about the implementation process is presented. The results of the study and demonstration show promise for organizations seeking to adopt DT technology, in this particular case for automotive diagnostics. The benefits are mainly in terms of better system architecture planning and the increased potential for incorporating lessons learned from products operating in the field back into the design process. These benefits are weighed against the socio-technical challenges of implementing DTs from the outset of a system design exercise.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Acquisition for Enhancing Human-Informed Topology Optimization</title>
<link href="https://hdl.handle.net/1721.1/162520" rel="alternate"/>
<author>
<name>Wang, Zach</name>
</author>
<id>https://hdl.handle.net/1721.1/162520</id>
<updated>2025-08-28T03:07:38Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data Acquisition for Enhancing Human-Informed Topology&#13;
Optimization
Wang, Zach
This thesis presents a survey application designed to support the future development of Human-Informed Topology Optimization (HiTop) toward deeper integration of optimization and real-world feasibility. Topology optimization produces high-performance designs by optimally distributing material, but its application in professional environments remains limited due to fabrication constraints and inflexible design workflows. To address this, the Carstensen Group developed HiTop, which integrates optimization algorithms with human experience, allowing engineers to modify the computer-generated design based on their professional judgment. The future development of HiTop therefore requires real-world data on human preferences. This project introduces a web-based survey app integrated with Qualtrics. It presents users with various design scenarios and computer-optimized designs and records their modifications and reasoning. A preliminary survey collected responses from 13 professionals and engineering students. Preliminary findings suggest that engineers consistently focus on similar regions of interest, even when motivated by different reasons. However, the sample size is too small to support statistically significant conclusions. While the platform mostly performed as intended, a bug related to data storage was discovered during analysis. The issue has since been resolved, and the tool is now fully functional and ready for broader deployment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Motivational Drivers of Participation in W3C’s Web Standards Development Process</title>
<link href="https://hdl.handle.net/1721.1/162519" rel="alternate"/>
<author>
<name>Lauber, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/162519</id>
<updated>2025-08-28T03:07:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Investigating Motivational Drivers of Participation in W3C’s Web Standards Development Process
Lauber, Emily
This research investigates the motivational drivers for companies and individuals to participate in the World Wide Web Consortium’s Web standards development process. Motivational drivers are identified through a literature review, primary sources, and interviews. Thirteen semi-structured interviews were conducted with questions related to participants’ experience with the World Wide Web broadly, Web standards in general, the organization of W3C, and game modeling of the process. W3C was selected as the case study of Web-related standards bodies because of its unique model of paid membership yet open standards available royalty-free. The W3C standards process requires consensus-building, horizontal review, and proof of implementation before the organization officially recommends the specification. Existing research documents the history and value of standardization across industries, the modeling of various Standards Development Organizations (SDOs) in information industries, and the negotiation of international Internet governance. This thesis does not attempt to prove a societal benefit of Web standards but instead focuses on an individual’s belief in societal benefit and how that belief drives their engagement with W3C.&#13;
&#13;
Initial findings point to members seeking economic, philosophical, and moral value through participation in Web standards development. A game theory framework evaluates the economic value of different players within the ecosystem and identifies that Web browser vendors and long-time consortium members have greater power to achieve their preferred specification outcomes than Web developers or newcomers. Despite changes in the Web ecosystem over the past 30 years, W3C members continue to be drawn to the Web for the same philosophical intents for which Sir Tim Berners-Lee designed it. There are shared concerns, though, that the economic power players identified in the game modeling have damaged or will threaten the philosophy of an open, safe, accessible Web. Interviewees shared personal beliefs that there is a moral responsibility to engage in Web standards development and enable W3C’s mission of “empowering humanity”. Further research is required to catalogue more motivational drivers, evaluate drivers across other Web-related Standards Development Organizations, and rank the priority of motivations when different drivers are in tension.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Objective Generation of Pareto-Optimal Perception Architectures for Autonomous Robotic Systems</title>
<link href="https://hdl.handle.net/1721.1/162518" rel="alternate"/>
<author>
<name>Putnam, Rachael M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162518</id>
<updated>2025-08-28T03:07:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Multi-Objective Generation of Pareto-Optimal Perception Architectures for Autonomous Robotic Systems
Putnam, Rachael M.
Designing perception systems for autonomous robots and vehicles requires balancing sensor performance against cost, complexity, and integration constraints. This thesis introduces GO4R (Generation and Optimization of Perception System Architectures for Robotics), a multi-objective framework that jointly optimizes sensor selection and placement against a volumetric, entropy-based utility metric H (-) and monetary cost M ($). Perception Entropy H is formalized as a volumetric measure of uncertainty across a voxelized region of interest (ROI), which naturally rewards the coverage, overlap, and redundancy required for robust sensor fusion and calibration.&#13;
&#13;
NSGA-II is implemented with custom mixed-variable operators to handle both the continuous (e.g. sensor poses) and discrete (e.g. sensor type/count) decision variables found in this problem. Two case studies, long-range outdoor navigation on a Clearpath Jackal and short-range indoor navigation on an ANYmal-C, demonstrate the framework’s ability to generate Pareto-optimal sensor architectures under vastly different ROI definitions and operating conditions. In the Jackal study, GO4R converges to a population of 11 novel Pareto-optimal designs and reveals sensitivity to voxel size and importance weighting. In the ANYmal-C study, the compact, uniformly weighted ROI yields a flatter Pareto front with 25 Pareto-optimal designs and underscores how intrinsic sensor parameters (e.g. angular resolution and field of view) dominate design trade-offs when baseline coverage is already high.&#13;
&#13;
Key architectural decisions are analyzed, quantified by their impact on the Pareto front shape, and ordered according to the GO4R method to successively reduce uncertainty. The resulting guidelines provide practitioners with a rigorous, reusable process for tailoring perception systems to task-specific requirements. Finally, GO4R provides a publicly available NVIDIA Isaac Sim extension to aid practitioners in following the GO4R method, whatever their autonomy application. Future work will extend GO4R to dynamic environments, improve the fidelity of generated designs, and incorporate additional cost metrics such as computational load and maintainability.
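A minimal, hypothetical sketch of a voxelized perception-entropy metric in the spirit of H above; the independent-sensor assumption, the 0.5 prior, and the sensor model are our simplifications, not GO4R's exact formulation.

import math

def voxel_entropy(p):
    """Binary entropy (bits) of a voxel's occupancy-estimate confidence p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def perception_entropy(voxels, sensors):
    total = 0.0
    for v in voxels:
        miss = 1.0
        for s in sensors:
            if s["covers"](v):
                miss *= 1.0 - s["p_detect"]   # independent sensors
        p = 1.0 - 0.5 * miss                  # prior 0.5 pulled toward certainty
        total += voxel_entropy(p)             # uncovered voxels contribute 1 bit
    return total

voxels = [(x, y, 0) for x in range(3) for y in range(3)]
lidar = {"p_detect": 0.9, "covers": lambda v: v[0] <= 1}   # covers 6 of 9 voxels
print(round(perception_entropy(voxels, [lidar]), 3))       # lower is better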
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Agricultural Waste Utilization: Life Cycle Assessment for Selecting Carbon-Management Best Practices on a Global Scale</title>
<link href="https://hdl.handle.net/1721.1/162517" rel="alternate"/>
<author>
<name>Shao, Yu-Tong</name>
</author>
<id>https://hdl.handle.net/1721.1/162517</id>
<updated>2025-08-28T03:07:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Agricultural Waste Utilization: Life Cycle Assessment for Selecting Carbon-Management Best Practices on a Global Scale
Shao, Yu-Tong
Crop residues are a widely available form of agricultural waste with several possible reuse applications, including use as biofertilizers, animal feed, and biofuels, and for carbon sequestration. However, in many parts of the world, large quantities of these residues are still burned in the field, releasing significant amounts of greenhouse gases (GHGs) and air pollutants into the atmosphere. This study aims to evaluate alternative, carbon-efficient strategies for reusing crop residues – especially focusing on rice straw and wheat straw – by conducting life cycle assessments (LCA) of multiple utilization pathways. Several alternative utilization scenarios are assessed: incorporating residue in the field, use as animal feed, pyrolysis for electricity generation, pyrolysis for carbon sequestration, and electricity generation through residue combustion. For the pyrolysis and residue-combustion scenarios, the maximum feasible transport distances of crop residues from agricultural fields to processing facilities are modeled for different logistics methods, informing the siting of centralized facilities while maintaining the GHG benefits of those scenarios. The results highlight that electricity generation using crop residues, whether through pyrolysis or direct residue combustion, offers the greatest climate benefits among all evaluated options. Carbon sequestration through pyrolysis also yields substantial GHG reductions, although slightly lower than the benefits from electricity generation. While crop residue-based electricity emits 4.35 to 31.25 times more GHGs per unit of electricity generated than renewable sources and 50.00 to 67.57 times more than nuclear sources, it still performs better than fossil fuels, resulting in 30.56 to 66.67% lower GHG emissions, and provides added value in terms of agricultural waste management. Moreover, transportation emissions account for only a small share of the total life cycle global warming potential (GWP) in the electricity generation scenarios, ranging from 0.66% (via ships) to 16.40% (via trucks) for every 1000 km traveled. This makes long-distance residue transport viable. The findings underscore the potential for crop residues to play a meaningful role in climate mitigation and sustainable agricultural waste management.
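As a hedged illustration of the transport-share arithmetic behind figures like the 0.66%-16.40% range above, the emission factor and residue intensity below are placeholders of ours, not the study's values.

def transport_share(total_gwp_g_per_kwh, ef_g_per_tkm, tonnes_per_kwh, km):
    transport = ef_g_per_tkm * tonnes_per_kwh * km   # g CO2e per kWh
    return transport / (total_gwp_g_per_kwh + transport)

# e.g. trucking residue 1000 km at a placeholder 62 g CO2e per tonne-km,
# assuming ~1.5 kg of residue (0.0015 t) per kWh generated:
print(f"{transport_share(450.0, 62.0, 0.0015, 1000.0):.1%}")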
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Roadmapping and Technology Portfolio Selection for Heating Decarbonization in Canada</title>
<link href="https://hdl.handle.net/1721.1/162516" rel="alternate"/>
<author>
<name>Shalash, Karim</name>
</author>
<id>https://hdl.handle.net/1721.1/162516</id>
<updated>2025-08-28T03:07:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Strategic Roadmapping and Technology Portfolio Selection for Heating Decarbonization in Canada
Shalash, Karim
Heating systems contribute significantly to Canada’s greenhouse gas emissions, accounting for approximately 117 megatons of CO₂ equivalent, demanding urgent decarbonization to meet national climate targets. This thesis employs the Advanced Technology Roadmap Architecture framework, integrating strategic roadmapping and technology portfolio selection methodologies to evaluate pathways for transitioning Canada’s heating sector to net-zero emissions by 2050. By analyzing historical emissions, forecasting adoption trends for key technologies like heat pumps, and conducting stakeholder-driven scenario analysis, this research identifies critical barriers to scaling low-carbon solutions, including high upfront costs, infrastructural limitations, and regional climatic constraints. &#13;
Seven representative heating architectures—air-source heat pumps, ground-source heat pumps, district heating, hydrogen-based systems, electric resistive heating, and conventional gas-fired furnaces—are evaluated comprehensively. Among these, district heating is particularly emphasized due to its potential for significant emissions reductions and minimal consumer-borne initial cost of ownership, especially when strategically integrated with waste heat recovery from data centers. This integration utilizes otherwise wasted thermal energy, creating a robust symbiotic opportunity for urban and industrial decarbonization. &#13;
To support the practical deployment of these architectures, the thesis establishes a targeted technology portfolio comprising essential enabling and supporting technologies. Enabling technologies include centralized supervisory control systems, urban-scale district heating networks, inverter-driven compressors, advanced refrigerants, ground heat exchangers, and circulation pumps with variable frequency drives. Critical supporting technologies identified encompass building information modeling integration kits, cybersecurity modules, digital permitting platforms, smart thermostats, and thermal energy storage systems, among others. &#13;
This thesis further explores technology trade-offs, focusing on structural complexity, technology readiness, and associated risks of deployment. Through detailed modeling and stakeholder-informed scenario analysis, the thesis concludes that effective decarbonization of heating in Canada necessitates substantial policy interventions, robust financial incentives, targeted infrastructure investments, and region-specific strategies. The analysis indicates that a carefully allocated $8 billion catalyst investment could close approximately 60% of Canada’s heating emissions gap by 2050. Ultimately, district heating coupled with waste heat recovery emerges as a particularly promising strategic option, underscoring its transformative potential within a diversified approach to achieving Canada’s sustainable heating future.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proximity and Prenatal Care: Geographic Accessibility to Healthcare Facilities in N’Djamena, Chad</title>
<link href="https://hdl.handle.net/1721.1/162515" rel="alternate"/>
<author>
<name>Alkhalil, Kabbod</name>
</author>
<id>https://hdl.handle.net/1721.1/162515</id>
<updated>2025-08-28T03:07:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Proximity and Prenatal Care: Geographic Accessibility to Healthcare Facilities in N’Djamena, Chad
Alkhalil, Kabbod
Access to prenatal care is critical for reducing maternal and neonatal mortality rates. Yet, accessibility to healthcare facilities remains an understudied challenge in many sub-Saharan African countries. This study examines the spatial accessibility to healthcare facilities in N’Djamena, Chad, across various transportation modes, as well as the relationship between travel time and adherence to WHO-recommended prenatal care visits.&#13;
This analysis utilized a mixed-methods approach. A geospatial analysis was conducted to estimate travel times and distances to the nearest healthcare facility across the city of N’Djamena using various transportation modes to uncover areas of low accessibility. This analysis was supplemented with survey data collected from interviews with 67 pregnant women across three different hospitals.&#13;
Findings show that 72% of the surveyed population use motorcycles or cars and benefit from high accessibility: 95% of these patients have travel times under 26 and 30 minutes, respectively. In contrast, pedestrians have poor accessibility, especially when patients attend only district or national hospitals. This behavior is very likely – 81% of the surveyed population reported bypassing closer facilities, citing familiarity and quality of care as the main reasons. In this instance, 20% of the population have travel times greater than one hour on foot.&#13;
While adherence to WHO guidelines was high in early pregnancy (below 20 weeks), it declined in later stages. The study found no statistically significant correlation between travel time and adherence.&#13;
Improving accessibility for pedestrians will require a combination of health system improvements, better facility distribution, and transport subsidies. The Ministry of Public Health and urban planners can employ similar data-driven approaches to plan the placement of new healthcare facilities and develop outreach strategies to ensure equitable access in a growing urban context.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proof-of-Work Mitigation Strategy for DNS-Based Amplification Attacks</title>
<link href="https://hdl.handle.net/1721.1/162514" rel="alternate"/>
<author>
<name>Bansal, Umang</name>
</author>
<id>https://hdl.handle.net/1721.1/162514</id>
<updated>2025-08-28T03:07:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Proof-of-Work Mitigation Strategy for DNS-Based&#13;
Amplification Attacks
Bansal, Umang
Distributed Denial of Service attacks, and particularly DNS Amplification attacks, have seen a steady rise in deployment over the past few decades. DNS Amplification attacks are especially challenging to identify and mitigate because of their apparent similarity to legitimate DNS traffic. This thesis proposes a new Proof-of-Work mitigation strategy that provides a defense against DNS Amplification attacks and shifts the burden of mitigation to the attackers. Through our experiments, we show that our Proof-of-Work strategy is effective in reversing the impact of DNS Amplification attacks on the victim’s ability to service legitimate clients. We also provide a framework for evaluating the mitigation strategy’s impact on the victim’s quality of service.
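A minimal, hypothetical sketch of a hashcash-style proof of work of the kind such a mitigation could attach to DNS exchanges: the client must find a nonce whose hash has a required number of leading zero bits before the resolver serves a large response. Parameters are illustrative; the thesis's protocol details may differ.

import hashlib

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()   # zero bits before the first set bit
        break
    return bits

def solve(challenge: bytes, difficulty: int) -> int:
    nonce = 0
    while True:
        h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(h) >= difficulty:
            return nonce                # cheap for one honest client,
        nonce += 1                      # expensive at amplification volume

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    h = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(h) >= difficulty

nonce = solve(b"dns-query-id-1234", difficulty=16)
print(nonce, verify(b"dns-query-id-1234", nonce, 16))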
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Human-Informed Variables in Medical Data</title>
<link href="https://hdl.handle.net/1721.1/162513" rel="alternate"/>
<author>
<name>Abu Daoud, George</name>
</author>
<id>https://hdl.handle.net/1721.1/162513</id>
<updated>2025-08-28T03:07:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Modeling Human-Informed Variables in Medical Data
Abu Daoud, George
In the Age of Information and Artificial Intelligence, data plays a major role in analyzing and understanding underlying trends and patterns as well as in informing processes and operations. Medical data often captures not only patient conditions and state but also human behavioral aspects of the medical process, which affect the data itself and the decisions informed by it. Modeling these variables could help us understand how they influence decisions in the field and potentially augment our models for better and more nuanced predictions. In the first study, we look into how external non-medical factors might affect decision-making by investigating the effect of 30-day mortality metrics on discharge rates following surgeries in Cardio-Vascular Intensive Care Units (CVICU), using data from the MIMIC-IV dataset. In the second study, we examine data extraction from human-written notes for enhancing organ procurement decision processes.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Soil Moisture Dynamics and Thresholds for Surface Energy Balance Regime Transitions: An Observational Analysis at a U.S. Grassland Site</title>
<link href="https://hdl.handle.net/1721.1/162512" rel="alternate"/>
<author>
<name>Verensia, Ria</name>
</author>
<id>https://hdl.handle.net/1721.1/162512</id>
<updated>2025-08-28T03:07:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Soil Moisture Dynamics and Thresholds for Surface Energy Balance Regime Transitions: An Observational Analysis at a U.S. Grassland Site
Verensia, Ria
Understanding how soil moisture declines following rainfall—when the soil progressively dries due to evaporation and plant uptake—is critical for assessing plant water stress, surface energy partitioning, and land–atmosphere interactions. These periods of moisture loss, commonly referred to as soil moisture drydowns, provide a valuable window into the transition from wet to dry surface conditions. This study focuses on the critical soil moisture threshold (θ*), which marks the transition from energy-limited to water-limited surface evaporation regimes. This transition reflects a key shift in surface energy balance and controls the extent to which evaporation is constrained by moisture availability. While previous research has typically treated θ* as a static value based on soil texture, emerging evidence suggests that it may vary depending on environmental conditions, particularly seasonal climate. This study investigates whether θ* is a fixed property or a dynamic threshold influenced by seasonal variation and available energy. Using in situ data from the Soil Temperature and Moisture Profile (STAMP) system and Infrared Thermometer (IRT) measurements at a semi-arid grassland site in Oklahoma, USA, I identify and analyze soil moisture drydown events. I estimate θ* by applying piecewise linear regression to the relationship between soil moisture and diurnal surface temperature range, isolating the breakpoint that indicates the transition from energy-limited to water-limited evaporation. Results reveal that θ* exhibits systematic temporal variations, particularly across seasons and temperature regimes, suggesting that surface temperature dynamics during drydowns are most likely a response to changes in soil moisture content. These findings challenge the assumption that θ* is solely texture-dependent and highlight the need to account for dynamic environmental controls in modeling surface energy exchange. This research provides new insights into soil moisture-temperature coupling and offers implications for land surface model development, drought forecasting, and vegetation response assessments under a changing climate.
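A hypothetical sketch of the breakpoint estimation described above: fit two line segments to diurnal temperature range versus soil moisture and keep the break with the lowest total squared error. The drydown data below are synthetic, not the STAMP/IRT observations.

import numpy as np

def fit_breakpoint(theta, dtr):
    order = np.argsort(theta)
    theta, dtr = theta[order], dtr[order]
    best_sse, best_theta = np.inf, None
    for i in range(2, len(theta) - 2):            # candidate breakpoints
        sse = 0.0
        for seg_x, seg_y in ((theta[:i], dtr[:i]), (theta[i:], dtr[i:])):
            coef = np.polyfit(seg_x, seg_y, 1)    # one line per regime
            sse += np.sum((np.polyval(coef, seg_x) - seg_y) ** 2)
        if sse < best_sse:
            best_sse, best_theta = sse, theta[i]
    return best_theta                             # estimated theta*

rng = np.random.default_rng(0)
theta = np.linspace(0.05, 0.35, 60)
dtr = np.where(theta < 0.20, 13 - 80 * (theta - 0.20), 13.0)   # kink at 0.20
dtr = dtr + rng.normal(0.0, 0.4, theta.size)                   # add noise
print(round(fit_breakpoint(theta, dtr), 3))                    # close to 0.20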
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MOBLLM: Model Building LLMs via Symbolic Regression and Experimental Design</title>
<link href="https://hdl.handle.net/1721.1/162509" rel="alternate"/>
<author>
<name>Binbas, Berkin</name>
</author>
<id>https://hdl.handle.net/1721.1/162509</id>
<updated>2025-08-28T03:07:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">MOBLLM: Model Building LLMs via Symbolic Regression and Experimental Design
Binbas, Berkin
Large language models (LLMs) have recently entered daily use and are already extensively utilized for a variety of tasks. They have been shown to carry out increasingly complex tasks, including those that require a high level of formal and mathematical reasoning, at human or superhuman levels. In particular, their in-context learning capabilities, the domain-specific knowledge acquired through their vast pretraining corpora, and their fine-tunability for specific tasks have drawn considerable attention and research. However, the application of LLMs at the frontiers of scientific research remains an underexplored direction. In this work, we investigate how one can leverage LLMs to aid in building compact mathematical models and in experimental design. Specifically, we propose a framework for using LLMs as a guide to concurrently handle the experimental design and symbolic regression tasks for data obtained from 1) a black-box 1D function and 2) a black-box physical system. We propose further modifications to our base framework and perform experiments to analyze its performance under different experiment variants and across different LLM tiers. Our experiments reveal that while larger models (of around 70b parameters) do not always achieve better downstream performance than smaller models (of around 8b parameters), they are able to utilize the given information and/or physical context when designing experiments and proposing symbolic expressions, and they perform better than random-design baselines. We also observe that natural language constraints do not consistently improve symbolic regression accuracy. These results underscore both the challenges and the potential of integrating LLM agents into the scientific discovery process, particularly as proposers of experiments and symbolic expressions.
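A minimal, hypothetical sketch of the concurrent experimental-design and symbolic-regression loop described above, with the LLM replaced by a toy stub; ask_llm is a placeholder, not a real API, and the black box is a toy function.

def discover(black_box, ask_llm, budget=5):
    observations = []                            # (x, y) pairs gathered so far
    for _ in range(budget):
        prompt = f"Given observations {observations}, propose the next input x."
        x = float(ask_llm(prompt))               # LLM-guided experimental design
        observations.append((x, black_box(x)))   # run the experiment
    # LLM-guided symbolic regression over everything observed:
    return ask_llm(f"Given {observations}, propose f(x)."), observations

def toy_llm(prompt):                             # deterministic stand-in LLM
    if "next input" in prompt:
        toy_llm.i = getattr(toy_llm, "i", 0) + 1
        return str(toy_llm.i)                    # sweep inputs 1, 2, 3, ...
    return "f(x) = x**2"                         # canned symbolic guess

expr, obs = discover(lambda x: x * x, toy_llm)
print(expr, obs)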
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>I-Con: A Unifying Framework for Representation Learning</title>
<link href="https://hdl.handle.net/1721.1/162508" rel="alternate"/>
<author>
<name>Alshammari, Shaden</name>
</author>
<id>https://hdl.handle.net/1721.1/162508</id>
<updated>2025-08-28T03:07:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">I-Con: A Unifying Framework for Representation Learning
Alshammari, Shaden
As the field of representation learning grows, there has been a proliferation of different loss functions to solve different classes of problems. We introduce a single information-theoretic equation that generalizes a large collection of modern loss functions in machine learning. In particular, we introduce a framework that shows that several broad classes of machine learning methods are precisely minimizing an integrated KL divergence between two conditional distributions: the supervisory and learned representations. This viewpoint exposes a hidden information geometry underlying clustering, spectral methods, dimensionality reduction, contrastive learning, and supervised learning. This framework enables the development of new loss functions by combining successful techniques from across the literature. We not only present a wide array of proofs, connecting over 23 different approaches, but we also leverage these theoretical results to create state-of-the-art unsupervised image classifiers that achieve a +8% improvement over the prior state-of-the-art on unsupervised classification on ImageNet-1K. We also demonstrate that I-Con can be used to derive principled debiasing methods which improve contrastive representation learners.
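Read as an equation, the unifying objective is an average over datapoints i of KL(p(.|i) || q(.|i)), where p is the supervisory neighbor distribution and q the learned one. A minimal numerical sketch of that integrated divergence (assuming row-stochastic arrays; not the paper's training code):

import numpy as np

def integrated_kl(p, q, eps=1e-12):
    # p, q: (n, n) row-stochastic arrays; p[i] is the supervisory conditional
    # distribution over neighbors of point i, q[i] the learned one.
    # Returns the mean KL(p[i] || q[i]) over all points i.
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=1)))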
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System thinking to analyze the Market penetration of Two-Wheeled vs Four-Wheeled EVs in India</title>
<link href="https://hdl.handle.net/1721.1/162507" rel="alternate"/>
<author>
<name>Kumbhare, Piyush</name>
</author>
<id>https://hdl.handle.net/1721.1/162507</id>
<updated>2025-08-28T03:07:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">System thinking to analyze the Market penetration of Two-Wheeled vs Four-Wheeled EVs in India
Kumbhare, Piyush
This thesis analyzes the disparate market penetration rates of electric two-wheelers (E2Ws) and electric four-wheelers (E4Ws) in India, using systems thinking approaches to understand the underlying dynamics and propose strategic interventions. In 2024, while E2Ws have achieved 4.43% market penetration, E4Ws lag significantly at 1.91%, despite similar policy support. Through force field analysis and stakeholder value mapping, this research identifies key factors driving this disparity and evaluates their temporal evolution over three time horizons.&#13;
The analysis reveals that E2Ws benefit from stronger driving forces, including urban suitability, favorable total cost of ownership, and simpler charging solutions, with 91% of users relying on home charging. In contrast, E4Ws face more substantial barriers, particularly in upfront costs, charging infrastructure requirements, and range anxiety. Technical modeling of key Figures of Merit (FOMs) demonstrates how different optimization challenges affect each segment's market acceptance.&#13;
The research culminates in recommendations for accelerating E4W adoption, emphasizing the need for India-specific models priced similarly to internal combustion engine (ICE) vehicles, localized manufacturing ecosystems, robust charging infrastructure, and innovative financing solutions. The findings suggest that while E2W adoption will continue to grow naturally, E4W penetration requires coordinated interventions across manufacturing, technology, infrastructure, policy, and consumer awareness dimensions. This research contributes to understanding how systems thinking can inform strategic planning for electric vehicle adoption in emerging markets, with specific implications for India’s goal of 30% EV penetration by 2030.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Generative Multi-Agent Systems for Collective Intelligence and Resilience</title>
<link href="https://hdl.handle.net/1721.1/162506" rel="alternate"/>
<author>
<name>Dao, Nguyen Luc</name>
</author>
<id>https://hdl.handle.net/1721.1/162506</id>
<updated>2025-08-28T03:07:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Designing Generative Multi-Agent Systems for Collective Intelligence and Resilience
Dao, Nguyen Luc
Large Language Models (LLMs) have been increasingly adopted by businesses to support their workflows, driving significant investment in developing generative agents. These agents can collaborate and exchange information to solve complex problems. Previous research has found that the benefits of such multi-agent systems include better performance and the potential emergence of collective intelligence characterized functionally as leadership, debate, and feedback. However, expanding multi-agent systems to include agents beyond trusted boundaries introduces the risks of malicious agents that provide incorrect or harmful information to deteriorate collective decisions or cause systemic failure. This study investigates how architectural decisions, including group size, agent prompting, and collaboration schemes, impact the system's resilience against malicious agents. Our experimental results show that increasing group size improves both accuracy and resilience at the cost of more tokens. Step-back abstraction prompting enhances accuracy and mitigates the likelihood of hallucinations induced by malicious agents. Group Chat topology is highly vulnerable to malicious interference. Reflexion, Crowdsourcing, and Blackboard topologies offer safeguards against such risks. Finally, we expand our research to investigate accountability gaps in generative AI systems. Designing generative multi-agent systems requires careful consideration of the trade-offs between performance, cost, resilience, and accountability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>National Space Power Analysis Through Organizational and Market Evolution</title>
<link href="https://hdl.handle.net/1721.1/162505" rel="alternate"/>
<author>
<name>Deline, Carrie B.</name>
</author>
<id>https://hdl.handle.net/1721.1/162505</id>
<updated>2025-08-28T03:07:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">National Space Power Analysis Through Organizational and Market Evolution
Deline, Carrie B.
The space domain is undergoing fundamental changes and unprecedented growth. Once dominated by state-led missions, the space sector is now home to commercial competition, rapid innovation, and evolving models of public-private collaboration. These changes call into question how space power is built and maintained, especially amid rising geopolitical tensions and power competition in space. The rise of an agile commercial industry has driven down launch costs, accelerated technology development, and opened new markets and business cases, forcing legacy institutions to re-evaluate their strategies and business models.&#13;
&#13;
This thesis is motivated by the need to understand how organizations are responding to these changes, and how their choices collectively shape the United States as a national space power. Applying a theoretical space power model based on war strategy and Schumpeterian innovation theory, it explores the different elements of space power in today’s context. It seeks to identify the organizational drivers of change, the tensions and synergies between legacy enterprises and new entrants, and the implications of this dynamic space ecosystem.&#13;
&#13;
The thesis presents a mixed-methods analysis, starting with a historical understanding of the sector's evolution. Current market trends, government policies, and initiatives then inform the applied theoretical model, which is supported by market data, a force field analysis of organizational shifts, and qualitative interview insights from industry leaders. The research aims to contribute insights for government strategists and industry leaders concerned with America’s future as a space power and their organization’s role within it.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Noisy with a Chance of Mislabels: A Local and Training Dynamics Perspective on Detecting Label Noise in Deep Classification</title>
<link href="https://hdl.handle.net/1721.1/162504" rel="alternate"/>
<author>
<name>Chentouf, A. Anas</name>
</author>
<id>https://hdl.handle.net/1721.1/162504</id>
<updated>2025-08-28T03:07:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Noisy with a Chance of Mislabels: A Local and Training Dynamics Perspective on Detecting Label Noise in Deep Classification
Chentouf, A. Anas
Noisy labels are a pervasive challenge in modern supervised learning, especially in high-stakes domains such as healthcare, where model reliability is critical. Detecting and mitigating the influence of mislabeled data is essential to improving both performance and interpretability. Building on insights from training dynamics, we propose Local Consistency across Training Epochs (LoCaTE), a class of data-filtering methods that leverages over-parameterized and over-trained neural networks to distinguish clean samples from mislabeled ones. Our approach integrates both local neighborhood information and the behavior of samples across training epochs to identify noise and enhance model robustness. We evaluate our method on real (human) and synthetic label noise across three classification datasets, finding that it achieves competitive F₁ scores for label error detection and improved downstream accuracy using a lightweight classifier with low added computational cost.
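As a hedged sketch of the idea (the multiplicative score, the kNN inputs, and the threshold below are illustrative assumptions, not the LoCaTE specification), combining per-epoch prediction agreement with local neighborhood label agreement might look like:

import numpy as np

def consistency_scores(pred_history, labels, neighbors):
    # pred_history: (epochs, n) array of predicted classes per training epoch.
    # labels: (n,) given (possibly noisy) labels.
    # neighbors: (n, k) indices of each sample's k nearest neighbors.
    agree_time = (pred_history == labels[None, :]).mean(axis=0)
    agree_local = (labels[neighbors] == labels[:, None]).mean(axis=1)
    return agree_time * agree_local   # low score -> suspected label noise

def filter_clean(pred_history, labels, neighbors, threshold=0.5):
    scores = consistency_scores(pred_history, labels, neighbors)
    return np.where(scores >= threshold)[0]   # indices kept as "clean"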
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Multimodal Interactions through Improved Partial Information Decomposition Estimation</title>
<link href="https://hdl.handle.net/1721.1/162503" rel="alternate"/>
<author>
<name>Balachandran, Adithya S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162503</id>
<updated>2025-08-28T03:07:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analyzing Multimodal Interactions through Improved Partial Information Decomposition Estimation
Balachandran, Adithya S.
Multimodal AI aims to build comprehensive models by integrating information from diverse sensory inputs such as text, audio, and vision. However, significant challenges remain in understanding how these different modalities interact and contribute to downstream tasks. In particular, we seek to characterize how modalities complement each other, overlap in the information they convey, or contribute jointly to patterns that are not clear from any single modality alone. To address this, we propose novel methods for quantifying these multimodal interactions using information-theoretic techniques. Specifically, we introduce a novel estimator for Partial Information Decomposition (PID) using normalizing flows, with the ability to scale well to high-dimensional data. We also develop a new framework for estimating pointwise PID, which provides insights into how individual data points contribute to information sharing and interactions across modalities, and show how to apply this framework to anomaly detection. We demonstrate the effectiveness of our methods on a variety of high-dimensional datasets, including both synthetic and real-world multimodal data such as videos.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explanation Alignment: Quantifying the Correctness of&#13;
Model Reasoning At Scale</title>
<link href="https://hdl.handle.net/1721.1/162502" rel="alternate"/>
<author>
<name>Bang, Hyemin</name>
</author>
<id>https://hdl.handle.net/1721.1/162502</id>
<updated>2025-08-28T03:07:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Explanation Alignment: Quantifying the Correctness of&#13;
Model Reasoning At Scale
Bang, Hyemin
To improve the reliability of machine learning models, researchers have developed metrics to measure the alignment between model saliency and human explanations. Thus far, however, these saliency-based alignment metrics have been used to conduct descriptive analyses and instance-level evaluations of models and saliency methods. To enable evaluative and comparative assessments of model alignment, we extend these metrics to compute explanation alignment — the aggregate agreement between model and human explanations. To compute explanation alignment, we aggregate saliency-based alignment metrics over many model decisions and report the result as a performance metric that quantifies how often model decisions are made for the right reasons. Through experiments on nearly 200 image classification models, multiple saliency methods, and MNIST, CelebA, and ImageNet tasks, we find that explanation alignment automatically identifies spurious correlations, such as model bias, and uncovers behavioral differences between nearly identical models. Further, we characterize the relationship between explanation alignment and model performance, evaluating the factors that impact explanation alignment and how to interpret its results in practice.
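One way such aggregation could look in code (a sketch assuming binarized saliency maps scored by IoU against human masks; the thesis aggregates several saliency-based metrics, not necessarily this one):

import numpy as np

def explanation_alignment(saliency_maps, human_masks, tau=0.5):
    # Binarize each saliency map at threshold tau, compute IoU against the
    # human mask, and report the mean over all model decisions.
    ious = []
    for s, h in zip(saliency_maps, human_masks):
        pred = s >= tau
        inter = np.logical_and(pred, h).sum()
        union = np.logical_or(pred, h).sum()
        ious.append(inter / union if union else 0.0)
    return float(np.mean(ious))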
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Octavio: A Distributed System for the Sensing, Storing,&#13;
and Retrieval of Piano Playing Data</title>
<link href="https://hdl.handle.net/1721.1/162501" rel="alternate"/>
<author>
<name>Abdulrezak, Ayyub</name>
</author>
<id>https://hdl.handle.net/1721.1/162501</id>
<updated>2025-08-28T03:07:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Octavio: A Distributed System for the Sensing, Storing,&#13;
and Retrieval of Piano Playing Data
Abdulrezak, Ayyub
MIT has a wealth of pianos spread across its campus. These instruments are owned by various groups and MIT organizations. Every day, students, faculty, and extended members of the MIT community play and practice with them. However, there currently exists no available data on their usage. This project aims to create the infrastructure for capturing this data. To this end, we installed sensing equipment on pianos across campus, constructed a matching database and filesystem of all playing sessions across time, and established a public API for the retrieval of this data. The collected data will later be used to power a publicly accessible webpage of real-time and historical visualizations, as well as serve to bolster research efforts into the characteristic piano playing of MIT.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Policy and Technical Recommendations for Integrating&#13;
Autonomy into Military Offensive Cyberspace Operations</title>
<link href="https://hdl.handle.net/1721.1/162449" rel="alternate"/>
<author>
<name>Wettstein, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/162449</id>
<updated>2025-08-22T03:06:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Policy and Technical Recommendations for Integrating&#13;
Autonomy into Military Offensive Cyberspace Operations
Wettstein, Benjamin
As AI technologies and autonomy mature, their application in the military, specifically to enhance cyberspace operations, has become both a strategic imperative and an adoption challenge. This thesis explores the challenge of effectively integrating autonomous cyber weapons systems into offensive military cyberspace operations. I offer both technical and policy recommendations to ensure that autonomous technology development does not outpace the military's ability to integrate it. &#13;
&#13;
This thesis analyzes historical case studies, such as loitering munitions and escort jammers, to examine the potential for integrating autonomous cyber weapons systems into military offensive cyberspace operations. This analysis finds that the more autonomous and lethal a weapon is, the more difficult it is to integrate it into military operations.&#13;
&#13;
Subsequently, the current state of cyberspace operations is analyzed by discussing two cyberspace attacks, Stuxnet and Conficker. This analysis reveals that cyberspace operations currently demonstrate low to medium levels of autonomy and low levels of lethality. Therefore, there is a significant opportunity to adopt autonomous systems in the current context of offensive cyberspace operations. However, as the domain of cyberspace is transforming with the growth of complexity in technology, there are evolving legal, ethical, bureaucratic, and technical concerns. This thesis contains policy recommendations around technical standards, investment and acquisitions, and regulations regarding using autonomous cyber capabilities to address these challenges. Along with the policy recommendations, the core technical recommendation that enables autonomous cyber systems is the safe and effective deployment of human-machine interfaces to direct and control them. This thesis argues that interfaces are not merely supporting tools but are, in fact, the central technical mechanism for enabling traceability, oversight, and control in autonomous cyberspace operations. The future development and integration of autonomous cyber systems must prioritize interface design tailored to varying degrees of autonomy and operator control.&#13;
&#13;
The technical portion of this thesis explores different interfaces for autonomous cyber systems, utilizing distinct models of autonomy within the Cyber Operations Research Gym (CybORG) simulation environment. Each interface corresponds to the three human-machine relationships discussed, which include a semi-autonomous interface (human in the loop), a supervised autonomous interface (human on the loop), and a fully autonomous interface (human out of the loop). These interfaces serve as a proof of concept, providing evidence that different levels of autonomy can be implemented on the same autonomous cyber system. Additionally, the use of LLMs to explain the actions taken by autonomous cyber systems is explored.&#13;
&#13;
Ultimately, this thesis contributes technical and policy recommendations for navigating the future of autonomous cyber warfare. As autonomous systems evolve in sophistication and capability, the U.S. military must adopt policy and technical mechanisms that enable autonomy without sacrificing oversight, accountability, or effectiveness.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Response of Arabidopsis to bacterial presence under iron stress</title>
<link href="https://hdl.handle.net/1721.1/162448" rel="alternate"/>
<author>
<name>Kitzinger, Katherine A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162448</id>
<updated>2025-08-22T03:06:25Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Response of Arabidopsis to bacterial presence under iron stress
Kitzinger, Katherine A.
Iron availability is essential for the normal function of plants, but it becomes less available for uptake under drought. A lack of iron can lead to early senescence, fewer and less nutritious crops, and in extreme cases, plant death. In response to these stressful conditions, microbial interactions can lead to improved plant health; however, the mechanism by which this occurs is not understood. In this study, we cocultured an Arabidopsis MTP8 knockout line, which is susceptible to iron stress, with a subset of a previously established synthetic microbial community derived from healthy Arabidopsis roots. We cocultured the Arabidopsis lines and bacteria under three different iron levels in a hydroponics system and measured dry weight and chlorophyll content ten days post-inoculation. This study aims to narrow down the potential mechanism of the beneficial effects of bacteria on plants experiencing nutrient stress.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interactive Topology Optimization with Hybrid Truss and Continuum Elements Types</title>
<link href="https://hdl.handle.net/1721.1/162447" rel="alternate"/>
<author>
<name>Zhang, Eileen</name>
</author>
<id>https://hdl.handle.net/1721.1/162447</id>
<updated>2025-08-22T03:06:19Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Interactive Topology Optimization with Hybrid Truss and Continuum Elements Types
Zhang, Eileen
Topology optimization is a rising tool in structural design that can improve material efficiency and promote sustainability. However, it is not yet widely used in industry because of its user-unfriendliness, high computational cost, and limited manufacturability. This thesis proposes a new framework that combines traditional discrete topology optimization using truss elements with continuum-element topology optimization, creating a more informed algorithm suited to practical design scenarios. In addition, a drawing toolkit is introduced to help users interact with the system and steer it toward their desired outcome. The hybrid-element topology optimization is achieved by creating separate local stiffness matrices and mapping them to the same global design space so that they can be optimized together. The interactive drawing functions supply add-in truss members: users select how many members to add and draw their lengths and locations in the design space. The framework is tested on multiple classic topology optimization problems, including a cantilever beam with bracing and the MBB beam. All hybrid results with drawn-in trusses show more efficient designs, with lower compliance and lower overall material quantity.
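The shared-design-space idea can be sketched as scatter-add assembly of each element's local stiffness matrix, truss or continuum, into one global matrix (the degree-of-freedom bookkeeping below is an illustrative assumption, not the thesis's MATLAB code):

import numpy as np

def assemble_global(n_dof, elements):
    # elements: iterable of (k_local, dof_indices) pairs. Truss and continuum
    # elements each carry their own local stiffness matrix but scatter-add
    # into the same global design space, so they are optimized together.
    K = np.zeros((n_dof, n_dof))
    for k_local, dofs in elements:
        for a, ga in enumerate(dofs):
            for b, gb in enumerate(dofs):
                K[ga, gb] += k_local[a, b]
    return K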
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Inconsistent Results of Table Transformer for Improved&#13;
Data Extraction in Childhood Obesity Intervention Literature</title>
<link href="https://hdl.handle.net/1721.1/162445" rel="alternate"/>
<author>
<name>Neupane, Pragya</name>
</author>
<id>https://hdl.handle.net/1721.1/162445</id>
<updated>2025-08-22T03:06:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analyzing Inconsistent Results of Table Transformer for Improved&#13;
Data Extraction in Childhood Obesity Intervention Literature
Neupane, Pragya
Tables in scientific literature are rich sources of structured data, yet their complex and variable formats pose challenges for automated extraction. This thesis focuses on improving the reliability of Table Structure Recognition (TSR) using the Table Transformer (TATR) model, with a specific application to childhood obesity intervention studies. While fine-tuning TATR on a domain-specific dataset improves detection metrics, persistent errors such as overlapping rows and misclassified header columns remain. Through a systematic post-hoc error analysis of 175 scientific tables, we identify these dominant failure modes and develop lightweight post-processing modules: an overlap-aware row filtering algorithm and an OCR-enhanced column boundary correction method. Importantly, instead of relying on computationally expensive large language models (LLMs), this approach leverages efficient, interpretable techniques tailored to the domain-specific structure of public health tables. Our combined method reduces the proportion of structurally erroneous tables from 46.3% to an estimated 9.7–12.6%, improving the semantic alignment and interpretability of model outputs. This work contributes a transparent and scalable pipeline that enhances the trustworthiness of automated table extraction systems, with direct relevance to evidence-based decision-making in public health.
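The overlap-aware row filtering module can be pictured as confidence-ordered suppression of rows whose vertical overlap with an already-kept row is too large; the box representation and threshold here are assumptions, not the thesis implementation:

def filter_rows(rows, max_overlap=0.5):
    # rows: list of dicts with 'y0', 'y1' (vertical extent) and 'score'.
    # Keep rows greedily by confidence, dropping any row whose overlap with
    # an already-kept row exceeds max_overlap of its own height.
    kept = []
    for row in sorted(rows, key=lambda r: -r["score"]):
        h = max(row["y1"] - row["y0"], 1e-9)
        ok = True
        for k in kept:
            inter = min(row["y1"], k["y1"]) - max(row["y0"], k["y0"])
            if inter / h > max_overlap:
                ok = False
                break
        if ok:
            kept.append(row)
    return kept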
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Opportunities to Reduce Carbon Dioxide Emissions from Electric Arc Furnace Steelmaking in the United States</title>
<link href="https://hdl.handle.net/1721.1/162444" rel="alternate"/>
<author>
<name>Colcord, Christopher C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162444</id>
<updated>2025-08-22T03:06:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Assessing Opportunities to Reduce Carbon Dioxide Emissions from Electric Arc Furnace Steelmaking in the United States
Colcord, Christopher C.
Steel is energy- and CO₂ emissions-intensive to produce, but it is also a crucial material for infrastructure, defense, and the energy transition. This thesis focuses on Electric Arc Furnace (EAF) steelmaking, which accounts for roughly 70% of steel production in the United States. Decarbonization levers for EAF producers are diverse—encompassing energy efficiency (EE) measures, fuel switching, material input substitution, development of onsite carbon-free electricity (CFE) generation, CFE procurement through power purchase agreements (PPAs) or unbundled renewable energy credits (RECs), and negative-emissions credit purchases, among others. We first construct a techno-economic model that analyzes costs and emissions of individual EAF facilities in the United States under a business-as-usual (BAU) scenario for the years 2025 through 2035. We then calculate the Levelized Cost of Carbon Abatement (LCCA) of various decarbonization levers against the BAU counterfactual. We build aggregate LCCA curves to draw insights on least-cost emissions abatement strategies for facilities and opportunities for policy to accelerate decarbonization decisions.&#13;
&#13;
We find that the modeled levers collectively deliver a 46% reduction in EAF CO₂ emissions versus the BAU case—equivalent to a reduction of roughly 1.7% of national industrial CO₂ emissions. Voluntary CFE procurement has the greatest potential to abate EAF emissions, but comes with large uncertainties. Onsite CFE and PPAs have negative LCCAs in most cases, whereas unbundled RECs have positive LCCAs. EE measures provide modest emissions reductions and costs are negative on a levelized basis under a wide range of assumptions. EE opportunities, onsite CFE, and PPAs may be bound by non-financial constraints. Direct reduced iron (DRI) with carbon capture has lower variable costs and produces fewer emissions versus hydrogen-based DRI in most cases. While the challenges to decarbonize EAF steelmaking are immense, we find EAF facilities can take actionable steps in the near term—supported by federal and state policies—to abate carbon emissions while reducing levelized costs.
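As a worked illustration of the LCCA metric, dividing the discounted incremental cost of a lever by its discounted CO₂ abatement relative to the BAU counterfactual (the discount rate and cash-flow layout are assumptions, not the thesis's model):

def lcca(costs, abated_tonnes, discount_rate=0.07):
    # costs: per-year incremental costs of the lever vs. BAU (USD).
    # abated_tonnes: per-year CO2 abated vs. BAU (tonnes).
    # Returns USD per tonne CO2, discounting both streams.
    pv = lambda xs: sum(x / (1 + discount_rate) ** t for t, x in enumerate(xs))
    return pv(costs) / pv(abated_tonnes)

# Example: a lever costing 1.2M USD/yr that abates 20 kt CO2/yr for 10 years.
print(lcca([1.2e6] * 10, [2.0e4] * 10))  # 60.0 USD per tonne CO2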
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Capacity of Generative AI to Learn&#13;
Genotype-by-Environment Interactions in Brachypodium&#13;
distachyon</title>
<link href="https://hdl.handle.net/1721.1/162443" rel="alternate"/>
<author>
<name>Neufeldt, Charlie</name>
</author>
<id>https://hdl.handle.net/1721.1/162443</id>
<updated>2025-08-22T03:06:12Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Investigating the Capacity of Generative AI to Learn&#13;
Genotype-by-Environment Interactions in Brachypodium&#13;
distachyon
Neufeldt, Charlie
Climate change exacerbates environmental stressors such as drought, challenging the resilience of agricultural systems and highlighting the need to understand plant genomic architecture and its responses to such environmental variation. A key molecular mechanism underlying these responses is transcriptional plasticity: environment-induced changes in gene expression that vary among genotypes, representing one way that genotype-by-environment (GxE) interactions manifest at the molecular level. While transcriptomic data offers a unique and powerful view into these responses, traditional modeling approaches often rely on linear assumptions, limiting their ability to detect complex, nonlinear patterns of regulation. This thesis investigates whether generative machine learning modeling, specifically the use of transformers, can extract biologically meaningful representations of gene expression dynamics in plants. Inspired by the successes of the scGPT model for human genomics, I developed and trained a compact transformer architecture, the PlantGeneEncoder, on bulk RNA-seq data from two natural accessions of Brachypodium distachyon grown under drought and control conditions. The model was trained on binned expression values using both a baseline configuration and a set of regularized variants incorporating noise injection, co-expression preservation, entropy-based sample weighting, and masked gene modeling as a self-supervised objective. While baseline models achieved perfect reconstruction accuracy, they failed to preserve meaningful biological structure in the latent space. Regularized models achieved a better trade-off, maintaining high reconstruction fidelity while demonstrating improved genotype classification performance and modestly better alignment with the original expression structure. However, environmental condition signals remained difficult to capture across all configurations, with classification accuracies only marginally above random chance. These findings highlight the promise and limitations of transformer-based generative modeling for plant transcriptomics and provide a flexible framework for future efforts to model transcriptional plasticity and regulatory responses to environmental stress.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Numerical Analysis of Human-Informed Topology Optimized Lateral-Load-Resisting Systems of Tall Buildings under Seismic Excitation</title>
<link href="https://hdl.handle.net/1721.1/162438" rel="alternate"/>
<author>
<name>Blaze, Edie</name>
</author>
<id>https://hdl.handle.net/1721.1/162438</id>
<updated>2025-08-22T03:06:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Numerical Analysis of Human-Informed Topology Optimized Lateral-Load-Resisting Systems of Tall Buildings under Seismic Excitation
Blaze, Edie
In the construction industry, structural, architectural, and environmental considerations can often be at odds with each other, leading to inefficient structures and, consequently, material waste. Topology optimization has shown promise as one potential solution to this problem, offering designs that are both structurally efficient and aesthetically interesting. However, topology-optimized designs are often difficult to manufacture or do not take into consideration other aspects that are crucial in the construction industry. Human-informed topology optimization, or HiTop, is a previously-developed algorithm that allows users to edit areas of interest, providing a computationally-efficient solution to address concerns with the designs. This paper uses MATLAB to apply HiTop to the design of the lateral-load-resisting systems of tall buildings, comparing results to those of three other designs: a “human” design with standard cross bracing, an optimized design using classical topology optimization, and a previously-developed algorithm which optimizes designs under a sum of modal compliances formulation, similar to how structures are analyzed in seismic codes. The designs are evaluated quantitatively, comparing natural periods, modal displacements, and the sum of modal compliances using modal decomposition, as well as computation time. They are also evaluated qualitatively, as HiTop is used to modify designs to improve constructability and aesthetics. The HiTop algorithm successfully created manufacturable, aesthetic designs in line with the user’s goals across a range of H/B ratios within a brief time frame. HiTop designs also performed similarly to the classically optimized designs, indicating that modifications to an optimized design to improve manufacturability, aesthetics, or other potential goals of a user do not significantly decrease structural performance under seismic loading.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Peatland burning identification among other wildfires across different ecozones in Canada</title>
<link href="https://hdl.handle.net/1721.1/162437" rel="alternate"/>
<author>
<name>Chen, Ming</name>
</author>
<id>https://hdl.handle.net/1721.1/162437</id>
<updated>2025-08-22T03:06:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Peatland burning identification among other wildfires across different ecozones in Canada
Chen, Ming
The unprecedented severity of the 2023 Canadian wildfires highlights growing concerns about the vulnerability of global peatlands—key ecosystems storing substantial amounts of terrestrial carbon. Peatlands, traditionally resistant to burning, are increasingly at risk due to climate-induced warmer and drier conditions. This study specifically investigates the extent and characteristics of peat burning in the 2023 Canadian wildfires based on available remote sensing data. The primary objective is to determine whether fires on peatlands demonstrate distinct fire behavior compared to fires on non-peatlands. To achieve this goal, this study utilized statistical tools and machine learning algorithms, including power-law relationship estimates, the Mann-Whitney U test, K-means clustering, and a generalized additive model (GAM), to identify the contribution of peat presence to fire behaviors. Key findings demonstrate that fires on peatlands are significantly more intense, longer-lasting, and associated with higher carbon emissions. Even though peat combustion cannot be confirmed without field validation, these results underscore the potential importance of peat in wildfire growth and management. By highlighting the disproportionate impact of peat burning, this study provides a foundation for future research aimed at developing targeted remote sensing techniques and policy responses to mitigate peatland vulnerability and preserve vital carbon stores in the context of global climate change.
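As an illustration of the distributional testing mentioned above, a one-sided Mann-Whitney U comparison of fire-intensity samples might look like the following (SciPy; the numbers are placeholders, not study data):

import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical fire-intensity samples (e.g., fire radiative power, MW).
peat_fires = np.array([310.0, 420.5, 295.2, 510.8, 380.1])
other_fires = np.array([120.3, 210.7, 150.9, 175.4, 190.2])

# One-sided test: are peatland fires stochastically more intense?
stat, p_value = mannwhitneyu(peat_fires, other_fires, alternative="greater")
print(f"U={stat}, p={p_value:.4f}")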
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sustainable Engineering of Polyethylene Fiber Materials: Advancing Functional Properties of Diverse Textile-Based Structures</title>
<link href="https://hdl.handle.net/1721.1/162434" rel="alternate"/>
<author>
<name>Huynh, Amy</name>
</author>
<id>https://hdl.handle.net/1721.1/162434</id>
<updated>2025-08-22T03:06:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Sustainable Engineering of Polyethylene Fiber Materials: Advancing Functional Properties of Diverse Textile-Based Structures
Huynh, Amy
This thesis explores pathways to circularity for polyethylene-based textiles through an integrated framework that combines material experimentation, systems-level policy analysis, and cultural innovation. Focusing on olefin block copolymer (OBC) filaments—engineered with semicrystalline polyethylene hard segments and elastomeric soft blocks—the study evaluates their mechanical behavior across a range of stitch-based textile geometries. Cyclic and postfatigue tensile testing reveals how formulation and structure shape energy dissipation and durability, informing design strategies for high-performance applications such as intra-vehicular spacesuits and wearable technologies. To understand the broader systems context, the thesis analyzes barriers to integrating recycled polyethylene (rPE) into textile supply chains, identifying economic, legal, institutional, technological, firm-level, and societal constraints. It proposes targeted strategies based on global policy trends, EU case studies, and a geospatial analysis of U.S. recycling infrastructure. Finally, the work explores how generative AI can revitalize traditional craft practices—such as bobbin lace—by co-creating patterns designed for both aesthetic and functional performance in new materials. Together, these efforts propose a model for advancing sustainable textile innovation that bridges material science, circular design, and policy transformation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fueling Conflict: A Global Dataset of Energy Protests</title>
<link href="https://hdl.handle.net/1721.1/162432" rel="alternate"/>
<author>
<name>Harrison, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/162432</id>
<updated>2025-08-22T03:06:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Fueling Conflict: A Global Dataset of Energy Protests
Harrison, Ethan
How do popular grievances about (the lack of) access to energy lead to political violence and instability? I use a mixed-methods approach to answer this question, based on a qualitative case study in Sri Lanka and a quantitative framework for tracking energy protests worldwide. Specifically, through an analysis of the 2022 Aragalaya protest movement in Sri Lanka, I elaborate on how breakdowns in a state's capacity to provide energy to its citizens can trigger civilian unrest. Building on this case study, as well as insights from the empirical literature on the drivers of instability related to energy access, I then pilot a machine learning (ML) framework to identify energy-related protest events in the Armed Conflict Location and Event Data (ACLED) dataset based on context-specific keywords, which results in the creation of the first global dataset on energy protests. This novel source of evidence, in turn, will open new avenues for research on the conflict-energy nexus, particularly on the impact of market shocks on civilian unrest and instability in low- and middle-income countries – a topic for which current empirical work is limited. I show how the ML framework I develop here can be used to enable continuous monitoring of protest activity related to energy access, as well as how the framework can be extended to other forms of political violence, offering a promising tool for peace-building initiatives across contexts. Therefore, such a framework could provide key evidence to support policymakers, practitioners, and researchers in the design of strategic policies that facilitate the provision of energy while mitigating the risk of conflict and instability worldwide, particularly in "energy-poor" countries.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Circular Lunar Economy: Incentivizing the Design of Multi-Purpose Reusable Lunar Landers and Rovers</title>
<link href="https://hdl.handle.net/1721.1/162431" rel="alternate"/>
<author>
<name>Khan, Nadia Rehman</name>
</author>
<id>https://hdl.handle.net/1721.1/162431</id>
<updated>2025-08-22T03:06:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards a Circular Lunar Economy: Incentivizing the Design of Multi-Purpose Reusable Lunar Landers and Rovers
Khan, Nadia Rehman
Both NASA and ESA have committed to establishing a lasting presence on the Moon by 2030. However, lunar surface debris has already exceeded 200,000 kg, prompting concerns about the environmental, operational, and economic viability of future missions. This thesis proposes that circular economy principles—particularly reusability, modularity, and interoperability—must be embedded in early mission architecture to reduce waste and improve system longevity. To evaluate these goals, this thesis introduces a novel decision-support framework, the Lunar Exploration Impact Assessment (LEIA), alongside a policy-informed set of Lunar Surface Sustainability Guidelines (LSSG). Both decision-support tools were designed to help mission designers and space policy stakeholders incentivize the design of resilient, reusable lunar landers and rovers. The LEIA framework was applied to two case studies: NASA JPL’s Endurance-A autonomous lunar sample return rover and ESA’s multi-purpose Argonaut lander, evaluating the sustainability of each spacecraft after the EOL/M phase of its mission. Scores were computed using a Multi Criteria Decision Analysis (MCDA) approach. Seven Impact Assessment Indicators (IAIs) were considered to assign a sustainability rating for each mission: cost-effectiveness, environmental impact, science value, redundancy, resilience, strategic value, and technological feasibility. The Endurance-A mission achieved a sustainability score of 66.4%, based on a post-primary-mission sample collection scenario, indicating moderate sustainability in categories such as cost-effectiveness (18.9%) and technological feasibility (12%). However, the environmental impact score was limited to 7.7%, due to the out-gassing and launch emissions associated with the SpaceX Starship lander. The rover’s redundancy and maintainability ratings also constrained the overall sustainability rating, highlighting a gap in the availability of tools suitable for EVA-based repairs on the lunar surface. Subsystems most at risk of degradation—mobility, thermal, and power—require enhanced design for long-term reuse scenarios. Each of these factors was made salient through the Argonaut case study, indicating that, in the short to medium term, lunar rovers and landers must be designed to be more resilient to the conditions of the lunar environment in order to prevent the accumulation of lunar surface debris. To supplement the LEIA framework, a set of policy recommendations was developed to address the lack of End of Life (EOL) procedures for managing lunar surface debris in the form of retired lunar missions. The guidelines detail how economic policy mechanisms adopted in circular economy systems could be leveraged to incentivize the design of sustainable lunar surface missions and operations.
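The MCDA aggregation behind these ratings can be pictured as a weighted sum over the seven indicators; in this sketch the weights and the 0-1 score convention are invented for illustration and do not reproduce the thesis's weighting:

# Illustrative MCDA aggregation over the seven Impact Assessment Indicators.
weights = {
    "cost_effectiveness": 0.20, "environmental_impact": 0.15,
    "science_value": 0.15, "redundancy": 0.10, "resilience": 0.15,
    "strategic_value": 0.10, "technological_feasibility": 0.15,
}

def sustainability_score(indicator_scores):
    # indicator_scores: dict of 0-1 scores per indicator; returns a percentage.
    return 100.0 * sum(weights[k] * v for k, v in indicator_scores.items())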
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beam Mechanism Failure in Multistory Steel Frame Structures</title>
<link href="https://hdl.handle.net/1721.1/162430" rel="alternate"/>
<author>
<name>Hashbarger, Brad</name>
</author>
<id>https://hdl.handle.net/1721.1/162430</id>
<updated>2025-08-22T03:06:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Beam Mechanism Failure in Multistory Steel Frame Structures
Hashbarger, Brad
Engineers must ensure that building structures are not in danger of collapse, so analyses always include safety factors that create redundant yet materially inefficient buildings. This has been common practice for most of structural history, but today, growing concerns for carbon emissions force designers to cut material usage while retaining the same level of safety. Designs typically take one of two approaches: an overall lighter structure or stiffening specific internal systems to encourage a load path. The problem with either of these options lies in progressive collapse in the event of structural damage. If one column is lost, stresses propagate until either equilibrium is reached or a larger collapse occurs. Progressive collapse remains a popular research area to identify specific vulnerabilities, often with numerical models for a visualization of each stress state and redundant capacity. Previous studies used analytical and experimental performance to observe the critical effects of losing an external versus internal column and the role of other components, such as joints, joists, and composite slabs, to carry additional loads. However, designs and analyses are bound by assumptions that govern model behavior. To understand the sensitivity and limits of these assumptions, this thesis predicts the performance of steel moment-frame structures of varying bay geometries, proposing deflection fields to inform modern practice in all phases of project development. Instead of numerical simulations, the process follows an analytical approach based on the fundamental methods of equilibrium and the conservation of work and energy. By designing sections for their elastic capacity, their operational performance is directly linked to their failure response. This suggests the dominance of design preferences in stability, even with changes in beam spans or floor loading. Results support an optimal span ratio for plasticity under two-way load distributions that favors bay geometry ratios (L1/L2) between 1 and 2 but varies based on failure locations and how many columns have been lost. This also emphasizes the weaknesses out of plane as span ratios range from 0.5 to 1. Project layouts can utilize the free strength provided by bay geometries as part of the structural design process. If large deflections or span lengths are expected, beam depth and section thickness should increase together to ensure beams utilize their full plastic capacity to achieve additional redundancy from catenary action. Overall, the thesis demonstrates that such considerations in the early design stage can enable steel structures to achieve greater safety with less material.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating Aboveground Biomass (AGB) Throughout the Pacific</title>
<link href="https://hdl.handle.net/1721.1/162428" rel="alternate"/>
<author>
<name>Domingo-Kameʻenui, Joy P.</name>
</author>
<id>https://hdl.handle.net/1721.1/162428</id>
<updated>2025-08-22T03:06:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Estimating Aboveground Biomass (AGB) Throughout the Pacific
Domingo-Kameʻenui, Joy P.
Aboveground biomass (AGB) is a significant carbon pool in forests, making AGB a good indicator of forest health and carbon storage. AGB has been studied on multiple scales, in which allometric equations were developed to find relationships between AGB and tree parameters. However, despite the presence of AGB studies for specific sites in the Pacific Islands, there is a lack of AGB comparative studies or data syntheses focused on the Pacific Islands as a whole. This study synthesized data on AGB, tree height H, land cover, and Pacific Island forest community to develop allometric equations using linear and polynomial regression models for trees in the Pacific based on H as the main parameter. This study found polynomial relationships between AGB and H for shrub and herbaceous covers. Specifically, AGB = 1.76 H^2 - 51.01 H + 346.53 for shrub cover (adjusted R^2 = 0.94, n = 39), and AGB = 1.11 H^2 - 81.97 H + 1167.20 for herbaceous cover (adjusted R^2 = 0.71, n = 79). However, future research and data collection would be necessary to develop allometric equations for tree cover and barren land cover. No significant correlation was found between AGB and H for the Pacific Island forest community. This study may help with forest management and conservation practices, along with carbon sequestration and storage practices in forests, in the Pacific Islands. This study may also contribute to Pacific-led climate change mitigation and adaptation methods and initiatives.
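The quadratic allometries above have the form AGB = a H^2 + b H + c and can be recovered with an ordinary polynomial least-squares fit; in this sketch the (H, AGB) pairs are synthetic points placed exactly on the reported shrub-cover curve, not the study's data:

import numpy as np

# Synthetic (H, AGB) samples lying on AGB = 1.76 H^2 - 51.01 H + 346.53.
H = np.array([1.0, 3.0, 5.0, 20.0, 25.0, 30.0])
AGB = np.array([297.28, 209.34, 135.48, 30.33, 171.28, 400.23])

# Degree-2 fit recovers the reported coefficients.
a, b, c = np.polyfit(H, AGB, deg=2)
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}")  # expect 1.76, -51.01, 346.53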
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI-powered Data Mining for the Development of Sustainable Concrete Materials</title>
<link href="https://hdl.handle.net/1721.1/162427" rel="alternate"/>
<author>
<name>Duan, Yifei</name>
</author>
<id>https://hdl.handle.net/1721.1/162427</id>
<updated>2025-08-22T03:06:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">AI-powered Data Mining for the Development of Sustainable Concrete Materials
Duan, Yifei
Data mining has become essential to contemporary industrial and scientific research, playing a pivotal role in uncovering insights from large-scale industrial datasets and literature collections. The sustainable transition of the concrete industry, a major contributor to global CO₂ emissions, demands both operational optimization and scientific innovation. This thesis presents comprehensive data mining frameworks for both industrial and literature source data to support the development of more sustainable concrete materials. Focusing on concrete manufacturing, we develop AI-powered methodologies tailored to real-world industrial data and complex scientific literature. For industrial data mining, we propose to incorporate interpretability and realistic engineering design scenarios to enhance the reliability of both predictive and prescriptive modeling of concrete mixes containing supplementary cementitious materials (SCMs). A domain-informed amortized Gaussian process and a shallow multi-layer perceptron (MLP) are shown to possess superior scientific consistency in predicting time-varied compressive strength and time-invariant slump and air content properties, respectively. The explainable surrogate property models are applied in mix design optimization under a variety of realistic scenarios considering different engineering design requirements and SCM costs and densities. The importance of the comprehensive property constraint set is demonstrated in comparison against a baseline using only the 28-day strength constraint, which results in unreasonable property values. The necessity of differentiating realistic scenarios is also highlighted by the differences among optimized mixes and their production costs and climate impacts. Higher design strength, higher design slump, lower design air content, higher SCM density, and higher SCM unit cost can drive up production costs. Though stratification patterns in the production costs of optimized mixes are observed across different scenarios, the mix-wise climate impacts are not clearly stratified, indicating that substantial emission reduction can be achieved without significantly increasing costs, regardless of the realistic scenario. For literature mining, a novel method that fine-tunes lightweight large language models (LLMs) (pythia-2.8B) with multichoice instructions is developed. With the multifaceted linguistic complexity of communication within the domain rendering the conventional named-entity-recognition approach infeasible, the new method achieves high information-inference accuracy in a time-, cost-, and computation-efficient manner, outperforming the GPT-3.5 in-context learning baseline by over 20%. A knowledge graph is constructed with the literature-mined data, offering insights to promote alternative material substitution strategies in concrete production, as the current commercial SCMs are not comprehensively sustainable in the longer term. Statistical summary and temporal trend analyses are adopted to provide both static and dynamic insights into the research landscape. Although SCMs have remained a research hotspot, the results reveal a systematic shift in recent studies from commercial SCMs to other materials. Geopolymer and fine aggregate studies have surged in the recent period, while clinker feedstock and filler studies have declined.
A node similarity metric is modified to develop a model-free link prediction algorithm, enhanced with random graph perturbation for robustness and uncertainty quantification. Through link prediction, the currently underexplored lime-pozzolan cement application emerges as a potentially promising future research direction.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metabolic Scaling Analysis of Building Energy Efficiency: A Case Study of Massachusetts Institute of Technology</title>
<link href="https://hdl.handle.net/1721.1/162426" rel="alternate"/>
<author>
<name>Hsu, Yu-Hsuan</name>
</author>
<id>https://hdl.handle.net/1721.1/162426</id>
<updated>2025-08-22T03:06:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Metabolic Scaling Analysis of Building Energy Efficiency: A Case Study of Massachusetts Institute of Technology
Hsu, Yu-Hsuan
The building sector plays a critical role in global energy consumption and carbon emissions, accounting for 21% of global GHG emissions (12 GtCO₂-eq) and 31% of global final energy demand (128.8 EJ) in 2019 (Cabeza et al. 2022). This reality underscores the urgent need to enhance energy efficiency within the sector. This research applies ecological metabolic scaling principles to building energy analysis, utilizing the Massachusetts Institute of Technology (MIT) campus as a case study. Analogous to biological systems, where an animal’s metabolic rate scales with the 3/4 power of its mass, our findings indicate that larger buildings, similar to larger organisms, are inherently more energy efficient.&#13;
Furthermore, an analysis of overall energy consumption at MIT from 2009 to 2020 reveals a steady decline, though not proportionally, as the scaling exponent fluctuated with a decreasing trend (&lt;3/4), indicating improved efficiency in larger buildings. However, the COVID-19 pandemic in 2020 acted as a major shock, disrupting this trend. This disruption was likely driven by operational and behavioral changes, including reduced occupancy, increased remote work, and adjustments to ventilation and heating systems to ensure health and safety. These shifts highlighted the system’s tendency to return to the baseline scaling exponent of 3/4, demonstrating regression to the mean and ultimately pushing efficiency back to its prior baseline level of 25%.&#13;
Additionally, the study includes case analyses of specific buildings on the MIT campus to provide deeper insight into comparative energy performance. While several guidelines for energy systems have been proposed, certain limitations remain. Future research should focus on expanding the dataset to help validate the applicability of these findings to other contexts while also accounting for variations in building types. Ultimately, this study aims to facilitate the development of more effective policies and innovations in building energy management.
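The scaling exponent here is simply the slope of a log-log regression of energy use on building size, analogous to fitting E = a * M^b in the biological case; a minimal sketch with synthetic numbers (not MIT data):

import numpy as np

# Synthetic (floor_area, annual_energy) pairs obeying E = 2 * A^0.75.
area = np.array([1e3, 5e3, 1e4, 5e4, 1e5])
energy = 2.0 * area ** 0.75

# The slope of the log-log fit recovers the scaling exponent b (0.75 here).
b, log_a = np.polyfit(np.log(area), np.log(energy), deg=1)
print(f"scaling exponent b = {b:.3f}")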
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling Carbon-Cement Supercapacitors for Energy Storage Use-Cases</title>
<link href="https://hdl.handle.net/1721.1/162424" rel="alternate"/>
<author>
<name>Grewal, Darshdeep</name>
</author>
<id>https://hdl.handle.net/1721.1/162424</id>
<updated>2025-08-22T03:06:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Scaling Carbon-Cement Supercapacitors for Energy Storage Use-Cases
Grewal, Darshdeep
The urgent global transition to renewable energy is constrained by the intermittent nature of solar and wind sources, highlighting the critical need for scalable energy storage solutions. This thesis presents a comprehensive investigation into the development of structurally integrated supercapacitors based on carbon-doped cement composites, known as EC3 cells. These multifunctional materials combine structural performance with electrochemical energy storage capabilities, enabling integration directly into civil infrastructure. The research focuses on three essential challenges for real-world deployment: (1) replacing laboratory acrylic casings with hydrophobic sealants compatible with cementitious systems, (2) quantifying and mitigating shrinkage and swelling in nanocarbon cement matrices under electrolyte exposure, and (3) identifying corrosion-resistant current collectors that maintain conductivity and mechanical durability under harsh conditions. Bitumen-based coatings were found to be promising sealants for moisture containment. Shrinkage studies are underway. Meanwhile, corrosion testing of various collector materials revealed that graphene sheets and stainless steel–reinforced graphitic papers offered optimal trade-offs between conductivity, corrosion resistance, and mechanical performance. The thesis concludes with two field-implementation design proposals—a vertical column and a vaulted arch—both of which leverage compression to improve electrochemical contact and stability. Altogether, this work establishes a foundational framework for embedding energy storage directly into the built environment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preserving Human Autonomy in AI-Mediated Negotiations</title>
<link href="https://hdl.handle.net/1721.1/162422" rel="alternate"/>
<author>
<name>Chen, J. Alvin</name>
</author>
<id>https://hdl.handle.net/1721.1/162422</id>
<updated>2025-08-22T03:05:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Preserving Human Autonomy in AI-Mediated Negotiations
Chen, J. Alvin
The rapid integration of generative artificial intelligence (AI) into negotiation and conflict resolution processes raises critical ethical concerns about the erosion of human autonomy, particularly when AI systems navigate irreconcilable “sacred” values (non-negotiable moral principles) alongside transactional “mundane” interests. This thesis investigates whether generative AI can be designed to recognize and respect important values and beliefs while preserving human agency in decision-making. Drawing on datasets from a repository of large language model (LLM) prompts tested in simulated negotiation scenarios, this study employs a mixed-methods approach to evaluating AI’s efficacy in balancing efficiency with ethical imperatives in negotiation. Quantitative metrics (enumerating the outcomes of two-party negotiations) are analyzed alongside qualitative assessments of values such as transparency and consent, drawn from Kantian ethical frameworks.&#13;
&#13;
My analysis reveals that while AI negotiating bots excel in trades across mundane, tradable interests, they struggle to navigate beliefs and values without oversimplifying moral reasoning or obscuring cultural considerations. These findings inform policy recommendations, including a call for human-in-the-loop validation and technical safeguards for protecting important values in efforts to incorporate AI assistance into negotiations. By bridging technical analysis and ethical theory, I hope this research contributes to improvements in designing autonomy-preserving AI systems for use in a range of negotiating settings, prioritizing human dignity alongside computational efficiency.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Effective System Architectures for Cislunar&#13;
Space Situational Awareness</title>
<link href="https://hdl.handle.net/1721.1/162417" rel="alternate"/>
<author>
<name>Rude, Connor D.</name>
</author>
<id>https://hdl.handle.net/1721.1/162417</id>
<updated>2025-08-22T03:06:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Characterizing Effective System Architectures for Cislunar&#13;
Space Situational Awareness
Rude, Connor D.
Achieving Space Situational Awareness (SSA) in the Cislunar region—the area between the geosynchronous belt and the Moon's gravitational boundary—poses significant technological and organizational challenges. Instead of proposing new theoretical systems, this thesis employs the Architecting Innovative Enterprise Strategy (ARIES) Framework to evaluate existing SSA architectures and previously suggested solutions. ARIES provides a structured assessment through its elements (strategy, information, infrastructure, products, services, processes, organizations, and knowledge), identifying infrastructure, acquisition strategies, policy-driven timelines, and communication structures as key areas for improvement. Stakeholder objectives, current initiatives, and operational needs guide the characterization of an ideal SSA architecture.&#13;
&#13;
Four prior system proposals for cislunar SSA are assessed using qualitative analysis of existing literature and first-order physics-based simulations. These evaluations correlate specific design features with enhanced system suitability. Particularly beneficial are constellation proximity to targets, strategic constellation placement and phasing, sensor orbital diversity, and orbital stability. Additionally, certain design strategies consistently yield higher suitability, including focusing on underserved SSA regions, leveraging heritage technology, and optimizing designs for ride-share launch compatibility.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LLM-Supported Natural Language to Bash Translation</title>
<link href="https://hdl.handle.net/1721.1/162415" rel="alternate"/>
<author>
<name>Westenfelder, Finnian Ellis</name>
</author>
<id>https://hdl.handle.net/1721.1/162415</id>
<updated>2025-08-22T03:05:59Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">LLM-Supported Natural Language to Bash Translation
Westenfelder, Finnian Ellis
The Bourne-Again Shell (Bash) command-line interface for Linux systems has complex syntax and requires extensive specialized knowledge. Using the natural language to Bash command (NL2SH) translation capabilities of large language models (LLMs) for command composition alleviates these issues. However, the NL2SH performance of LLMs is difficult to assess due to inaccurate test data and unreliable heuristics for determining the functional equivalence of Bash commands. We present a manually verified test dataset of 600 instruction-command pairs and a training dataset of 40,939 pairs, increasing the size of previous datasets by 441% and 135%, respectively. Further, we present a novel functional equivalence heuristic that combines command execution with LLM evaluation of command outputs. Our heuristic can determine the functional equivalence of two Bash commands with 95% confidence, a 16% increase over previous heuristics. Evaluation of popular LLMs using our test dataset and heuristic demonstrates that parsing, in-context learning, in-weight learning, and constrained decoding can improve NL2SH accuracy by up to 32%. Additionally, we consider military use cases for NL2SH models and discuss the limitations of current Department of Defense documentation standards for LLMs. We write and publish documentation for our models and datasets to promote safe use. Our findings emphasize the importance of dataset quality, execution-based evaluation, translation method, and proper documentation for advancing NL2SH translation and enabling responsible use. Our code is available at https://github.com/westenfelder/NL2SH.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Redefining “Space Sustainability” for Launch Vehicles: Forecasting the Atmospheric Impact of the Commercial Space Launch Industry in 2050</title>
<link href="https://hdl.handle.net/1721.1/162406" rel="alternate"/>
<author>
<name>Ma, Clara Z.</name>
</author>
<id>https://hdl.handle.net/1721.1/162406</id>
<updated>2025-08-22T03:06:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Redefining “Space Sustainability” for Launch Vehicles: Forecasting the Atmospheric Impact of the Commercial Space Launch Industry in 2050
Ma, Clara Z.
Discussions on “space sustainability” have largely centered on orbital debris, the burnup of vehicles during atmospheric reentry, and the resulting emissions. However, few studies have examined emissions from the launches themselves. Along with reentry burnup, rocket launches are the only source of high-altitude anthropogenic emissions. At such high altitudes, emitted particles can remain in circulation for years. With the annual growth rate of the commercial launch industry averaging 14.6% over the last four years and more than 211 launches in 2023 alone, our research on the atmospheric impact of launch vehicles comes at a crucial point in the policy debate on space sustainability.&#13;
&#13;
This thesis outlines several potential future scenarios of the launch industry in 2050, with all the vehicles in each scenario using the same fuel type. We examine these four launch scenarios—a kerosene (RP-1) launch industry, a methane (CH4) launch industry, a hydrogen (H2) launch industry, and a control or “baseline” scenario without launches. For each scenario, we estimate the number of launches for a distribution of heavy-lift launch vehicles across origin spaceports. We simulate the chemical interactions of the launch plumes with the atmosphere using the global atmospheric chemistry model GEOS-Chem High Performance (GCHP). Finally, we quantify the steady state impact of launch emissions on stratospheric ozone and surface air quality.&#13;
&#13;
We find that the black carbon emitted by kerosene and methane rockets causes an indirect increase in stratospheric ozone due to the removal of NOx, with ozone column change averaging 5.07 Dobson Units (DU) and 1.26 DU respectively; hydrogen rockets cause a net decrease in ozone column averaging -0.11 DU. The population-weighted average surface ozone impact is -0.286 ppb, -0.068 ppb, and 0.023 ppb for RP-1 rockets, CH4 rockets, and H2 rockets respectively. The population-weighted average surface PM2.5 impact is -0.031 μg/m3, -0.004 μg/m3, and 0.002 μg/m3 for RP-1, CH4, and H2 rockets respectively. Although RP-1 and CH4 rockets decrease surface ozone and surface PM2.5, H2 rockets have the smallest magnitude impacts on the atmosphere overall. Our findings have important implications for commercial launch providers, research institutions, and policymakers including the Federal Aviation Administration (FAA) and NASA.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI Trust and Technology Optimism in the Workforce: Data-Driven Insights into Regional Variation</title>
<link href="https://hdl.handle.net/1721.1/162404" rel="alternate"/>
<author>
<name>Velonia Bellonia, Maria Eleni</name>
</author>
<id>https://hdl.handle.net/1721.1/162404</id>
<updated>2025-08-22T03:05:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">AI Trust and Technology Optimism in the Workforce: Data-Driven Insights into Regional Variation
Velonia Bellonia, Maria Eleni
Automation and AI systems are reshaping the workplace. How these technologies make a difference varies according to local contexts. Workers’ willingness to trust and embrace these technologies is shaping how this transformation unfolds in practice. Some workers trust AI more than others, and interestingly, trust levels differ from one region to another. Drawing on a far-reaching 2024 worker survey spanning different countries, and on a rich body of literature on technology, trust, and change, this work examines how key factors influencing workers’ AI trust and technology optimism interweave, shaping their perspectives on new technologies and automation. The focus is on understanding how the industrial and regulatory landscape in which workers operate, combined with their personal experiences with AI, shapes their AI optimism, with a particular emphasis on the US and Europe. While external market innovation indicators provide limited understanding of workers’ technology optimism, individual interaction and familiarity with AI, alongside organizational AI adoption and a worker’s industry of employment, emerge as key factors shaping AI trust. Additionally, the regulatory environment, encompassing technology governance, social safety nets, and workers’ institutional trust, all seem connected with how workers think about the impact of new technologies on society, the economy, and their jobs. Interpersonal trust propensity contributes to AI trust formation, though its relevance exhibits regional variation. By offering insights into the critical factors shaping the relationship between workers and AI, this study aims to provide evidence that supports societies in unlocking the value of emerging technologies, while empowering the workforce to confidently embrace and excel alongside them.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Ridership and Travel Time Impacts of Bus Service Changes Using Sketch Planning Methods</title>
<link href="https://hdl.handle.net/1721.1/162329" rel="alternate"/>
<author>
<name>Lim, Tiffany M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162329</id>
<updated>2025-08-12T03:07:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Predicting Ridership and Travel Time Impacts of Bus Service Changes Using Sketch Planning Methods
Lim, Tiffany M.
Bus service changes range in scale, and understanding their impacts on ridership and travel times can inform decision-making as changes are considered for the bus network. Budgetary limitations are at the heart of service change decisions, resulting in the need for analysts to assess different scenarios and accommodate quick turnarounds. This thesis provides a sketch planning framework for predicting ridership and travel time impacts of bus service changes, with a focus on direct demand models and the use of an open-source multimodal routing algorithm. The framework is designed to be streamlined, drawing on data sources and capabilities that agencies may already have through existing transit planning tools, such as the ability to export a General Transit Feed Specification (GTFS) feed of a given bus network scenario.&#13;
&#13;
Direct demand models are developed to estimate bus ridership at the level of approximately one-mile route-segments and time-of-day periods. This level of analysis provides a more disaggregated evaluation of bus ridership than past direct demand models. The models are sensitive to both route and network improvements. New variables designed to capture the relationship between bus routes, including the competitive and complementary nature of routes, are introduced and incorporated in the model development process. These models are developed for the Washington Metropolitan Area Transit Authority (WMATA). A case study analyzing two scenarios in WMATA's Better Bus Network Redesign (BBNR) is presented, with selected route examples to illustrate how the models capture different types of service changes. These routes fall under three categories: routes with no major service changes, routes with improvements in frequency, and routes with re-routing and other improvements.&#13;
&#13;
An open-source multimodal routing algorithm, available through an R package called r5r, is used for travel time analysis. r5r calculates a distribution of door-to-door travel times for a given origin-destination (OD) matrix and returns a selected percentile value from the distribution for each OD pair. The percentile parameter is calibrated through a comparison of estimated travel times and actual travel times recorded in origin-destination-interchange inference (ODX) data. Low percentile values were found to provide travel times close to actual travel times. Additional guidance is provided for interpreting travel times from r5r, and use cases related to calculating travel time impacts between scenarios and evaluating rail competitiveness for a given bus network are explored.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensing and Predicting Urban Rail Platform Crowding Using Emerging Data Sources</title>
<link href="https://hdl.handle.net/1721.1/162328" rel="alternate"/>
<author>
<name>Fiorista, Riccardo</name>
</author>
<id>https://hdl.handle.net/1721.1/162328</id>
<updated>2025-08-12T03:07:28Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Sensing and Predicting Urban Rail Platform Crowding Using Emerging Data Sources
Fiorista, Riccardo
Rail platform crowding poses serious challenges to passenger safety, operational performance, and service quality in urban rail transit systems. This thesis investigates the short-term forecasting of platform-level crowding, focusing on enhancing prediction accuracy, spatial granularity, and operational interpretability through multi-source data integration. We first employ a gradient-boosted tree regression model (LightGBM) to leverage fare card transaction, vehicle location, weather, and public event data from the Washington Metropolitan Area Transit Authority (WMATA) to forecast platform-level occupancies 15–60 minutes ahead of time. Our results show significant improvements over a WMATA-internal baseline while providing a robust data preparation and prediction pipeline. Subsequently, we explore integrating platform-level CCTV data to overcome the lack of real-time crowding estimates. Using a custom-collected image dataset and three computer vision methods, namely object detection (YOLOv11, RT-DETRv2) and head counting (APGCC), crowd-level classification (Crowd-ViT), and semantic image segmentation (DeepLabV3), we demonstrate that estimated counts from calibrated image segmentation maps enable accurate real-time estimation of platform crowding. Additionally, we show that these estimates can correct and improve 15-minute horizon predictions when incorporated with a stochastic gradient-boosted tree learner such as LightGBMLSS. Finally, we extend the time series modeling framework by incorporating network-wide causal influences through an analysis driven by Empirical Dynamic Modeling and Convergent Cross Mapping. We show that accounting for network effects improves predictive performance, particularly for platforms characterized by regular low-occupancy patterns, improving the prediction of anomalies. The work presented in this thesis extends the existing literature on short-term platform crowding prediction, offering new methodologies to incorporate emerging CCTV data and causal network effects for increased prediction accuracy.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatial Thinking as an Analytical Lens for Bilateral International Development: Lessons from the Harbor Reconstruction Project in Jamestown, Accra</title>
<link href="https://hdl.handle.net/1721.1/162327" rel="alternate"/>
<author>
<name>Avis, Victoria</name>
</author>
<id>https://hdl.handle.net/1721.1/162327</id>
<updated>2025-08-12T03:06:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Spatial Thinking as an Analytical Lens for Bilateral International Development: Lessons from the Harbor Reconstruction Project in Jamestown, Accra
Avis, Victoria
This thesis examines transcalar tensions that emerge from urban infrastructure development projects funded through bilateral foreign assistance mechanisms. Using a mixed-methods case study approach to gather data from a wide variety of historical and contemporary primary and secondary sources, this research centers a harbor revitalization and port reconstruction project in Jamestown, a historic fishing community in Accra, Ghana. Having coordinated plans with the Ghanaian national government, a Chinese state-owned construction firm began working on the port in 2020. In 2024, the revitalized harbor and expanded port were officially handed over to the government of Ghana in a widely attended ceremony. The spatial implications of this physical urban infrastructure project across international, national, municipal, and local levels are complex and interrelated. Therefore, this case study is especially relevant at a historical moment when the nature of bilateral engagement may be undergoing significant transformation. &#13;
&#13;
This thesis argues that spatial thinking, a foundational concept in urban planning, is a necessary analytical lens to incorporate within international development practice. Despite its relevance, spatial thinking has not been meaningfully incorporated into international development policy or implementation. Therefore, this thesis seeks to bridge epistemic gaps between urban planning and international development by advancing a spatial thinking framework, adapted for use in international development contexts. In doing so, this thesis envisions a future for bilateral development assistance that delivers equitable and sustainable development outcomes across scales of engagement. This approach, rooted in spatial thinking, intends to respond to local community needs and aspirations, capacitate municipal governments, align with national priorities, and accommodate geopolitical dynamics that facilitate bilateral project implementation.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solid-State NMR Characterization of a PET Ligand's Binding Sites in AD Tau Fibrils</title>
<link href="https://hdl.handle.net/1721.1/162319" rel="alternate"/>
<author>
<name>Angehrn Rodas, Frida Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/162319</id>
<updated>2025-08-12T03:07:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Solid-State NMR Characterization of a PET Ligand Binding Sites in AD Tau Fibrils
Angehrn Rodas, Frida Nicole
Aggregation of the tau protein into fibrils is a key feature of Alzheimer's disease (AD) and many other neurodegenerative disorders. Developing small molecules that bind these tau fibrils is important for the diagnosis and treatment of tauopathies. This thesis presents a study of the binding sites of a positron emission tomography (PET) ligand, PI-2620, on a recombinant tau construct that adopts the C-shaped AD fold. Solid-state NMR experiments, combined with other techniques such as transmission electron microscopy (TEM) and docking simulations, enabled a better understanding of the binding sites of this PET agent. Specifically, 13C-19F REDOR experiments were used to identify residues near the ligand. PI-2620 was found to bind two primary sites within the C-shaped structure. The docking simulations suggested several possible binding poses. Additional 2D NMR experiments suggest that PI-2620 alters the protofilament interfaces. The stoichiometry of PI-2620 binding to tau fibrils was determined to be approximately 20 mol%, with varying degrees of ligand mobility. These findings offer insights into the interaction of this PET tracer with tau fibrils and have implications for the design of improved imaging agents.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Demand‑Driven Decarbonization: Impact of Voluntary 24/7 Low-Carbon Power Procurement</title>
<link href="https://hdl.handle.net/1721.1/162318" rel="alternate"/>
<author>
<name>Ali, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/162318</id>
<updated>2025-08-12T03:06:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Demand‑Driven Decarbonization: Impact of Voluntary 24/7 Low-Carbon Power Procurement
Ali, Adam
This thesis examines the impact of voluntary 24/7 (hourly) low-carbon power procurement on grid-wide emissions and investment strategies in generation technologies. Recognizing the growing number of businesses and government agencies making voluntary commitments to reduce greenhouse gas (GHG) emissions through increased procurement of low-carbon power, this study investigates the effectiveness of these commitments, particularly those aiming for hourly matching of low-carbon energy with consumption.&#13;
&#13;
This study employs GenX, an open-source capacity expansion model, to simulate an electricity market with two classes of buyers. Buyers in one class commit to reduce the carbon intensity of their electricity procurement by some amount, while buyers in the other class procure electricity at minimum cost without any regard to carbon emissions. This setup allows for a detailed examination of how different levels of ambition in voluntary hourly low-carbon commitments influence the electricity system and investment strategies. The study tests both a simpler model without storage and demand-response capabilities and a more complex model that incorporates these elements to assess their impact on meeting hourly clean energy targets.&#13;
&#13;
Our findings suggest that at low to moderate ambition levels of hourly low-carbon electricity procurement, the buyers with voluntary commitments can primarily "reshuffle" already-built low-carbon generation without incentivizing new clean capacity additions or achieving measurable reductions in system-wide emissions. Significant shifts in generation investments and decreases in total carbon emissions are observed only when commitments exceed a critical threshold, ranging from approximately 70% to 96% depending on system characteristics, as reflected in the different model set-ups. Even then, cost-minimizing behavior in voluntary procurement can distort investment, spurring excessive wind and solar builds that exceed what a least‑cost, socially-optimal zero‑carbon portfolio would require.&#13;
&#13;
In conclusion, for voluntary 24/7 procurement to cut emissions materially—and avoid misallocating capital—either ambition must be extremely high or participation must broaden enough to share costs and benefits. Otherwise, committed buyers bear steep costs, non‑participants enjoy spill‑over gains, and the system drifts toward a sub‑optimal technology mix.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clarifying Decision Making Processes: Tools for Interdependency Modeling</title>
<link href="https://hdl.handle.net/1721.1/162317" rel="alternate"/>
<author>
<name>Baker, Ellie F.</name>
</author>
<id>https://hdl.handle.net/1721.1/162317</id>
<updated>2025-08-12T03:07:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Clarifying Decision Making Processes: Tools for Interdependency Modeling
Baker, Ellie F.
Tools for problem specification in AI decision making are underdeveloped at present. I propose two new tools for this purpose: first, a model of AI decision making that supports problem identification and mitigation; second, a Bill of Assumptions for Data Production. Data is an important component of AI decision-making systems, and data is necessarily produced by making a series of assumptions. My Bill of Assumptions for Data Production is a new approach to communicating these assumptions that facilitates collaboration, data transparency, and reduction of harmful bias. I illustrate this new approach by developing a dataset that estimates the distribution of Government education spending in the US across income deciles. My dataset informs existing Distributional National Accounts (DINA), which are a primary measure of income inequality in the US (Piketty et al., 2018). My estimate shows Government education spending is more progressive than assumed in current DINA. Furthermore, I show that removing federal education funding to postsecondary institutions would produce substantial harm.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transforming geospatial textual data into narrative storytelling visualization</title>
<link href="https://hdl.handle.net/1721.1/162315" rel="alternate"/>
<author>
<name>Ma, Ruixian</name>
</author>
<id>https://hdl.handle.net/1721.1/162315</id>
<updated>2025-08-12T03:06:57Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Transforming geospatial textual data into narrative storytelling visualization
Ma, Ruixian
Current large language models (LLMs) often struggle to integrate geospatial data into dynamic, interactive visualizations, relying instead on text-based outputs. This limitation hinders the full potential of geospatial data to convey complex information through narrative-driven communication, making it difficult for users to interpret the data easily. Meanwhile, existing data visualization tools typically depend on static dashboards and rigid scientific formats, which have a steep learning curve and lack engagement through narrative elements. Audiences, however, are increasingly drawn to story-driven presentations, as seen in platforms pioneered by the MIT Senseable City Lab, and widely popularized by The New York Times and the Washington Post, which use narrative data visualization formats to attract and immerse readers. This gap between the capabilities of current LLM-based tools and users’ preferences presents a unique opportunity to develop a narrative-based geospatial visualization tool that meets these needs. This tool could transform how we communicate spatial data, particularly in fields such as journalism, travel planning, and urban planning, where the ability to convey complex patterns in an engaging manner is essential.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying the Post-Pandemic Urban Activity &amp; Mobility Regime: Implications for Adaptation and Future Planning of Cities and Public Transit Systems</title>
<link href="https://hdl.handle.net/1721.1/162314" rel="alternate"/>
<author>
<name>Leong, Chee Weng Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/162314</id>
<updated>2025-08-12T03:06:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Quantifying the Post-Pandemic Urban Activity &amp; Mobility Regime: Implications for Adaptation and Future Planning of Cities and Public Transit Systems
Leong, Chee Weng Michael
Between 2019 and 2022, a pattern break during the COVID-19 pandemic introduced consequential changes to the trajectory of urban activity and mobility patterns. This thesis advances both theoretical and practical understandings of this evolving post-pandemic regime of activity and mobility, as well as its implications for the future of cities and public transit systems, using high-resolution location-based services data and a case study within the Washington, DC metropolitan area. First, a custom analysis framework is developed where geographical units - subcenters and neighborhoods - are designed to provide insight at an interpretable scale that corresponds to policy and business decision making. Second, a custom suite of twelve mobility metrics is curated to distill the applicability of post-pandemic changes in travel patterns to business problems (site selection, network planning, and operations planning) and societal outcomes (social fabric, quality of life, and environmental sustainability). To complement spatial analysis, these metrics are also regressed on socio-economic attributes to provide greater explanatory power. Lastly, key trends in post-pandemic activity and mobility are distilled into eight mega-trends, and their implications for the adaptation of public transportation systems and future urban development are discussed, including complexity from divergent definitions of success among different stakeholders.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Small Stores, Big Obstacles: Understanding Constraints and Opportunities for Micro-retail Firms</title>
<link href="https://hdl.handle.net/1721.1/162313" rel="alternate"/>
<author>
<name>Cervantes Gil, Sergio Yael</name>
</author>
<id>https://hdl.handle.net/1721.1/162313</id>
<updated>2025-08-12T03:06:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Small Stores, Big Obstacles: Understanding Constraints and Opportunities for Micro-retail Firms
Cervantes Gil, Sergio Yael
Micro and small enterprises (MSEs), particularly informal micro-retailers known as nanostores, play a vital role in developing economies but remain largely underserved by traditional financial institutions and overlooked in economic policy. In Mexico, nanostores account for more than 95% of businesses and over 10% of national employment, yet face high closure rates, low productivity, and limited access to formal credit. This thesis asks: What structural and contextual factors determine the survival and performance of nanostores, and how can policy better support high-potential firms within this segment? To answer this, the study constructs a longitudinal panel of nanostores using microdata from the Mexican Economic Census (2009, 2014, and 2019), and combines it with municipality-level contextual data including crime, infrastructure, unemployment, electricity costs, and business regulations. It applies survival models to estimate firm closure dynamics and implements a misallocation framework to quantify distortions in capital and labor usage. The results reveal that misallocation—particularly of capital—is pervasive and systematically linked to institutional weaknesses and credit access constraints. In response to the limited real-time data available for this sector, the thesis proposes the LIFT Performance Index, developed by the MIT Low-Income Firms Transformation Lab (MIT LIFT Lab), as a diffusion-based tool for monitoring micro-retailers’ business sentiments using structured operational surveys. A pilot implementation in Argentina demonstrates the index’s potential to generate timely and actionable insights for policymakers and private stakeholders. Overall, this work contributes a novel empirical foundation for understanding heterogeneity within the micro-retail sector and offers a scalable framework for designing targeted, data-driven interventions to support inclusive economic development.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Maximizing Flexibility and Efficacy of Undersea Wireless Power Transfer Systems</title>
<link href="https://hdl.handle.net/1721.1/162309" rel="alternate"/>
<author>
<name>Gess, Derek</name>
</author>
<id>https://hdl.handle.net/1721.1/162309</id>
<updated>2025-08-12T03:06:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Maximizing Flexibility and Efficacy of Undersea Wireless Power Transfer Systems
Gess, Derek
Autonomous underwater vehicles (AUVs) are an increasingly essential tool for ocean-based applications, whether scientific, economic, or military. To advance the capabilities of AUVs, it is crucial to extend the duration and range of their missions. One proposed way to achieve this is with remote undersea wireless power transfer (WPT) systems to allow AUV charging from remote areas of the ocean floor. While there has been significant research in WPT system design, these projects often tailor the design specifications towards a specific AUV shape, size, or power requirement. These point designs have wildly different power outputs, efficiencies, coupling coefficients, sizes, and more, making it difficult to understand how the design parameters affect each of these properties. This paper aims to address this knowledge gap in current undersea WPT systems by designing an equivalent circuit framework for a WPT system with a targeted power output of ~1 kW to show how design parameters such as input voltage, coil size, transfer gap, coupling coefficient, and load resistance affect the power output and efficiency of the charger. Furthermore, the effects of misalignment in vertical and lateral directions for two separate compensation networks – series-series (SS) and series-parallel (SP) – are compared to determine which compensation network would perform best under specified circumstances. The paper then addresses the losses associated with a conductive environment by coupling the circuit model with an electric field model in seawater. The impact of undersea losses on system metrics is quantified, showing a 3% decrease in efficiency as compared to operation in air. Finally, the study investigates the use of magnetic cores in WPT systems for EM shielding and field-shaping characteristics. A design methodology is introduced to rank material properties based on the desired system performance characteristics. Suggested materials are then chosen according to this ranking and tested using the models derived in the study. By mapping both electrical and magnetic-core design spaces in a conductive seawater environment, this thesis delivers a unified methodology for designing scalable, efficient undersea wireless chargers.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>City as Seed: The Urban Resonance Field and the Case for Sonic Awareness in Ecological Renewal</title>
<link href="https://hdl.handle.net/1721.1/162306" rel="alternate"/>
<author>
<name>Navarro, Cadine</name>
</author>
<id>https://hdl.handle.net/1721.1/162306</id>
<updated>2025-08-12T03:06:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">City as Seed: The Urban Resonance Field and the Case for Sonic Awareness in Ecological Renewal
Navarro, Cadine
Seeds, the “abominable mystery” (Darwin, 1897), hold our past and potential future. They also hold sound. Much like cities, they are sites of growth, transformation, and resilience. This thesis draws parallels between laboratory research on the sensing capacities of seeds and embodied experiences of sensing within urban landscapes, exploring how living systems interact with sound and vibration. Through both scientific and poetic approaches, it examines how seeds respond to sonic environments and how this sensitivity can inform human engagement with acoustics in the urban context. The investigation of intangible forces, vibration, resonance, and sound reveals a shared responsiveness between seeds and cities, documented through graphs, sound spectra, and reflective narratives that bridge science and art. Focusing on sound as a strategic lens, this work brings attention to often-overlooked sensory domains, inspiring a more ecologically and socially responsive urbanism. Ultimately, it advocates for practices of deeper listening as a method to engage openly and imaginatively with human and nonhuman worlds, and to reimagine urban environments as spaces of attunement, dialogue, and co-existence.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Critical Vulnerabilities of AI in Latin America</title>
<link href="https://hdl.handle.net/1721.1/162299" rel="alternate"/>
<author>
<name>Dobles Camargo, Claudia</name>
</author>
<id>https://hdl.handle.net/1721.1/162299</id>
<updated>2025-08-12T03:06:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Critical Vulnerabilities of AI in Latin America
Dobles Camargo, Claudia
Artificial Intelligence (AI) is rapidly reshaping societies, economies, and governance systems worldwide. While it offers tools for addressing critical challenges—such as climate change, health care, and educational inequity—it also risks deepening historical inequalities, undermining democratic institutions, and exacerbating global technological dependencies if not ethically governed. Latin America faces unique vulnerabilities in the development and use of AI that remain underexplored in existing scholarship, such as informal data work or the territorial principle and its implications for AI law enforcement. This study investigates AI's critical vulnerabilities within the Latin American context in order to inform regional and national policies that advance an inclusive, strategic, and ethical approach to developing and deploying AI systems in Latin America. The study addresses this question through a cross-analysis and comparative case study of six countries (Brazil, Chile, México, Costa Rica, El Salvador, and Honduras), drawing on existing and recent global and regional benchmarks, including the Stanford HAI AI Index (2024), UNESCO’s Recommendation on the Ethics of AI (2021), and the Latin American AI Index (ILIA 2024). The countries were selected based on a broad range of AI readiness levels, focusing on mapping institutional, regulatory, and socio-political contexts as well as metrics and input from relevant sources. The analysis shows structural inequality as the core vulnerability shaping AI’s impact in Latin America, alongside governance gaps, limited regional cooperation, and minimal public participation. The analysis identifies ten critical vulnerabilities—including the use of AI in surveillance, increase in inequality, increase in disinformation, AI use in organized crime, and environmental exploitation—that, if unaddressed, may accelerate democratic erosion and technological dependency. Ethical principles are shown to be deeply interconnected and grounded in human rights, yet their implementation remains aspirational. This research underscores a call for action toward regional coordination, inclusive education strategies prioritizing gender policies and rural areas, and aligned industrial policies in the countries of the region. A Latin American context-specific, collective approach ensures that AI serves the public interest, strengthens sovereignty, and supports equitable development in Latin America.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurements on contact potentials of metals</title>
<link href="https://hdl.handle.net/1721.1/162239" rel="alternate"/>
<author>
<name>Zisman, William A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162239</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1928-01-01T00:00:00Z</published>
<summary type="text">Measurements on contact potentials of metals
Zisman, William A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1928; Includes bibliographical references (leaves 59-60).
</summary>
<dc:date>1928-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A department store for the Hudson Bay Company, Edmonton, Alberta, Canada</title>
<link href="https://hdl.handle.net/1721.1/162238" rel="alternate"/>
<author>
<name>Thrift, Eric W.</name>
</author>
<id>https://hdl.handle.net/1721.1/162238</id>
<updated>2025-08-07T03:07:29Z</updated>
<published>1938-01-01T00:00:00Z</published>
<summary type="text">A department store fore the Hudson Bay Company Edmundton, Alberta, Canada
Thrift, Eric W.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1938
</summary>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Community Retrofit Trust: Incentivizing Deep Energy Retrofits in Massachusetts' Triple Deckers</title>
<link href="https://hdl.handle.net/1721.1/162155" rel="alternate"/>
<author>
<name>Chuttani, Milan</name>
</author>
<id>https://hdl.handle.net/1721.1/162155</id>
<updated>2025-07-30T03:06:43Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Community Retrofit Trust: Incentivizing Deep Energy Retrofits in Massachusetts' Triple Deckers
Chuttani, Milan
To meet its 2050 net-zero carbon emissions goals, Massachusetts must rapidly retrofit its aging stock of three-story multi-family homes, also known as “Triple Deckers.” However, high upfront capital costs, disparities between subsidized gas and electric energy rates, complex eligibility criteria, and misaligned incentives for landlords and renters constrain the widespread adoption of deep energy retrofits (DERs) in small multi-family homes. &#13;
&#13;
Drawing on energy democracy and reparative planning theory, this thesis reframes Triple Decker retrofits as a pathway to social and spatial transformation that empowers residents through cooperative participatory processes. This project proposes a practical framework for a “Community Retrofit Trust” which uses systems of distributed energy savings, community ownership of DER assets, and cooperative governance to ensure tenants, building owners, and neighbors in environmental justice communities share benefits from DERs while maintaining rental affordability. A proposed values-based decision-making process also helps community cooperatives adapt the Retrofit Trust’s framework to their unique social contexts.&#13;
&#13;
Descriptive case studies of two community solar initiatives illustrate how cooperative approaches that build trust, bundle projects and local expertise, and expand opportunities for participation can efficiently distribute energy benefits across a community while increasing investment and lowering costs. A feasibility analysis of a Community Retrofit Trust in Boston examines the strengths, challenges, and contradictions of incentivizing Triple Decker DERs through a cooperative approach.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rebuilding Civic Infrastructure for Equitable Development: Intermediary Solutions for Transforming Resource-Extractive Economies in Rural Southwest Arkansas</title>
<link href="https://hdl.handle.net/1721.1/162154" rel="alternate"/>
<author>
<name>Bradford, Mo</name>
</author>
<id>https://hdl.handle.net/1721.1/162154</id>
<updated>2025-07-30T03:08:32Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Rebuilding Civic Infrastructure for Equitable Development: Intermediary Solutions for Transforming Resource-Extractive Economies in Rural Southwest Arkansas
Bradford, Mo
Southwest Arkansas, a rural and mineral-rich region, is entering a new wave of resource-driven economic activity fueled by lithium extraction. While local leaders are pushing for rapid industry development to counter long-standing socioeconomic decline, this research asks a critical question: Can these pro-industry strategies truly deliver equitable and lasting public benefits, or will they repeat historical patterns of extraction that have sidelined local communities?&#13;
This study critiques neoliberal development schemes and neoconservative, sectionalist ideologies that deprioritize equity-driven agendas and prioritize deregulation and private sector efficiency, arguing that such approaches often weaken institutional civic organizing and reduce responsiveness to public needs. As an alternative, it proposes civic infrastructure as a strategic solution, one that strengthens the networks of community institutions, local governments, and intermediary organizations essential for advancing equity in extractive economies.&#13;
The research further explores the role of intermediary organizations in bridging institutional and capacity gaps in Southwest Arkansas. These organizations can support under-resourced communities by providing convening power, technical assistance, and financial resources. &#13;
Through policy analysis, case studies, and field interviews, this work examines how civic infrastructure and intermediary support can work together to shift economic development toward more just and inclusive outcomes in resource-extractive economies.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When Girls Just Wanna Have Fun, How Do They Go? A Mixed Methods Study of Nighttime Leisure Travel in Boston</title>
<link href="https://hdl.handle.net/1721.1/162153" rel="alternate"/>
<author>
<name>Dy, Raelene Ina Bianchi Louise Mendez</name>
</author>
<id>https://hdl.handle.net/1721.1/162153</id>
<updated>2025-07-30T03:08:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">When Girls Just Wanna Have Fun, How Do They Go? A Mixed Methods Study of Nighttime Leisure Travel in Boston
Dy, Raelene Ina Bianchi Louise Mendez
When we think of urban living and its depictions in popular culture, many shows and movies depict characters in leisure activities, such as meeting friends, going on dates or pursuing hobbies, often at night. Despite the prominence of the night as a key theme in depictions of urban leisure, transportation planners have rarely focused on nighttime leisure travel as an area of intensive study beyond the lens of safety. This thesis investigates the nighttime leisure travel patterns of residents and students in Greater Boston through statistical analysis and data sculpture with a focus on how these vary by gender. To create a baseline understanding of travel patterns, I focused on the Boston Metropolitan Area and used the most recent version of the Massachusetts Department of Transportation’s Household Travel Survey from 2011. I limited my analysis to a fixed set of leisure activities during a fixed nighttime period to understand associated travel behaviors. I also implemented a data sculpture method to investigate how a subset of MIT students made decisions around their travel modes. I found that women travelled differently from men, in that they spent more time walking and were more likely to be passengers in a car. In contrast, men were more likely to be behind the wheel and travel further. Both men and women showed a preference for walking over all other modes when leaving an activity.  Together, these findings indicate that nighttime leisure travel is not a simple extension of daytime patterns. To better design nighttime transportation that accommodates gender differences, planners need to respond to the special qualities of the city after dark.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Economic Reevaluation of Navi Mumbai and the Indian Satellite City</title>
<link href="https://hdl.handle.net/1721.1/162150" rel="alternate"/>
<author>
<name>Thomas, Archer</name>
</author>
<id>https://hdl.handle.net/1721.1/162150</id>
<updated>2025-07-30T03:08:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">An Economic Reevaluation of Navi Mumbai and the Indian Satellite City
Thomas, Archer
Navi Mumbai, a municipality in the Mumbai Metropolitan Region, is the largest satellite city project in India. Nevertheless, it has been seen within the planning discipline as underperforming its original ambitions. Drawing upon the goals enumerated in the city’s original development plan, this thesis proposes a series of quantitative metrics corresponding to said goals and then utilizes data drawn from surveys, censuses, official reports, financial statements, and remote sensing datasets to propose an updated evaluation of Navi Mumbai’s performance over the past half-century. This thesis argues that, contrary to earlier perceptions, Navi Mumbai has largely succeeded in fulfilling its ambitions, and that this can be attributed to shifting suburbanization patterns in India, the prescient decision to prioritize office-based service industries over manufacturing, and the ongoing reconfiguration of transportation and logistics networks within the Mumbai region. Reflecting on the history of urban and economic planning in India, this thesis then suggests the implications of Navi Mumbai’s apparent success for satellite city projects in India and across the Global South, focusing on questions of financing and governance.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Private Sector in Public Transit: Evaluating Early US Experience in P3s</title>
<link href="https://hdl.handle.net/1721.1/162149" rel="alternate"/>
<author>
<name>Farabow, Web</name>
</author>
<id>https://hdl.handle.net/1721.1/162149</id>
<updated>2025-07-30T03:08:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Private Sector in Public Transit: Evaluating Early US Experience in P3s
Farabow, Web
Problems in US public transit are well documented: transit providers struggle to develop new infrastructure, face high project costs and long implementation timelines, pursue designs that prioritize ease of delivery over value to the public, and struggle to sustain their operations. In response to these challenges, Public-Private Partnerships (“P3s” or “PPPs”) have been promoted as a way to deliver more infrastructure on faster timelines at lower cost and higher quality. As P3s have been increasingly considered for major transit projects, this thesis investigates their ability to deliver on promotional claims, and their ability to address key challenges in American public transportation.&#13;
&#13;
First, the thesis contextualizes contemporary P3s within a history of private sector involvement in US public transit. In addition to detailing how existing infrastructure came to be, this history intends to sharpen an understanding of contemporary P3s by considering how forms of private involvement have changed over time. It proceeds to develop detailed case studies for three major infrastructure projects that have proceeded under a P3 model: RTD’s Eagle P3 in Denver, Maryland MTA’s Purple Line in Southern Maryland, and Los Angeles Metro’s Sepulveda Transit Corridor Project. Combining historic research and contemporary case study analysis, the thesis seeks to understand the circumstances under which contemporary P3s have emerged, and to draw lessons from early experience.&#13;
&#13;
American transit providers have considered P3s for a variety of reasons, but have been primarily motivated by limited administrative and financial capacity, and by a perceived ability of private firms to deliver projects on faster timelines. Early P3s have facilitated provision, enabling projects that otherwise may not have been built, and have demonstrated their potential to ensure sustainable operations over long-term contract periods. But P3s have achieved mixed results in accelerating project timelines, and their ability to reduce lifecycle project costs remains unclear. While P3s seek to increase private involvement in transit provision, the model places a higher burden on upfront public planning compared to conventional delivery strategies. Public infrastructure owners can design P3s to leverage private sector resources and capacity, but the model comes with tradeoffs that should be carefully weighed against likely benefits. Ultimately, P3s can address a number of acute challenges in American public transit, but are unlikely to provide a workaround to fundamental political and financial challenges that limit transit development more broadly.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reparative Preservation through Immersive 3D Documentation: Cultural Memory, Spatial Justice, and Gullah Geechee Futures on Daufuskie Island</title>
<link href="https://hdl.handle.net/1721.1/162147" rel="alternate"/>
<author>
<name>Jones, Wil</name>
</author>
<id>https://hdl.handle.net/1721.1/162147</id>
<updated>2025-07-30T03:08:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Reparative Preservation through Immersive 3D Documentation: Cultural Memory, Spatial Justice, and Gullah Geechee Futures on Daufuskie Island
Jones, Wil
This thesis advances a reparative framework for cultural preservation by combining immersive documentation with co-authored digital storytelling to support Black spatial memory and community sovereignty. Grounded in fieldwork on Daufuskie Island, South Carolina—a historic Gullah Geechee community confronting dispossession and cultural enclosure—the project co-creates Daufuskie3D (https://daufuskie3d.org/), an interactive website that presents annotated 3D scans, oral histories, ambient videos, and symbolic interface design rooted in Gullah epistemologies.&#13;
&#13;
It is guided by two research questions: How can immersive documentation support reparative preservation for communities at risk of spatial erasure? And what frameworks—technical, ethical, and political—ensure digital practices reflect Black cultural values, descendant authorship, and community control? Drawing from Black geographies, wake work, vernacular cartography, and speculative design, the thesis introduces a conceptual distinction between visualization and analysis tools to examine how different modes of spatial capture shape visibility and authority. The project finds that immersive tools, when grounded in ethical design and descendant authorship, can function not simply as representational media but as reparative infrastructure—supporting visibility, stewardship, and spatial return in communities confronting erasure.&#13;
&#13;
The Daufuskie3D website serves as both platform and method. Its spatial interface draws on Gullah visual language, including Underground Railroad quilt codes and spiritual symbolism, while its non-linear navigation resists conventional heritage taxonomies. Rather than flattening culture into content, the site embraces ambiguity, withheld spatial detail, and narrative restraint as ethical design principles. Developed in partnership with Ms. Sallie Ann Robinson, a sixth-generation Gullah cultural steward, the project repositions preservation as participatory, situated, and future-facing. It offers Daufuskie3D as both a working prototype and a methodological contribution toward reparative immersive practice—centering digital preservation as a strategy of memory, sovereignty, and cultural regeneration within the Black diaspora.&#13;
&#13;
Keywords: Immersive Documentation, 3D Scanning / LiDAR / Photogrammetry, Cultural Preservation, Gullah Geechee, Daufuskie Island, Reparative Preservation, Black Geographies, Digital Heritage, Speculative Design, Counter Cartography, Counterpublic, Spatial Justice, Oral History, Afrofuturism, Digital Public, Digital/Web Archive, Cultural Stewardship, Ethical Design, Participatory Design, Underground Railroad, Return
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward a political economy of the power sector: green capitalism, eco-socialism, and co-operative power in decarbonized climate policy</title>
<link href="https://hdl.handle.net/1721.1/162146" rel="alternate"/>
<author>
<name>Jin, Brooke</name>
</author>
<id>https://hdl.handle.net/1721.1/162146</id>
<updated>2025-07-30T03:08:17Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Toward a political economy of the power sector: green capitalism, eco-socialism, and co-operative power in decarbonized climate policy
Jin, Brooke
The political economy of the power sector has been characterized by a putative transition from fossil capitalism to green capitalism in an attempt to mitigate the worst effects of anthropogenic climate change on nature and society. In recent years the rise of green industrial policy, such as the passage of the Inflation Reduction Act of 2022, has sought to stimulate domestic economic development of green-technology projects and implement protectionist trade policies with the normative intent of protecting the geopolitical hegemony of U.S. industry. Yet the objectives of such industrial policies, which function less to reduce carbon emissions than to increase resource- and carbon-intensive consumption patterns, run counter to putative state objectives of the decarbonization of the power grid and industrial operations, and in fact green capitalism does not exist without the continued influence of fossil capital.&#13;
In this thesis I look to Marxist theories of the state, capital, labor, and nature to illustrate the crises of capitalism that have been occurring due to the exponential increase in power demand by data centers and large technology companies. As this increase in demand, called load growth, reshapes the governance of power markets, electricity generation, and transmission and distribution infrastructure, I show the illusion of sustainability under a green-capitalist political economy that purports to advance decarbonization goals yet in actuality facilitates conditions for the centralization and monopolization of private capital, as well as the continued destruction of nature and exploitation of workers. However, this crisis of load growth and the issue of governance that it raises open a window for experimentation into new state systems, socialized modes of production, and labor and environmental solidarity in the creation of a new climate policy: one that prioritizes equity, welfare, ecological preservation, and a truly decarbonized society. I propose a socialization of the power sector to increase communities' autonomy over their energy needs and to begin to dismantle the technocratic influence of fossil-fuel and large technology companies over electricity generation and access.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonization at the Neighborhood Scale: &#13;
Challenges, Learnings and Opportunities in an Emerging Model</title>
<link href="https://hdl.handle.net/1721.1/162145" rel="alternate"/>
<author>
<name>Cina-Sklar, Zoë</name>
</author>
<id>https://hdl.handle.net/1721.1/162145</id>
<updated>2025-07-30T03:08:14Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decarbonization at the Neighborhood Scale: &#13;
Challenges, Learnings and Opportunities in an Emerging Model
Cina-Sklar, Zoë
Decarbonizing residential buildings in the United States is critical for reaching climate goals and has significant public health and energy justice benefits if accessible to all. To date, building electrification has been individual-level and market-driven, with some financial incentives at the state and federal level. This model is generally inaccessible to low-income homeowners and renters who are unable to afford the upfront costs of building improvements and new electric appliances. Neighborhood-scale building decarbonization has been proposed as an alternative in which new developments would be built all-electric or existing buildings would be electrified at the block or neighborhood scale. In the latter use case, neighborhood-scale building decarbonization is often tied explicitly to decommissioning gas lines. Specifically, proponents posit that these projects could be funded through avoided gas line repair and replacement costs. Investor-owned utilities are seen by some experts in the space as key to the success of neighborhood-scale building decarbonization because of their financing capabilities and existing role in providing heating and/or electric service to customers. In recent years, a number of state policymakers have passed legislation approving utility-funded neighborhood-scale building decarbonization and state utility commissions have promulgated regulations approving cost recovery for these projects. Utilizing desk research and informant interviews, this paper analyzes what has enabled and hindered existing utility-funded neighborhood-scale building decarbonization pilot projects in California, Massachusetts, and New York. I diagnose strong and specific climate goals, the passage of enabling legislation, an engaged state utility commission, and strong advocacy ecosystems as key factors for initiating neighborhood-scale pilot projects. Through informant interviews, I identify costs, financing, community buy-in, and planning as central determinants for the success of pilot projects and the future of the model. I close by offering recommendations and outstanding research areas for planners interested in pursuing future neighborhood-scale building decarbonization projects.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Can planning, a tool for colonization, be decolonized?&#13;
MIT’s funding at the expense of Indigenous Peoples through the Morrill Act</title>
<link href="https://hdl.handle.net/1721.1/162144" rel="alternate"/>
<author>
<name>Barrera Gonzalez, Devora</name>
</author>
<id>https://hdl.handle.net/1721.1/162144</id>
<updated>2025-07-30T03:08:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Can planning, a tool for colonization, be decolonized?&#13;
MIT’s funding at the expense of Indigenous Peoples through the Morrill Act
Barrera Gonzalez, Devora
This thesis questions whether planning and the activities under the profession’s umbrella are beneficial or harmful. The project analyzes the role of planning in the colonization of Turtle Island, materializing and legitimizing the seizure of Indigenous Land through planning practices like urbanization, enclosure, and the creation of Indian reservations, and through tools like cartography, lawfare, and landscape architecture and design. I argue that there is no such thing as sustainable or beneficial urbanization because urbanization equals death; that planning is inherently harmful because it was born as a tool of colonization; and that there is no way to decolonize the profession, given that the profession upholds the current land system. The only solution to reverse and undo the harm done by planning and urbanization is to give Land Back to Indigenous Peoples. To build this argument, I walk through the narrative built to dispossess land, the concept of imaginary geography, the different ways planning enabled and legitimized land dispossession, and finally the modification of land (urbanization). A chapter looks closely at one piece of lawfare in particular, the Morrill Act, revealing the history of the foundation of MIT at the expense of Indigenous Peoples and the role that universities play in the maintenance and strengthening of the systems of oppression in place. Answering calls for the decolonization of the profession, the thesis underscores that, because planning was born as a tool for colonization, the profession cannot be decolonized, and it demands Land Back as the only solution. The thesis presents information on two parcels belonging to the Confederated Tribes of Coos, Lower Umpqua, and Siuslaw Indians, located in the state of Oregon, that were seized and, through the Morrill Act, resold with the proceeds benefitting MIT, and it calls for the restitution of these parcels and giving Land Back.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Equity and Climate Resilience in Bogotá's Public Space Policy: A Critical Policy Review</title>
<link href="https://hdl.handle.net/1721.1/162143" rel="alternate"/>
<author>
<name>Duque Añez, Silvia</name>
</author>
<id>https://hdl.handle.net/1721.1/162143</id>
<updated>2025-07-30T03:08:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Equity and Climate Resilience in Bogotá's Public Space Policy: A Critical Policy Review
Duque Añez, Silvia
In Bogotá, where long-standing spatial and social inequalities intersect with growing climate risks, public space policy holds the potential to either reinforce exclusion or promote resilience and justice. Decisions about parks, plazas, and green corridors are not neutral; they reflect political priorities, embedded values, and power dynamics. This thesis asks: To what extent, and in what ways, does Bogotá’s public space policy framework incorporate criteria of equity and climate resilience? Through this question, the research examines how policies define and implement these concepts, what types of interventions they promote, and what limitations may emerge.&#13;
While prior research has emphasized the importance of inclusive and adaptive public spaces, there is limited analysis of how these principles are embedded in policy instruments in Latin American cities. Addressing this gap, this thesis develops an analytical framework informed by literature on urban environmental justice and climate adaptation. This framework serves as both an evaluative tool and a resource for policymakers seeking to move beyond vague commitments and toward actionable pathways for equity and climate resilience. &#13;
The framework is used to analyze two key policy instruments: the District Public Space Policy (Política Pública Distrital de Espacio Público 2019-2038) and the Master Plan (Plan de Ordenamiento Territorial: Bogotá Reverdece 2022-2035). The evaluation reveals that both instruments perform well, reflecting a genuine political effort to prioritize these issues. However, the findings also show that narrow or inconsistent interpretations of equity and climate resilience can lead to unintended consequences, and that significant implementation challenges remain. By grounding its analysis in a Global South context, this thesis contributes to international conversations on urban sustainability, offering both a critical lens and a practical tool. Ultimately, this research advocates for a shift in public space governance, one that treats equity and resilience not as aspirational ideals, but as measurable, structural commitments to a more just and climate-ready urban future.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interest Group Politics in U.S. "Social Housing" Experiments</title>
<link href="https://hdl.handle.net/1721.1/162142" rel="alternate"/>
<author>
<name>Davidson, Zak</name>
</author>
<id>https://hdl.handle.net/1721.1/162142</id>
<updated>2025-07-30T03:08:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Interest Group Politics in U.S. "Social Housing" Experiments
Davidson, Zak
The rising cost of housing has renewed interest in public sector-led models of mixed-income housing production. Advocates, local governments, and state lawmakers are exploring strategies to involve the public sector more directly in the residential development process by capitalizing revolving loan funds, leveraging public land, and creating new public authorities. While a universal definition for “social housing” remains elusive, most policymakers and supporters agree that social housing is permanently affordable for economically and racially diverse households and includes elements of resident self-governance. This research analyzes how key interest groups—including affordable housing developers, tenant advocates, labor unions, market-rate developers, and pro-housing coalitions—shape and respond to emerging social housing initiatives. Drawing on interviews and case studies of Seattle, Montgomery County (MD), California, New York, Atlanta, and Chattanooga between 2019 and 2025, this thesis examines how political context, institutional constraints, and coalition dynamics influence how social housing proposals are framed, negotiated, and either supported or resisted by key stakeholders. Four key themes emerge from these case studies. First, existing affordable housing developers often interpret new mixed-income, permanently affordable proposals as competition, particularly amidst resource scarcity and institutional constraints. This constitutes a substantial roadblock for the social housing movement. Second, proponents’ theory of change, initiative branding, and their ability to participate in multi-issue bargaining notably impact how affordable housing interest groups respond. Third, private sector actors’ support appears dependent on the public sector’s willingness to partner and how proponents describe the problem they are solving. Fourth, while collaborations around social housing may trigger fault lines between YIMBYs and tenant justice groups regarding revenue neutrality and the value of new market-rate supply, social housing represents an opportunity for bridge-building and collaboration across the housing movements. As interest in these models grows, this research offers practical insights for advocates and policymakers seeking to design locally tailored, politically viable approaches to public-led, mixed-income housing production.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shifting Spaces: Housing and Urban Change in Kabul</title>
<link href="https://hdl.handle.net/1721.1/162141" rel="alternate"/>
<author>
<name>Ghanizada, Bibi Khadija</name>
</author>
<id>https://hdl.handle.net/1721.1/162141</id>
<updated>2025-07-30T03:07:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Shifting Spaces: Housing and Urban Change in Kabul
Ghanizada, Bibi Khadija
This thesis explores the evolution of Kabul’s housing landscape with a focus on the emergence of Shahraks (planned townships) after 2001. Drawing on historical research, four case studies (Aria City, Khwaja Rawash Township, Khushal Khan Mena Blocks, and Omid-e-Sabz Township), and interviews with residents and experts, it analyzes how Shahraks have reshaped urban development in a rapidly growing city. Inspired by Soviet-era Mikrorayons, Shahraks introduced formal infrastructure, legal recognition, modern amenities, and opportunities for new economic activity. They helped expand Kabul’s formal housing stock and created pockets of urban community identity. However, the research finds that Shahraks also deepen spatial and socioeconomic inequalities. Largely built through private investment and targeting wealthier residents and civil servants, they remain inaccessible to the majority of Kabul’s population. Many Shahraks were developed on contested or illegally grabbed land, raising concerns about tenure security and governance. Despite improved infrastructure compared to informal settlements, Shahraks often suffer from poor climate responsiveness, environmental degradation, limited green spaces, and energy-intensive designs. Their weak integration with Kabul’s broader urban fabric further exacerbates issues of spatial fragmentation. Looking ahead, the thesis argues that Kabul must learn from both the achievements and shortcomings of Shahraks as it plans future projects like Kabul New City. Their model is not inherently unsustainable or inaccessible, but without deliberate reforms, Kabul risks reproducing a cycle where contemporary urban development becomes synonymous with exclusion, fragmentation, and missed opportunity. Key recommendations include prioritizing affordable and expandable housing models, enforcing transparent land governance, promoting climate-adaptive design, strengthening connections between housing and employment centers, and carefully structuring public-private partnerships to align private investment with public goals. As Kabul embarks on projects like Kabul New City, it must learn from the partial successes and profound shortcomings of past developments. The challenge is not simply to build new cities, but to build a more inclusive, adaptable, and sustainable urban future for all Kabulis.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When Public Space Goes Digital: Rethinking Urban Planning with Insights from Letra Ese</title>
<link href="https://hdl.handle.net/1721.1/162137" rel="alternate"/>
<author>
<name>Chiappero, Sofia Belen</name>
</author>
<id>https://hdl.handle.net/1721.1/162137</id>
<updated>2025-07-30T03:07:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">When Public Space Goes Digital: Rethinking Urban Planning with Insights from Letra Ese
Chiappero, Sofia Belen
Digital public spaces have become vital for organizing, belonging, and community-building, particularly for marginalized groups such as the LGBTQ+ community, who are increasingly excluded from both physical and online public spaces. Yet, the design of these digital spaces is largely shaped by profit-driven interests rather than the needs of the communities that rely on them. This thesis addresses this gap by asking: What if we treated digital spaces with the same care and intention we demand from our physical public spaces?&#13;
&#13;
To explore this question, the thesis brings together frameworks from urban planning, LGBTQ+ advocacy, and digital design. It proposes a reframing of “urban planning” to include “digital urban planning,” grounded in principles of rights, care, safety, and collective memory. Through a feminist urbanist lens and systems thinking, the work challenges the separation between physical and digital cities.&#13;
&#13;
Methodologically, the project moves beyond traditional research approaches, incorporating Conversational Design and the Relational User Framework to co-create knowledge with activists. The resulting contributions include both a prototype and a roadmap for a digital public space that supports and amplifies LGBTQ+ advocacy, not as a technical fix, but as a speculative and participatory framework for reimagining digital public infrastructure.&#13;
&#13;
This research is grounded in a case study of Letra Ese, an activist-led LGBTQ+ organization in Mexico. The case illustrates how such groups navigate systemic neglect while leveraging technology to document violence and sustain community. Ultimately, the thesis offers a starting point for rethinking the design of digital public spaces and argues for the inclusion of digital environments within the domain of urban planning, recognizing that for many, especially marginalized communities, much of life is already lived online.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Flooding as Remembering: A Trickster’s Guide to Fugitive Ecology, Revolutionary Recall, and Speculative Worldbuilding Beyond the Plantationocene</title>
<link href="https://hdl.handle.net/1721.1/162136" rel="alternate"/>
<author>
<name>Delaney, Simone Hope</name>
</author>
<id>https://hdl.handle.net/1721.1/162136</id>
<updated>2025-07-30T03:07:04Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Flooding as Remembering: A Trickster’s Guide to Fugitive Ecology, Revolutionary Recall, and Speculative Worldbuilding Beyond the Plantationocene
Delaney, Simone Hope
Since the early days of conquest, Black, Indigenous, and Afro-Indigenous peoples of the Lower Mississippi River Delta have survived recurrent processes of settler colonial un-worlding by re-worlding sovereign lifeways rooted in reciprocal relationships to other colonized peoples and the environment. Un-worlding occurred to Black and Indigenous peoples through dispossession of land, capture into enslavement, and genocide. This process was intertwined with the un-worlding of the landscape’s agency, which was captured and enclosed into property by arresting waterways’ movements through constrictive engineering using coercive labor. In the Bas de Fleuve swamps (today known as the Louisiana Central Wetlands), self-emancipated fugitives who had escaped enslavement formed autonomous inner worlds in the unenclosed territories between the Mississippi River and Lake Borgne. Known as Maroons, they were led by Juan San Malò and forged interdependent networks that extended to Indigenous settlements, enslaved Africans on plantations, and free Blacks in New Orleans. By living outside eurosettler logics of property and re-establishing reciprocity with the more-than-human web of life, they demonstrated that the liberation of captive people is bound to the liberation of captive landscapes. Their re-worlding was also reminiscent of the pan-African trickster figure: anarchistic heroes who overturn the dominant oppressive world order for more liberatory realities. Today, the destruction of wetlands across Southeast Louisiana means that descendants are facing an un-worlding of the sovereign livelihoods their ancestors re-established generations before. This is due to anthropogenically induced land loss, flooding, storm surge, and saltwater intrusion influenced by extractivist industries. Through revolutionary recall, reclaiming the logics of re-worlding established by Juan San Malò’s band of Maroons offers pathways to resist the intensifying threats of climate change that represent afterlives of slavery. Common Ground Relief is one collective that has drawn from Maroon legacies to lead bottom-up disaster response, mutual aid initiatives, and citizen-led wetland restoration. Drawing from creative land reclamation projects led by Utē Petit, Monique Verdin, the Nanih Bvlbancha Builders, and the Descendants Project, a constellation of small, site-specific projects is also presented to demonstrate how revolutionary recall can become a form of speculation for broader land-based liberation in the Lower Mississippi Delta.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Mass Timber Adoption in Greater Boston, Massachusetts: A Practical Study for Local Real Estate Developers</title>
<link href="https://hdl.handle.net/1721.1/162135" rel="alternate"/>
<author>
<name>Cerny, Faith W.</name>
</author>
<id>https://hdl.handle.net/1721.1/162135</id>
<updated>2025-07-30T03:07:35Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Accelerating Mass Timber Adoption in Greater Boston, Massachusetts: A Practical Study for Local Real Estate Developers
Cerny, Faith W.
Today’s real estate development strategy must incorporate decarbonization to mitigate the built environment’s detrimental impact on climate change. Beyond required climate action, developments are increasingly seen as responsible for improving occupant health and wellbeing. Furthermore, industry stakeholders are tasked with efficiently delivering sustainable, high quality, and affordable housing in dense, urban areas to meet a growing demand. As the stakes intensify and demands of real estate development increase, projects face multiple barriers to implementation. This thesis explores mass timber construction as a viable solution to modern development challenges. While research content derives from multiple geographies within North America, a particular focus on the relevance and utility for Greater Boston, MA, USA is maintained. The thesis comprises five chapters. Following an introduction, the second chapter provides an overview of mass timber as an evolving building technology with an emphasis on how and why it is gaining momentum as a viable and preferred alternative to traditional building materials. The section conversely discusses commonly cited drawbacks delaying industry acceptance. The third chapter explores mass timber adoption at multiple scales, including studies of innovative projects that achieved development objectives despite challenges. Guided by insights from interviews, this chapter discusses stakeholders’ current understanding of the material and motivations for its use, perceived feasibility constraints, and anticipated opportunities for its incorporation and proliferation, with a focus on Greater Boston. The fourth chapter considers methods to accelerate the rate of mass timber adoption, including facilitation of local development strategy. The section builds on research and interview findings to establish key considerations when evaluating a mass timber project and to propose an analytical framework for real estate developers to holistically assess the value of incorporating the material in their projects. The concluding chapter speculates on the local arc of adoption and the subsequent impacts of widespread mass timber project implementation for the city and region.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Micromobility in New York City: An Examination of Vehicle Type Use and User Behavior in Protected Bicycle Facilities</title>
<link href="https://hdl.handle.net/1721.1/162134" rel="alternate"/>
<author>
<name>Boeri, Jake</name>
</author>
<id>https://hdl.handle.net/1721.1/162134</id>
<updated>2025-07-30T03:06:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Understanding Micromobility in New York City: An Examination of Vehicle Type Use and User Behavior in Protected Bicycle Facilities
Boeri, Jake
A shift towards the use of micromobility vehicles (MMVs), specifically motorized two-wheeled vehicles in urban mobility networks, has gained significant attention over the past decade. Many have commented on a perceived increase in MMV use in New York City (NYC) in particular, a trend that appears to have accelerated in the wake of the COVID-19 pandemic and in response to the expansion of high-quality bicycle facilities across the city. However, the extent to which different types of MMVs are used and related rider behavior is poorly understood, forcing policymakers, planners, elected officials, and community members to develop policies and infrastructure with inadequate information. Through direct observation of 9,629 vehicles across five locations, this thesis provides a degree of ground truth and an initial understanding of the prevalence of different MMV types used in protected bicycle facilities in NYC and related user behavior, including commercial application of these vehicles, helmet use, and passenger presence. The findings of this study point to a surprisingly high use rate of motorized MMVs in protected bicycle facilities in NYC, with motorized vehicles comprising nearly three-quarters (73.96%) of all vehicles observed. E-bikes were the largest class of vehicles observed (63.85%), followed by conventional, non-motorized bicycles (25.76%), e-scooters (6.69%), and mopeds (1.96%). Commercial-use vehicles made up nearly one-quarter (23.20%) of observations. A very small proportion of observations were cargo vehicles (2.89%), indicating their limited use for both personal and commercial purposes. Users were significantly more likely to wear a helmet when using a non-motorized vehicle than a motorized one, with helmet use varying substantially across vehicle classes. Modal split of MMV types, commercial use, and cargo vehicle use varied by both location and time of day, pointing to uneven distribution across the mobility network. There were substantial differences between the manual count from this study and automated bicycle counts generated by the New York City Department of Transportation over the same period, indicating a systemic undercounting of MMV use by the automated count system. In response to these findings, a series of recommendations are provided for how NYC and other cities with both developed and developing MMV networks can promote and guide safe, equitable, and sustainable mode shift as micromobility use expands. These proposals include policy and spatial planning improvements that should be part of a response to widespread MMV adoption, and the ongoing transformation of how protected bicycle facilities are used.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Housing in European Metropolises: supply dynamics and planning frameworks in large Urban Areas of the EU</title>
<link href="https://hdl.handle.net/1721.1/162133" rel="alternate"/>
<author>
<name>Berra Sandin, Mikel</name>
</author>
<id>https://hdl.handle.net/1721.1/162133</id>
<updated>2025-07-30T03:07:33Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Housing in European Metropolises: supply dynamics and planning frameworks in large Urban Areas of the EU
Berra Sandin, Mikel
Europe’s housing affordability crisis presents significant territorial challenges, particularly as housing demand increasingly spills over from inner cities to surrounding municipalities at the metropolitan scale. This study addresses key policy questions regarding the coordination of housing supply and planning instruments in large urban areas of the European Union. &#13;
Focusing on 23 large Functional Urban Areas (FUAs), the research follows a three-part approach: a quantitative analysis of municipal-level housing production and demographic growth between 2011 and 2021 based on Census data; an analysis of the effects of housing supply on housing prices; and an AI-powered quantitative examination of urban plans at municipal, metropolitan, and regional scales to observe whether they establish housing supply goals. This methodology generates evidence on the spatial dynamics of housing development by creating an EU-wide database at municipal granularity, while providing a novel focus and analytical approach to institutional urban plans as drivers of housing supply.&#13;
Findings reveal mixed alignment between housing supply and demographic growth, with Southern and coastal urban areas falling short on housing supply. In most cases, there is a pronounced metropolitan effect, where peripheral municipalities experience larger housing and population growth. Analysis of the plans shows that more frequent planning relates to larger housing provision. In addition, the research highlights that housing goals are usually determined in local plans, revealing a mismatch between planning efforts and housing dynamics, which tend to be metropolitan or regional. The research thus deepens the understanding of European housing provision and the planning of urban territories, highlighting the need for stronger housing policy mechanisms at the metropolitan level.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Miles Matter: Demographics, Distance, and Decision-Making</title>
<link href="https://hdl.handle.net/1721.1/162132" rel="alternate"/>
<author>
<name>El-Sisi, Kareem H.</name>
</author>
<id>https://hdl.handle.net/1721.1/162132</id>
<updated>2025-07-30T03:07:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Miles Matter: Demographics, Distance, and Decision-Making
El-Sisi, Kareem H.
In this thesis, I investigate which variables have the strongest influence on an individual's travel mode choice depending on the purpose and level of urgency (leisure, essential, emergency) of the trip. I analyze how spatiotemporal costs, conditioned by demographic segmentation, shape mode choice, using data on population mobility patterns in auto-centric Los Angeles and multimodal New York City. Through a synergistic three-pronged methodology consisting of spatial (time and distance analysis complemented by a spatial interaction model), statistical (multinomial logistic regression model), and machine learning-based (graph neural networks and extreme gradient boosting) analysis, I explore the multifaceted nature of decision-making processes in different urban environments. The hidden patterns revealed by artificial intelligence show that distance is the key determinant of mode choice, with its influence varying with the urban form of the city and its adaptation to multimodality.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Producing a Black Oeuvre: Narratives of Black Grassroots Cultural Organizing in Boston</title>
<link href="https://hdl.handle.net/1721.1/162131" rel="alternate"/>
<author>
<name>Hunsen, Alula</name>
</author>
<id>https://hdl.handle.net/1721.1/162131</id>
<updated>2025-07-30T03:07:24Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Producing a Black Oeuvre: Narratives of Black Grassroots Cultural Organizing in Boston
Hunsen, Alula
Amidst a bevy of nonprofits and governmental actors that support and facilitate cultural and aesthetic production in the City of Boston, a vanguard of Black artists and cultural organizers are developing structures and organizations to help local members of Boston’s Black communities steer their own cultural production. This thesis develops an understanding of actions being taken by these organizers and organizations through interviews, and builds a set of participatory action research frameworks by partnering with these organizations (specifically Thrill, Black Cotton Club, and 5Thou) to conduct further research into how Black Bostonians can continue to self-determine in the realms of arts and culture. Drawing from a lineage most directly traceable to the Black Arts Movement of the late 1960s, and to hip-hop cultural production in ensuing decades, these organizers are furthering Black-led, community-controlled arts, and fostering community-building. Borrowing theorist Henri Lefebvre’s conception and declaration of a right to creative expression and participation, characterized as oeuvre and as a critical aspect of a “right to the city,” I hypothesized that these actions toward cultural self-determination could be seen as the establishment of a Black oeuvre. This assertion was expanded upon by research partners, to include a broader array of strategies and conceptual frameworks for producing Black place, community, and culture in Boston.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Envisioning Regional Futures in Southeast Los Angeles: Understanding Barriers to Implementing Transit-Oriented Communities along the Forthcoming Southeast Gateway Line</title>
<link href="https://hdl.handle.net/1721.1/162130" rel="alternate"/>
<author>
<name>Martinez, Alejandra A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162130</id>
<updated>2025-07-30T03:07:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Envisioning Regional Futures in Southeast Los Angeles: Understanding Barriers to Implementing Transit-Oriented Communities along the Forthcoming Southeast Gateway Line
Martinez, Alejandra A.
The first 14.5-mile phase of the Southeast Gateway Line (SEGL), a planned light rail project through Southeast Los Angeles and the Gateway Cities region, is expected to be completed by 2035. The rail line aims to improve transit access while being complemented by a regional planning framework and station area planning that seeks to promote transit-oriented communities around station areas and drive equitable community development along the corridor. However, it remains uncertain whether the frameworks and governing bodies responsible for implementing the rail project, including the Los Angeles County Metropolitan Transportation Authority (LA Metro), the Gateway Cities Council of Governments (GCCOG), and cities along the corridor, will effectively align the transit investment with these land use and development goals.&#13;
&#13;
Given these uncertainties, this thesis focuses on the Southeast Los Angeles (SELA) subregion, where a history of structural challenges underscores both the urgency and the complexity of realizing visions for transit-oriented communities tied to the forthcoming rail investment. Drawing on semi-structured interviews with LA Metro and GCCOG staff, along with officials and staff from cities hosting future stations, this research explores the emerging political, economic, and structural barriers to implementing transit-oriented land use around two future SEGL stations: Florence/Salt Lake Station in Huntington Park and Firestone Station in South Gate, both of which have multi-jurisdictional spheres of influence. This thesis also proposes a collaborative framework that encourages SELA stakeholders to engage in incremental, low-stakes planning and establish accountability mechanisms before the rail arrives, laying the foundation for sustained stewardship over the vision of transit-oriented communities and broader equitable community development goals throughout the rail's lifespan.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Path Forward: Gentrification Management Strategies in Rural Trail-Based Outdoor Recreation Economies</title>
<link href="https://hdl.handle.net/1721.1/162127" rel="alternate"/>
<author>
<name>Smith, Mistaya</name>
</author>
<id>https://hdl.handle.net/1721.1/162127</id>
<updated>2025-07-30T03:07:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">The Path Forward: Gentrification Management Strategies in Rural Trail-Based Outdoor Recreation Economies
Smith, Mistaya
Rural communities in the United States face economic challenges due to a combination of factors including the decline of the extractive sector, the departure of manufacturing, the agglomeration of farmland, and the regionalization of key public services. To some policymakers, this economic decline, in combination with the nation’s rural-urban political stratification, serves as reason to further abandon rurality and promote migration to urban areas. These policies overlook the interdependence between rural and urban ecosystems and ignore rural America’s unique assets. In capitalizing on rurality’s existing natural beauty and land access, the trail-based outdoor recreation economy functions as a form of asset-based economic development in rural communities. In connecting recreators to the land, serving as the setting of social connection, and creating place-based connections across time, trails further benefit rural communities through the construction of place attachment. Investment in trails as a form of economic development, however, commodifies nature so as to attract external interest in rural places. Externally-driven population increases and wealth influxes in rural communities can cause physical gentrification in the form of rising property values and resident displacement. This gentrification process also contains a cultural component as the commodification of nature and the demographic shift in rural places erodes place attachment between longtime residents and the land through the displacement of local place-based knowledge, changes in traditional land access, and disruption to recreational use patterns. Research suggests that those with deeper place attachments exhibit greater civic engagement, a deeper sense of community and belonging, and more care for their community and environment. Therefore, cultural gentrification can also lead to a decline in community care and a risk to rural vitality. This thesis examines five rural Northeastern towns with trail-based outdoor recreation economies to discern how each community approaches the risks of physical and cultural gentrification.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wildfire Risk Management for Informal Settlements in Chile</title>
<link href="https://hdl.handle.net/1721.1/162126" rel="alternate"/>
<author>
<name>Sakai, Yuri</name>
</author>
<id>https://hdl.handle.net/1721.1/162126</id>
<updated>2025-07-30T03:07:29Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Wildfire Risk Management for Informal Settlements in Chile
Sakai, Yuri
This thesis explores the critical intersection of wildfire risk and informal settlement development in Chile, focusing on the municipality of Viña del Mar. This city experienced the deadliest wildfires in the nation’s history in 2024 and holds the nation’s highest concentration of informal settlements. Despite this double vulnerability, the city has inadequately integrated wildfire resilience into its disaster risk management (DRM) framework, creating an urgent need for policy reform.&#13;
&#13;
Through combined statistical and geospatial analyses, the author documents informal settlements’ expansion trajectories, especially between 2011 and 2024, and systematically assesses their wildfire exposure. Utilizing unregularized community datasets, wildfire risk classifications, and municipal planning documents, the analyses revealed that the growth of informal settlements outpaces regularization interventions. They also showed that all of the informal communities in the city, including their wildland-urban interface zones, face significant fire risk.&#13;
&#13;
These findings further led the research to evaluate current Chilean wildfire governance under Law 21.364 (enacted 2021), which provides for comprehensive DRM across national, regional, and municipal administrative levels. Additionally, the study examines the disaster response mechanisms for the 2024 Chile Wildfires. These policy- and evidence-based analyses identify persistently reactive approaches to disasters even four years after the policy transition and reveal a systematic marginalization of informal settlements.&#13;
&#13;
Based on these findings, the research culminates in phase-specific, actionable policy recommendations addressing the compound vulnerabilities of informal communities through: 1) enhanced shelter capacity estimation methodologies; 2) formalized private sector involvement; 3) integrated tsunami-wildfire warning systems; 4) periodic intergovernmental learning opportunities; and 5) technical support in reconstruction. Given the 2024 tragedy and Chile’s transition toward comprehensive DRM, these interventions are particularly crucial to accelerating and consolidating that transition.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Still Working: Re-examining America’s Urban Working Waterfronts</title>
<link href="https://hdl.handle.net/1721.1/162124" rel="alternate"/>
<author>
<name>Zhang, Mabelle</name>
</author>
<id>https://hdl.handle.net/1721.1/162124</id>
<updated>2025-07-30T03:07:18Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Still Working: Re-examining America’s Urban Working Waterfronts
Zhang, Mabelle
While American urban waterfronts once served as critical sites of production, they are now disappearing, reflecting larger de-industrialization trends. This thesis argues for a critical re-examination of the continued and evolving role that waterfronts play as sites of work. It expands the definition of urban working waterfronts to include sites of industry, production, and economic activity, thereby aligning with these sites’ historic and ongoing uses. &#13;
&#13;
This thesis examines four working waterfronts in the Northeastern United States, a region with over 400 years of urban development driven by and around its waterfronts. It does so through four case studies: Central Waterfront in Portland, ME; Waterfront District in New Bedford, MA; Waterfront at Port Morris, NY; and Waterfront at Sunset Park, NY. &#13;
&#13;
Through analyzing these cases, this thesis proposes a typology of working waterfronts: the Traditional Working Waterfront, the Industrial Working Waterfront, and the Hybrid Working Waterfront, based on key differences in uses, forms, and governance. &#13;
&#13;
This thesis argues that the central issue is not merely protecting working waterfronts, but understanding how they are adapting to new realities. State and community-driven protections through zoning help protect existing working waterfronts; however, these sites are not stagnant relics of historic working waterfronts. Rather, they are ever-evolving in response to new economic realities, incorporating new industries, technologies, and public access into their sites.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pedestrian Accessibility and Individual’s Subjective Happiness</title>
<link href="https://hdl.handle.net/1721.1/162123" rel="alternate"/>
<author>
<name>Shikida, Aika</name>
</author>
<id>https://hdl.handle.net/1721.1/162123</id>
<updated>2025-08-25T18:54:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Pedestrian Accessibility and Individual’s Subjective Happiness
Shikida, Aika
Cities in many countries are taking steps to use happiness as a formal policy measure of well-being, in addition to more commonly used economic indicators such as Gross Domestic Product. Economists and public policy and public health scholars have researched the factors that are associated with happiness, linking higher self-reported happiness outcomes with financial status, gender, social interactions, personal health, and sense of security. However, the link between happiness and the built environment around one’s home or workplace has been understudied and remains poorly understood. While location quality — particularly pedestrian accessibility to commercial, recreational, institutional, educational, and transportation facilities — is known to affect home location values, how the same set of location attributes that affect housing prices may have a relationship with happiness remains unclear. In theory, more convenient home locations offer individuals the capacity for independent living (e.g., walking access to destinations), social interactions (e.g., chance encounters with community members), and a sense of belonging (e.g., through self-sufficient neighborhood amenities) — qualities that should also contribute to happiness. This thesis reports on an exploratory analysis of location quality and self-reported happiness in the United States and Japan. Using a customized pedestrian accessibility metric, this thesis examines how access to daily destinations is related to individuals’ subjective happiness, controlling for socio-demographic variables. In the U.S. data, we found that people living in areas with higher pedestrian accessibility to destinations were not necessarily more likely to report being happier, on average. In fact, there was a small tendency for individuals in these areas to report slightly lower happiness levels, on average, after accounting for other influences such as age, income, and marital status. Note that the relationship between pedestrian accessibility and happiness may be more complex than expected and may involve other factors (e.g., presence or absence of greenery). We conducted an additional analysis by dividing the Census tracts into two groups based on population density. In areas with lower population density, the relationship between pedestrian accessibility and happiness remained negative and statistically significant and showed the same strength as the overall analysis. For Nagasaki, Japan, there was not a statistically significant relationship between happiness and pedestrian accessibility, but this might be due to a problem in the street network data, so further investigation is required. In addition, a qualitative analysis of Nagasaki reveals that residents report that problems with the walking environment (e.g., narrow sidewalks, slopes and stairs, darkness at night, road surface differences, distance to facilities) influence their travel behavior and happiness. Nevertheless, although the results of this thesis have limitations, as described above, promoting pedestrian accessibility should remain an important consideration for policy makers when setting public policy goals, since pedestrian accessibility could, for instance, lead to improved physical and mental health, as well as other benefits. For both the U.S. and Japan, future work is necessary to understand the complex experiences of individuals that include spatial, psychological, and environmental factors related to the built walking environment.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Listeria monocytogenes crosses host cell barriers</title>
<link href="https://hdl.handle.net/1721.1/162120" rel="alternate"/>
<author>
<name>Hanna, Ruth</name>
</author>
<id>https://hdl.handle.net/1721.1/162120</id>
<updated>2025-07-30T03:07:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">How Listeria monocytogenes crosses host cell barriers
Hanna, Ruth
Listeria monocytogenes is a bacterial pathogen that causes listeriosis, a severe food-borne illness that can lead to serious complications and mortality in immunocompromised or pregnant people. Listeria is able to cross several host barriers to cause disease, including the intestinal barrier, the blood-brain barrier, and the placental barrier, a process mediated by a diverse range of bacterial factors. In this review, I outline the key host barriers encountered by Listeria during host infection and the mechanisms by which Listeria crosses each barrier.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oakland's Preservation Park: Planning for the Future</title>
<link href="https://hdl.handle.net/1721.1/162117" rel="alternate"/>
<author>
<name>Kaufman, Samantha</name>
</author>
<id>https://hdl.handle.net/1721.1/162117</id>
<updated>2025-07-30T03:07:20Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Oakland's Preservation Park: Planning for the Future
Kaufman, Samantha
Preservation Park in Oakland is an anomaly. It is neither a green park nor strictly an office park: 16 historic homes, carefully renovated and maintained, are arranged around an internal way and anchored by a central fountain in the Victorian style. Seeds for this park were initially planted by the city's Landmark Preservation Advisory Board in 1976, and after fits and starts, it opened in 1991. As Interstate Highway 980 was built, the park was created as a way to save a few of the most beautiful homes threatened by the Oakland Redevelopment Authority's urban renewal clearance and highway construction. Interstates 580, 880, and 980 were lashed across Oakland to bring suburban commuters over the bridge to San Francisco, cutting up a city of neighborhoods and destroying thousands of homes and small businesses. Oakland envisioned this acre and a half as a permanent site for community-based organizations and non-profits to revitalize the edge of downtown and West Oakland. &#13;
 &#13;
Since 1991, the office space has been rented to dozens of non-profits and has hosted hundreds of weddings, conferences, and other public and private events. In 2004, the community development corporation East Bay Asian Local Development Corporation (EBALDC) purchased the park from the city and continued to manage the property as a successful office park and event space. The COVID-19 pandemic irrevocably changed how many people work, and for the first time, Preservation Park vacancies increased, with occupancy remaining substantially below 100%, presenting a challenge to EBALDC and its portfolio. This thesis seeks to provide the client with a framework to assess possible redevelopment and reprogramming schemes that is sensitive to the community goals of EBALDC and the requirement for the property to sustain itself. By considering financial feasibility and partnerships, a multi-phase roadmap with a 20-year time horizon is presented for EBALDC to consider. This also provides a potential framework for more non-profit firms to pursue commercial real estate management and redevelopment as a strategy for community wealth-building and neighborhood stability.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Co-Governing Care in Astoria, Queens: The Role and Responsibility of the City in Supporting Community-Led Solidarity Networks</title>
<link href="https://hdl.handle.net/1721.1/162116" rel="alternate"/>
<author>
<name>Kleinbock, Yvette</name>
</author>
<id>https://hdl.handle.net/1721.1/162116</id>
<updated>2025-07-30T03:07:06Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Co-Governing Care in Astoria, Queens: The Role and Responsibility of the City in Supporting Community-Led Solidarity Networks
Kleinbock, Yvette
In the spring of 2020, as COVID-19 spread across New York City and the United States, an inadequate government response and an overburdened social safety net left millions facing unemployment, eviction, and food insecurity with limited institutional support. Yet alongside these systemic failures, mass acts of solidarity emerged, as unprecedented numbers of people mobilized mutual aid efforts to help their neighbors survive. While many mutual aid groups have since disbanded or experienced burnout, others have sustained the work, helping to establish alternative infrastructures of collective care. Taking Astoria, Queens as a case, this thesis examines the political lessons that have emerged in the aftermath of the COVID-19 pandemic, focusing on what it takes to sustain community-led solidarity networks and considering the City’s role and responsibility in supporting urban infrastructures of care more broadly. To conceptualize this relationship between local community efforts and the City, I further consider the possibilities of co-governance as a framework for community care. This research utilizes a community-centered, relational, qualitative approach that draws on oral history and ethnographic traditions, including thematic analysis of key informant interviews, document review, and participant observation. Tracing the trajectory of mutual aid and other community-led efforts in Astoria and exploring the possibilities and challenges of collaborative governance, this research imagines how planning, policy, and governance strategies in New York City can deepen collective capacity, foster resilience, and advance more just and caring urban futures.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing a Digital Common Application for Affordable Housing in Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/162115" rel="alternate"/>
<author>
<name>Moss, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/162115</id>
<updated>2025-07-30T03:06:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Implementing a Digital Common Application for Affordable Housing in Massachusetts
Moss, Emily
The need for affordable housing in Massachusetts is immense, with fragmented housing application processes further compounding barriers for low-income residents to access stable housing. To address these challenges, the Massachusetts Executive Office of Housing and Livable Communities (EOHLC) initiated the development of a digital common application (Common App) in 2024 to streamline tenant application and selection processes for privately owned publicly subsidized housing opportunities throughout the state. This client-based thesis offers an implementation roadmap for EOHLC to successfully operationalize the Common App within the agency.&#13;
&#13;
The roadmap is structured around three topics as requested by EOHLC: (1) organizational design considerations as the Common App scales, including internal staffing models, external vendor relationship management, and budget planning; (2) long-term technical integration opportunities, including identifying relevant data systems likely to interact with the Common App and potential areas for alignment; and (3) compliance mechanisms to ensure housing providers’ participation in the Common App, including a review of Massachusetts fair housing regulations as one possible strategy to require or incentivize providers to use the platform.&#13;
&#13;
Each topic draws from a review of state policies as well as academic literature in organization studies, information systems, and public administration; stakeholder interviews; and case study research on digital affordable housing search and application platforms in Massachusetts, Detroit, San Francisco, and the Bay Area—culminating in a series of recommendations for EOHLC to effectively administer the Common App over the long term.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ensuring Equitable Tenant Outcomes: Case Studies of Building Decarbonization Initiatives in Greater Boston, Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/162114" rel="alternate"/>
<author>
<name>Wong, Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/162114</id>
<updated>2025-07-30T03:07:11Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ensuring Equitable Tenant Outcomes: Case Studies of Building Decarbonization Initiatives in Greater Boston, Massachusetts
Wong, Nicole
U.S. cities are ramping up building decarbonization initiatives to reduce greenhouse gas emissions from buildings. However, these programs and policies generate complex challenges at the intersection of housing, climate, and environmental justice, especially for cities that face barriers to adopting strong renter protections. This thesis offers two case studies of tenant-related equity concerns that emerged during the implementation of building decarbonization initiatives in greater Boston, Massachusetts: Boston’s building performance standard, the Building Emissions Reduction and Disclosure Ordinance (BERDO), and Everett’s energy efficiency incentive program, Electrify Everett. This thesis also identifies strategies that residents, community organizations, and city officials highlight as important to advance building decarbonization without generating unintended consequences for tenants. &#13;
Key equity concerns include the potential impacts of building decarbonization on rental affordability, displacement, and energy burden, whereas strategies include broad tenant protections such as rent control, renter protections attached to building decarbonization subsidies, and robust enforcement mechanisms. This research illuminates the need to build power to win essential tenant protections, focus decarbonization on housing with existing affordability protections, and advance alternative, decommodified forms of housing.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neutronic Performance and Thermal Hydraulic Analysis of the MIT Reactor Fission Converter Experimental Facility Using High-Density U-10Mo Low-Enriched Uranium Fuel Elements</title>
<link href="https://hdl.handle.net/1721.1/162113" rel="alternate"/>
<author>
<name>Sears, Caroline Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/162113</id>
<updated>2025-07-30T03:07:00Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Neutronic Performance and Thermal Hydraulic Analysis of the MIT Reactor Fission Converter Experimental Facility Using High-Density U-10Mo Low-Enriched Uranium Fuel Elements
Sears, Caroline Julia
The MITR fission converter (FC) is a core-driven subcritical assembly at the MIT Nuclear Reactor Laboratory, located on the MIT campus in Cambridge, MA. The assembly is made of eleven partially-depleted MITR-II fuel elements in a separate cooling tank attached to the side of the core-tank graphite reflector. The FC serves to boost the thermal flux from the core and send a hardened neutron spectrum to an irradiation target, providing a fission energy flux spectrum without the need to put a sample inside the core tank. It was previously used for boron-neutron capture therapy clinical trials before its decommissioning in the 2010s. Recently, it has been modified from a medical beamline to a general-use engineering and materials testing facility. The new FC-based experimental facility has roughly one cubic meter of empty space downstream intended to contain large experiments, called the m³. This work is a safety and performance study aimed at quantifying the impact of modifying the facility’s geometry as part of the FC’s recommissioning, as well as the impact of changing its fuel from HEU to LEU fuel as part of the MITR LEU conversion project. Neutronics and thermal hydraulics analyses of the renovated facility have been performed using the codes MCNP5 and STAT7, respectively. This analysis quantified the FC’s k_eff, power distribution, multi-group neutron flux, and conditions which cause onset of nucleate boiling (ONB). It was determined that the FC assembly will remain subcritical (k_eff &lt; 0.9) and low power (≤200 kW) under a wide range of performance conditions, including with both types of fuel and a variety of materials on the target-side of the FC tank. The HEU-fueled FC is expected to require no changes to the limiting safety system settings (LSSS) outlined in the original technical specifications document. The LEU fuel is expected to increase the FC performance, but as a tradeoff, will require minor changes to the LSSS setpoints to maintain margin to ONB under the most limiting thermal-hydraulic conditions. Additionally, this study evaluates the feasibility of using the FC for in-assembly fuel experiments, particularly as a pathway for testing the new LEU fuel elements at low power. The analysis indicated that this proposed FC configuration with one LEU and ten HEU elements is feasible and maintains wide safety margins.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Supervisory Control System as a Transition Technology Towards Autonomous Reactor Plant Operations</title>
<link href="https://hdl.handle.net/1721.1/162112" rel="alternate"/>
<author>
<name>Fortier, Lauren G.</name>
</author>
<id>https://hdl.handle.net/1721.1/162112</id>
<updated>2025-07-30T03:06:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Development of a Supervisory Control System as a Transition Technology Towards Autonomous Reactor Plant Operations
Fortier, Lauren G.
The economic viability of small and microreactors depends on reducing energy generation costs. The implementation of autonomous reactor control systems provides an avenue for reducing operations and maintenance expenses. Advanced reactor designs with enhanced passive safety features, reduced source terms, and digital instrumentation and control systems directly support autonomous controllers. In these plants, where the need for human operators is already reduced, the introduction of supervisory control systems (SCS) for dynamic operations further lessens operator dependence while building trust in these systems, laying a solid foundation for the transition to fully autonomous reactor control.&#13;
&#13;
Finite state automata (FSA) provide a framework for engineering fully verifiable and validatable supervisory controllers, and thereby facilitate the transition to autonomous nuclear power plant operations. FSA serve as a foundational mathematical tool for modeling discrete event systems (DES). Properties such as nonblocking and controllability can be formally demonstrated and verified by leveraging the extensive set of mathematical proofs within the scope of regular languages. Furthermore, a DES can be directly linked to reactor plant systems and operational procedures within a hierarchical architecture by using a graded functionalization approach analogous to that of complex dynamic systems, such as self-driving vehicles. In this scheme, feedback controllers can regulate low-level actuation functions while a supervisory controller can govern high-level plant state transitions.&#13;
&#13;
A generic supervisory controller was developed as a transition technology toward autonomous reactor operations. This controller was first tailored for application on a limited-feedback model for initial proof-of-concept testing, and then scaled for use on light water reactor (LWR) simulators. In the absence of advanced reactor simulators for operational testing, LWR simulators were used because they provide realistic feedback and controls within a more conservative operating margin than advanced reactors. These supervisory controllers successfully executed operational procedures within a fully verifiable framework, establishing the foundation of this modeling approach and laying the groundwork for its implementation in advanced reactor designs. This scalable model thus facilitates a smooth transition from functioning as an operator aid to fully autonomous operation as a comprehensive plant controller, increasing the economic viability of nuclear power.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Critical Water, Wastewater, and Thermal Infrastructure Development for a Resilient Neighborhood in War-Affected Ukraine</title>
<link href="https://hdl.handle.net/1721.1/162106" rel="alternate"/>
<author>
<name>Gendler, Isaac A.</name>
</author>
<id>https://hdl.handle.net/1721.1/162106</id>
<updated>2025-07-30T03:06:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Critical Water, Wastewater, and Thermal Infrastructure Development for a Resilient Neighborhood in War-Affected Ukraine
Gendler, Isaac A.
The Central Ukrainian municipality of Tetiiv is experiencing an influx of migrants due to its relatively safe position amid the Russian invasion. Tetiiv, in collaboration with the Ukrainian NGO Vid Sertsya Budova, is building a new neighborhood to accommodate internally displaced people, refugees, war veterans, and local residents. The neighborhood will require water, wastewater, and thermal infrastructure that satisfies European Union requirements given Ukraine’s ambition to join the economic bloc. This thesis performs a pre-feasibility study to help Tetiiv and Vid Sertsya Budova create an optimal configuration of water, wastewater, and thermal infrastructure for the new neighborhood. For water infrastructure, the report calculates water consumption using the BREEAM framework, quantifies storage requirements, analyzes water quality, estimates rainwater harvesting potential, and identifies optimal water source locations within 30 km using the DRASTIC methodology combined with geospatial analysis. For wastewater infrastructure, the study estimates wastewater generation, analyzes different wastewater treatment options, and uses a decision matrix to identify the optimal wastewater system for the site, a moving bed biofilm reactor system. The thermal infrastructure study develops a conceptual heating system for the new neighborhood, incorporating ground-source heat pumps in each row house and single-family home, vertical boreholes, a thermal energy network, and a wastewater heating system for the multifamily co-living units. This study offers a blueprint for Ukraine and other regions recovering from urbicidal conflict and disaster to rebuild in alignment with the new climate paradigm.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Affect in Resiliency Planning: A Conversation with Broad Channel</title>
<link href="https://hdl.handle.net/1721.1/162105" rel="alternate"/>
<author>
<name>Fiol, Olivia</name>
</author>
<id>https://hdl.handle.net/1721.1/162105</id>
<updated>2025-07-30T03:06:39Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Affect in Resiliency Planning: A Conversation with Broad Channel
Fiol, Olivia
Planning for climate change is more relevant than ever, as the earth continues to warm, sea levels rise, and no global policy or political will is in sight. In order to plan under hostile circumstances, it is of the utmost importance that planners turn our attention to the hyper-local scale, continuing momentum in our personal and professional relationships. In this thesis, I argue that centering affective experiences of place is essential in conversations about the future of places under climate change, especially in communities and neighborhoods resistant to the conversation about climate change’s impacts on their futures in the first place. This project focuses on Broad Channel, the only inhabited island community in New York City’s Jamaica Bay, which is on the front lines of sea level rise and tidal flooding in the city. I interviewed city leaders, community members, artists, planners, and activists to understand how we can move through and with affect when considering the future of a place. This can open up conversations about climate change that were previously inaccessible. These conversations also surfaced the need for planners to regroup and understand how their own affective positions impact difficult conversations about climate change. I offer these insights and recommendations for future resiliency planning work, reflecting both inward and outward.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Between Fields and Cities: The Politics of Land Use Changes in Punjab, India</title>
<link href="https://hdl.handle.net/1721.1/162104" rel="alternate"/>
<author>
<name>Kodzis, Trevor Quigley</name>
</author>
<id>https://hdl.handle.net/1721.1/162104</id>
<updated>2025-07-30T03:08:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Between Fields and Cities: The Politics of Land Use Changes in Punjab, India
Kodzis, Trevor Quigley
This thesis examines the urbanization of agricultural lands in the State of Punjab, looking for patterns that explain the type of development that is occurring while embedding these transformations in a larger political and economic context. The study focuses on both transportation infrastructure and the real estate developments surrounding it, as a way of situating Punjab within a larger discourse on infrastructure and urbanization in the Global South. Through case studies of three Punjabi cities, Mohali, Bathinda, and Ludhiana, this paper employs remote sensing to analyze recent transformations from agricultural to developed land across different land use zones, revealing two primary patterns. First, highway infrastructure projects have been delayed because of land acquisition problems and a contentious political environment. Second, with the exception of Ludhiana, most of the real estate in Punjab is concentrated in the residential sector. This apparent stagnation of manufacturing growth in Punjab results from a wide range of political and economic factors including high land prices, protest movements, emigration, fiscal policies, geography, and competition with other states. In contrast to the rest of the state, Ludhiana has successfully attracted industrial growth, illustrating how cities that urbanized earlier follow a different path of economic development.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ozarkitecture: Shaping the Sense of a Region</title>
<link href="https://hdl.handle.net/1721.1/162103" rel="alternate"/>
<author>
<name>Jones, Rubin</name>
</author>
<id>https://hdl.handle.net/1721.1/162103</id>
<updated>2025-07-30T03:08:15Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Ozarkitecture: Shaping the Sense of a Region
Jones, Rubin
Contemporary planning often invokes a “sense of place,” yet the deeper work of placemaking remains largely unfulfilled. In its absence, cities and regions fracture into landscapes that appear whole but feel hollow. These are spaces stripped of the sensory depth and symbolic meaning that make dwelling possible. This thesis thus returns to the concept of the genius loci—the spirit of place—not as a nostalgic embellishment, but as an ethical and practical imperative. It traces the philosophical and historical foundations of place, examines how contemporary practice has diluted its meaning, and explains why a new approach is necessary. From this foundation, the project engages Kevin Lynch’s operational models and develops a reframed approach—shifting from a visual image to an embodied experience—to ground planning practice in the textures of memory, movement, and belonging. Five new concepts—anchor, patch, joint, seam, and trail—offer a vocabulary for cultivating places that hold meaning across time and transformation. This framework is applied in Northwest Arkansas, a region where rapid growth threatens to outpace the character of its communities. By strengthening sensory experience, rooted memory, and collective authorship, this project aims to offer a different way forward through regional transit—where planning not only shapes space, but safeguards access to the ongoing, unfinished project of place itself.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating Identity and Place: The Role of Displacement Camps in Community Rebuilding and Identity Preservation in Sudan</title>
<link href="https://hdl.handle.net/1721.1/162102" rel="alternate"/>
<author>
<name>Sati, Maysaa</name>
</author>
<id>https://hdl.handle.net/1721.1/162102</id>
<updated>2025-08-06T18:54:21Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Navigating Identity and Place: The Role of Displacement Camps in Community Rebuilding and Identity Preservation in Sudan
Sati, Maysaa
Displacement camps are often framed as zones of impermanence: spaces of waiting designed to contain crises, not cultivate futures. Yet, in Kalma Camp in South Darfur, displacement has given rise to a self-organized, complex urban environment shaped by collective labor, cultural resilience, and everyday acts of spatial and political agency. This thesis explores how communities in Kalma have remade space, redefined home, and preserved identity in the face of prolonged uncertainty. Drawing on ethnographic fieldwork, spatial analysis, and critical urban theory, it situates Kalma not as an exception, but as a generative urban formation—an emergent city born from the margins.&#13;
Through chapters that trace the camp’s spatial evolution, intergenerational understandings of belonging, informal governance, cultural production, and political expression, this research challenges dominant humanitarian paradigms that treat camps as temporary and peripheral. It argues that residents are not passive recipients of aid, but planners, builders, and cultural producers who contest displacement through care, memory, and infrastructure. By threading together theoretical insights from scholars such as Malkki, Bhabha, Roy, and Simone with grounded narratives from Kalma, the study reveals how displacement can also be a site of urban possibility.&#13;
In reframing camps like Kalma as sites of urban life, not despite the crisis, but through it, this thesis calls for a fundamental shift in how urban planners, humanitarian actors, and scholars engage with protracted displacement. It invites us to see resilience as planning, care as governance, and the camp not as a space of suspension, but as a place where new urban futures are already being forged.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterization of a Diamond Proton Recoil Telescope for DT Neutron Measurements in the LIBRA Experiment</title>
<link href="https://hdl.handle.net/1721.1/162092" rel="alternate"/>
<author>
<name>Edwards, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/162092</id>
<updated>2025-07-30T03:08:09Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Characterization of a Diamond Proton Recoil Telescope for DT Neutron Measurements in the LIBRA Experiment
Edwards, Emily
The LIBRA project investigates tritium breeding using beam-target style DT neutron generators to irradiate molten salt vessels. A critical aspect of understanding this process is the characterization of the energy and flux anisotropies within the neutron environment, which are inherent to the beam-target neutron generation method. These spectral and flux characteristics directly impact tritium production and the interpretation of experimental results, which makes the neutron field characterization essential for a complete understanding of the tritium breeding system. This paper presents the use of an sCVD diamond detector and an sCVD diamond proton recoil telescope to characterize the neutron environment produced by the DT neutron generator employed in the LIBRA experiments. The results of these measurements provide insight into the neutron flux and energy distributions incident on the breeding salt, enabling a more complete understanding of the neutron input in the LIBRA experimental tritium breeding process.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contested Values of Eco-Developments: Leveraging Private Finance to Integrate Biodiversity into Nusantara’s City Development Framework</title>
<link href="https://hdl.handle.net/1721.1/162090" rel="alternate"/>
<author>
<name>Leung, Yu Hang (Hannah)</name>
</author>
<id>https://hdl.handle.net/1721.1/162090</id>
<updated>2025-07-30T03:08:08Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Contested Values of Eco-Developments: Leveraging Private Finance to Integrate Biodiversity into Nusantara’s City Development Framework
Leung, Yu Hang (Hannah)
Rapid expansion of urban populations has spurred the construction of new cities, contrasted with the heightened urgency to adopt climate risk mitigation and disaster resilience strategies. Along with the global need for Nature-based Solutions (NbS), new eco-developments which are planned within biodiversity hotspots should adopt resilient climate adaptation strategies for long-term benefits. However, these projects are often not financially justified or positioned to sustain long investments and holding periods. This thesis examines the development of Ibu Kota Nusantara (IKN) in Indonesia as an evolving eco-development case study on how biodiversity could be repositioned as a key aspect in investment frameworks.&#13;
Developing new cities and eco-developments tends to rely on external investments, as internal structures navigate the challenges of rapid growth while seeking a self-sustainable equilibrium. For IKN, private investors hesitate to invest in a project that is situated in an unstable political landscape, while low government expenditure and poor governance structures have marred development progress. Based on the inherent need to build to support a growing urban population, this multidisciplinary thesis explores three components that are needed to design an eco-development project: a consistent way to value biodiversity in comparison to development values, proper environmental governance, and sustainable financial instruments to support the initial and operational expenditures of a project. Measurement approaches such as GBS-FI and S&amp;P NBS are able to streamline a corporation’s dependency value of biodiversity, based on valorization models developed by SEEA-EA and the United Nations’ Integrated National Financing Framework. A mixed-methods approach of qualitative case study analysis and in-depth review of existing and potential financial instruments is used to understand the demand and supply side of eco-developments. A Contingent Valuation Method assessing buyers’ Willingness-To-Pay, in addition to a qualitative questionnaire on perceived values of biodiversity, provides insights on local understanding and WTP of premiums in support of elevated costs of eco-developments. The intention of this research is to explore how biodiversity could be recentered as a foundational element of the sustainable development of cities. More broadly, this research seeks to synthesize the interdisciplinary discussions around development, environmental policy and ecological planning, while evaluating the feasibility of innovative financial mechanisms to mobilize capital for large-scale eco-development projects.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Assessment of Digital Age Inclusion: Topic Modeling Seoul’s Digital Governance Platform to Evaluate Elderly Representation</title>
<link href="https://hdl.handle.net/1721.1/162089" rel="alternate"/>
<author>
<name>Lim, Sungmoon</name>
</author>
<id>https://hdl.handle.net/1721.1/162089</id>
<updated>2025-07-30T03:07:42Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Data-Driven Assessment of Digital Age Inclusion: Topic Modeling Seoul’s Digital Governance Platform to Evaluate Elderly Representation
Lim, Sungmoon
This paper examines the intersection of population aging and digital civic government in Seoul, South Korea. As cities worldwide digitize and age simultaneously, understanding elderly citizens' representation in digital governance platforms becomes critical for inclusive urban governance. As a leader in both aging and urban technologization, Seoul serves as an ideal case study. Combining computational analysis of civic queries with qualitative interviews, this study investigates whether elderly residents' concerns are adequately represented in Seoul's e-government platform. Comparing these datasets reveals significant disparities in how elderly concerns are represented digitally: despite Seoul's technological sophistication and digital inclusion efforts, substantial gaps remain in representing elderly citizens' concerns in governance forums, signaling gaps that may undermine age-inclusive development. This research contributes to theoretical understandings of digital democracy and urban aging while offering practical insights for designing more inclusive systems that address the realities of dual urban phenomena—aging and digitization—as they coalesce in cities.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Fluid Dynamics Modeling of Compact Steam Generators</title>
<link href="https://hdl.handle.net/1721.1/162088" rel="alternate"/>
<author>
<name>Jiragoontansiri, Witiwat</name>
</author>
<id>https://hdl.handle.net/1721.1/162088</id>
<updated>2025-07-30T03:08:01Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Computational Fluid Dynamics Modeling of Compact&#13;
Steam Generators
Jiragoontansiri, Witiwat
Compact Steam Generators (CSGs) are vital components in Small Modular Reactors (SMRs), particularly within Integral Pressurized Water Reactor (iPWR) configurations where compactness and high performance are essential. This thesis explores the use of Multiphase Computational Fluid Dynamics (M-CFD) to simulate two-phase flow boiling in CSGs based on Printed Circuit Heat Exchanger (PCHE) technology. Using the commercial CFD code STAR-CCM+, two modeling approaches—the Volume of Fluid (VOF) model and the Two-Phase Thermodynamic Equilibrium (TPTE) model—are applied to simulate both adiabatic and heat transfer conditions within mini-channels. The simulations are validated against experimental data from two sources: an R-134a-based vertical test loop developed at MIT’s Greenlab and a water-based PCHE test section from Kromer’s prior work. Key two-phase flow parameters such as void fraction, pressure drop, and heat duty are evaluated and compared to experimental benchmarks. Calibration methodologies are implemented to improve predictive accuracy. The validated models are then used to simulate realistic CSG operating conditions based on Babcock &amp; Wilcox and NuScale reactor designs. Results indicate that PCHE-based CSGs, despite being smaller, are capable of delivering favorable thermal and hydraulic performance, with slightly better results compared to the existing steam generator design. Overall, the study demonstrates the potential of M-CFD tools to support the design and optimization of CSGs for next-generation nuclear applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Farebox Freedom: An analysis of centralized fare policy interventions relative to the suburbanization of poverty</title>
<link href="https://hdl.handle.net/1721.1/162087" rel="alternate"/>
<author>
<name>Chachra, Vir</name>
</author>
<id>https://hdl.handle.net/1721.1/162087</id>
<updated>2025-07-30T03:07:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Farebox Freedom: An analysis of centralized fare policy interventions relative to the suburbanization of poverty
Chachra, Vir
The United States is witnessing a shift in its geography of poverty, with suburban communities experiencing greater increases in poverty rates relative to urban cores. However, transit service and fare policies have not kept pace with this demographic shift, inadequately meeting the needs of a growing population of lower income riders in the suburbs, particularly those served by higher-cost modes like commuter rail.  &#13;
&#13;
This thesis confronts this evolving dynamic, bridging a research gap between transit fare policy and the suburbanization of poverty. It analyzes seven transit systems across the US through a Spatial Difference-in-Differences research approach, revealing mode-specific shifts in transit cost burdens from 2019 to 2021 and the impacts of these shifts on social vulnerability as defined by the CDC. The thesis also explores federal policy pathways to create greater fare equity in light of this dynamic, either through supporting operations costs for transit agencies or through a flat-fare national transit pass for riders, akin to Germany's Deutschlandticket (D-ticket) program.&#13;
&#13;
Focusing on suburban commuter rail communities across the sampled networks, the analysis finds that in 2021, communities with only commuter rail access and higher-than-average social vulnerability scores were associated with approximately an 11% additional increase in transit cost burdens compared to all other groups, while also experiencing an increase in transit cost burdens overall. Furthermore, a two-fold increase in transit costs as a share of median income in 2021 was correlated with an additional 7.4% rise in social vulnerability index scores for commuter rail communities, relative to those with access to other modes that are closer to the urban core. While these communities have a 38% lower social vulnerability score, the analysis estimated a 60% increase from 2019 to 2021, highlighting a disproportionate increase and challenging the assumption of the wealthy commuter rail suburb.&#13;
&#13;
This increasing sensitivity to transit cost burdens points to a significant ongoing interaction between national trends of suburbanization of poverty and fare policy. Given that many transit agencies face funding constraints and are nationally inconsistent in their low-income fare programs, they may be structurally limited in their ability to address these disparities on their own. This analysis considers lessons from historical policies such as the National Mass Transportation Assistance Act of 1974 and recent international programs like Germany’s D-ticket, to suggest that federal support for transit operations—paired with inclusive, mode-agnostic fare programs—would help address these emerging inequities in transit affordability amid the suburbanization of poverty.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolving Concepts of the Public Interest in Comprehensive Planning</title>
<link href="https://hdl.handle.net/1721.1/162085" rel="alternate"/>
<author>
<name>Tagliani, Jessie</name>
</author>
<id>https://hdl.handle.net/1721.1/162085</id>
<updated>2025-07-30T03:07:34Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Evolving Concepts of the Public Interest in Comprehensive Planning
Tagliani, Jessie
The public interest is an important, yet contested, concept in the field of planning. On the one hand, it offers a normative criterion against which planning decisions can be evaluated and is traditionally viewed as the source from which planners derive their authority. However, the precise nature of the concept is fiercely debated by both planning practitioners and theorists, with some going so far as to denounce its existence. Today, the increasingly pluralist and complex nature of communities leads to questions over the concept’s relevance and applicability. In the second half of the twentieth century, planning theoreticians began assembling a body of literature surrounding this concept, mostly in the form of typologies of the definitions that have been ascribed to the public interest. However, my review of the literature revealed that the study of the public interest as a normative criterion for planning has almost entirely taken place in the realm of planning theory. Therefore, I sought to add to the empirical scholarship concerning the public interest by analyzing it from two angles: first, I sought to understand how the public interest as a historical concept has changed and evolved alongside the field of planning throughout the twentieth century. Second, I chose the field of comprehensive planning as my analytical lens due to its longevity across the history of the planning profession and its close affiliation with the concept of the public interest. Specifically, I sought to analyze how the public interest is manifested in a series of comprehensive plan documents and thereby illustrate how the concept’s operationalization has evolved over the course of the past half century of planning. I began my analysis by drawing on over fifty years of scholarship to construct my own typology of the main definitions of the public interest. I then applied these definitions to four different models of comprehensive planning that were developed between 1962 and 2012. I also obtained a second perspective on the evolution of the concept of the public interest by examining a series of comprehensive plans adopted by the City of Annapolis between 1964 and 2022. The two analyses revealed very different trajectories in the evolution of the public interest as an empirical concept. On the one hand, the four models demonstrate a fairly linear evolution in what is understood to constitute the substance and process of defining the public interest, which can be broadly classified as achieving social equity, the responsible stewardship of natural resources, and authentic citizen involvement. By contrast, the five Annapolis comprehensive plans did not neatly follow the same evolution. Instead, a recurring concern for many of the Annapolis plans is the conservation of the physical city through the control of the city’s growth, the careful maintenance of its economy, and the preservation of its urban fabric. However, the more recent plans demonstrate a stronger commitment to the social values and processes espoused by the four planning models, indicating that there is growing consensus in the field of planning today regarding an empirical understanding of the public interest.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prioritizing Sidewalk Accessibility Improvements for the Aging Population and Individuals with Disabilities: A Case Study of Bandung, Indonesia</title>
<link href="https://hdl.handle.net/1721.1/162084" rel="alternate"/>
<author>
<name>Kurniaputri, Aulia</name>
</author>
<id>https://hdl.handle.net/1721.1/162084</id>
<updated>2025-07-30T03:07:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Prioritizing Sidewalk Accessibility Improvements for the Aging Population and Individuals with Disabilities: A Case Study of Bandung, Indonesia
Kurniaputri, Aulia
While walking is fundamental to inclusive urban mobility, major cities in Indonesia continue to face challenges in providing barrier-free pedestrian infrastructure, even for individuals without physical impairments. As the population of older adults in Indonesia continues to grow, the risk of disability within this demographic will increase, contributing to the overall number of individuals with disabilities. In Bandung City, there is a rising awareness across various sectors of society regarding the rights of older adults and individuals with disabilities to navigate sidewalks safely. These trends highlight the importance of improving inclusivity on city streets, where people travel daily to reach their essential and desired destinations.&#13;
&#13;
This thesis explores an evidence-based methodology to prioritize sidewalk accessibility improvements for older adults and individuals with physical disabilities, aiming to develop a prioritization strategy that targets maximum impact. Accessibility scores and pedestrian flow counts are calculated with the Urban Network Analysis (UNA) toolbox. Three types of user groups—non-disabled individuals, cane or crutch users, and wheelchair users—are assigned penalties for each type of barrier on a sidewalk segment, resulting in varying perceived distances. Those with physical mobility limitations perceive longer distances than those without. To identify priority locations, a system-selection ranking is applied that considers sidewalk segments with both high-frequency usage and significant discrepancies between actual and perceived lengths. The methods outlined in this thesis are scalable for use in other neighborhoods and cities, thereby supporting data-driven decision-making in pedestrian infrastructure improvements.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Relationality and Reciprocity in Civic Design: Public Engagement and Offshore Wind Development in the Gulf of Maine</title>
<link href="https://hdl.handle.net/1721.1/162083" rel="alternate"/>
<author>
<name>Bendixen, Amanda</name>
</author>
<id>https://hdl.handle.net/1721.1/162083</id>
<updated>2025-07-30T03:08:05Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Relationality and Reciprocity in Civic Design: Public Engagement and Offshore Wind Development in the Gulf of Maine
Bendixen, Amanda
Offshore wind projects are inherently complex, requiring the integration of social, environmental and technical planning. Meaningful engagement with communities is critical to ensuring procedural fairness, trust and equity throughout the development process. Yet, the role of civic design in shaping these outcomes remains unexplored. This thesis investigates how relationality and reciprocity are fostered through the civic design of public engagements for offshore wind development in the Gulf of Maine. Through qualitative analysis of public meeting transcripts – using thematic coding and memo writing in Atlas.ti – this study identifies civic design elements and recurring engagement themes. &#13;
&#13;
The findings highlight relational accountability as a mechanism for building trust, transparency and procedural fairness. They also explore how civic design can support reciprocity, while revealing how structural barriers can undermine relationality. This research demonstrates the possibilities and limitations of civic design in fostering relational and reciprocal public engagements. It concludes with recommendations for incorporating civic design elements that promote sustained, reciprocal relationships, accountability and long-term community involvement in offshore wind development.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Safety and Surveillance: New Possibilities for Public Light After Dark</title>
<link href="https://hdl.handle.net/1721.1/162082" rel="alternate"/>
<author>
<name>Corlett, Lucy</name>
</author>
<id>https://hdl.handle.net/1721.1/162082</id>
<updated>2025-07-30T03:07:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Beyond Safety and Surveillance: New Possibilities for Public Light After Dark
Corlett, Lucy
As cities refocus planning and design goals in response to evolving global standards for urban well-being, sustainability, and spatial equity, research on best practices and innovative considerations for the public realm has expanded. As a result, a new movement in research and guidance on public light has emerged. Rather than continuing to view lighting as a punitive means of enforcing surveillance and public safety, this movement in research and practice advances radically inclusive, responsive design methods that use light to redress inequality in the built environment. This thesis builds on a growing body of research that establishes the powerful influence of light on human experience and perception, initiating a dialogue between different models for place-based approaches to lighting design in shared public spaces. Drawing on in-depth studies of these models, interviews with stakeholders, scholarship, policy, and design and planning practice, this thesis recommends that city planners serve as the bridge between ideation and implementation in a new era of urban illumination.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Community Benefits Agreements for Equitable Renewable Energy Siting: The Importance of Negotiation Power and Stakeholder Engagement</title>
<link href="https://hdl.handle.net/1721.1/162081" rel="alternate"/>
<author>
<name>Paul, Sanjana</name>
</author>
<id>https://hdl.handle.net/1721.1/162081</id>
<updated>2025-07-30T03:08:02Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Community Benefits Agreements for Equitable Renewable Energy Siting: The Importance of Negotiation Power and Stakeholder Engagement
Paul, Sanjana
As renewable energy development accelerates across the United States, conflicts over project siting have become increasingly common; often rooted not in opposition to clean energy itself, but in concerns over fairness, community inclusion, and long-term accountability. This thesis investigates how Community Benefits Agreements (CBAs) can serve as tools to address these challenges, focusing on how negotiation dynamics, mediation, and stakeholder engagement shape the equity and enforceability of CBAs in renewable energy siting. Using a mixed-methods approach, this research draws on qualitative case studies, stakeholder interviews, and legal-policy analysis, alongside a limited quantitative assessment of CBA implementation outcomes. The study examines both the procedural and structural conditions that influence how benefits are negotiated, formalized, and monitored. By analyzing cases that include third-party facilitation, amendment mechanisms, and diverse stakeholder participation, the thesis identifies best practices for designing CBAs that move beyond performative engagement and toward genuine community empowerment. Ultimately, this research offers a multidimensional understanding of CBAs as emergent governance instruments situated at the intersection of infrastructure planning, environmental justice, and public accountability. It concludes by proposing a model state-level regulatory framework to support equitable CBA development and embed principles of justice into the future of renewable energy siting.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Public Health Governance at the Watershed Scale: Exploring Opportunities for Multi-sector Governance to Advance Planetary Health in Northeastern Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/162080" rel="alternate"/>
<author>
<name>Morales, Daniela</name>
</author>
<id>https://hdl.handle.net/1721.1/162080</id>
<updated>2025-07-30T03:07:41Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Public Health Governance at the Watershed Scale: Exploring Opportunities for Multi-sector Governance to Advance Planetary Health in Northeastern Massachusetts
Morales, Daniela
Many health and environmental regulations apply only within specific political or administrative boundaries, creating a mismatch between the spatial scale of natural systems which impact health and the spatial extents of relevant regulations. For example, in Massachusetts, local Boards of Health govern specific public health and environmental issues through spatialized regulatory powers that carry significant weight in both local and larger geopolitical contexts. Despite the fact that watershed management influences regional public health outcomes through impacts to water quality, water quantity, and climate resilience measures, the organizations focused on watershed management do not have influence that matches the power of public health entities. This thesis explores how watershed management decisions could have similar weight to other public health governance decisions by exploring the specific speculative case of what interest there is in, and what barriers there are to, watershed management organizations in Northeastern Massachusetts working as public health governing units, such as local Boards of Health. Using a mixed methods approach, combining organizational and policy analyses with semi-structured key informant interviews and surveys, I assessed the opportunities, barriers, and interest in multi-sector watershed and health governance to advance Planetary Health in Northeastern MA. The findings showed low receptiveness towards adopting a new regional governance system due to both perceived and actualized legal, organizational and social barriers. The findings also highlighted an interest in strengthening existing regional partnerships and building new collaborations across the fields of public health and watershed management for more effective approaches to environmental health decision making. These results suggest a need for additional interdisciplinary training for both sectors, and the creation of new spaces and relationships for collaboration between actors involved in public health, watershed management, and related fields.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Silence to Sankofa: The Role of Archives in Addressing Urban Renewal’s Displacement History</title>
<link href="https://hdl.handle.net/1721.1/162079" rel="alternate"/>
<author>
<name>Mohamed, Menatalla</name>
</author>
<id>https://hdl.handle.net/1721.1/162079</id>
<updated>2025-07-30T03:07:54Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Silence to Sankofa: The Role of Archives in Addressing Urban Renewal’s Displacement History
Mohamed, Menatalla
In the post-World War II era, urban renewal was designed as a path towards the revitalization of American cities through public investment into the redevelopment of ‘blighted’ areas. Through eminent domain takings, urban renewal projects led to the forced relocation of residents from their homes and neighborhoods, with a disproportionate impact on Black, immigrant, and low-income communities across the country. The archives of the renewal period hold the story of this widespread displacement and are of significant value for contemporary planning practice. Through the lens of two case studies, this thesis explores how and why urban renewal archives are being revisited today to address this displacement history through institutional and community approaches to memorialization. In Cambridge, MA, the Cambridge Redevelopment Authority (CRA) is an example of an agency drawing on its own archive to publicize its role in past forced relocation through its use of eminent domain. In Rochester, NY, Clarissa Uprooted is a public history and community building project centered around the story of Clarissa Street, a historically Black neighborhood that was demolished for renewal in the 1960s. Through document analysis and interviews, I examine how these efforts to activate urban renewal archives and better understand the scope and impact of forced relocation provide avenues for planners and community members to remember the past, acknowledge systemic harms, and reflect on repair. Despite the different positionalities of the CRA and Clarissa Uprooted, a comparative approach also highlights how both organizations have created opportunities to unearth histories of dissent to urban renewal, more fully recognize the legacy of commercial displacement, and imagine avenues to planning, policy, and institutional change. This research demonstrates the significance of local archival initiatives that draw upon the past to better position planners and communities to face the urban challenges and inequities of the present and future.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesis and oxidation behavior of Cr alloyed uranium borides at high temperatures</title>
<link href="https://hdl.handle.net/1721.1/162078" rel="alternate"/>
<author>
<name>Moeykens, Riley S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162078</id>
<updated>2025-07-30T03:07:27Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Synthesis and oxidation behavior of Cr alloyed uranium borides at high temperatures
Moeykens, Riley S.
Following the nuclear accident at Fukushima Daiichi Power Station in 2011, an urgent need for safer, more economical, and versatile nuclear fuels has arisen. In recent years, uranium boride (as a tetraboride and diboride) has been further investigated as a candidate fuel form for its high thermal conductivity, high melting point, high uranium loading, and potential for dual use as a fuel and burnable absorber. In this work, the synthesis, structural behavior, and oxidation behavior of uranium borides and chromium- and yttrium-alloyed uranium borides are investigated. The structure of the synthesized uranium borides and chromium- and yttrium-alloyed uranium borides was probed using synchrotron X-ray Powder Diffraction (XRD) and Pair Distribution Function (PDF) analysis with in-situ heating. The methods and challenges in synthesizing uranium boride and chromium- and yttrium-alloyed uranium boride, as well as the consequential thermophysical and oxidation properties of these potential fuel forms, are elucidated in this work.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Semi-Autonomous, Highly Automated, and Remotely Operated (SAHARO) Nuclear Reactors</title>
<link href="https://hdl.handle.net/1721.1/162077" rel="alternate"/>
<author>
<name>Hallinan, Aidan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/162077</id>
<updated>2025-07-30T03:07:51Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Towards Semi-Autonomous, Highly Automated, and Remotely Operated (SAHARO) Nuclear Reactors
Hallinan, Aidan M.
In the United States, comprehensive reactor design certification, site permitting, and operating licensing processes exist to ensure the safe and reliable operation of nuclear power plants (NPPs). Most of these plants have belonged to the same design class: large, centrally located Light Water Reactors (LWRs). Thus, our regulatory processes were tailored for their phenomenology and the unique challenges associated with their operation and maintenance. However, these types of plants may be impractical for specific energy markets, where smaller, non-LWR, highly flexible, and multi-faceted NPPs can be more optimal. The novelty of these designs and their use cases has further inspired new operating paradigms, which will be referred to as Semi-Autonomous, Highly Automated, or Remote Operations (SAHARO) in this thesis. While some of these new reactors have seen limited progress in design certification and licensing efforts under current regulatory practices, there remains little precedent for these novel operating approaches. To facilitate discussion, guide designers, and inspire regulatory progress, I begin by looking at existing regulations, licensing practices, technical guidelines, and other rules that govern the NPP design and operations. I then dive into current applications and discussions of the sub-components of SAHARO, across different technical domains as well as nuclear power, to gather technical, operational, and regulatory insights. To provide reactor design evaluators with an additional tool, I define a Risk-Complexity Score (RCS), which couples simple system complexity quantification with existing risk measures and can support risk-informed system analyses. I then conduct an internet network Quality of Service (QoS) test to demonstrate one of the many important considerations for remote operations stress-testing, and propose an approach for evaluation within the SAHARO licensing process: the “SAHARO Coping and Minimum Inventory Assessment Strategy.” Lastly, based on my literature and industry reviews, I have constructed a framework that informs reactor designers on how to iterate through the SAHARO-based design process, while also enabling vendor-regulator collaboration and shared learning. Ultimately, I aim to help designers and regulators in the nascent fields of autonomous, automated, and remote NPP operations identify the key questions these technologies and systems must address to ensure safe, effective, and practical application.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralizing Power: Enabling Local Energy Resilience and Equity in Accra</title>
<link href="https://hdl.handle.net/1721.1/162074" rel="alternate"/>
<author>
<name>Kulkarni, Nikita</name>
</author>
<id>https://hdl.handle.net/1721.1/162074</id>
<updated>2025-07-30T03:06:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Decentralizing Power: Enabling Local Energy Resilience and Equity in Accra
Kulkarni, Nikita
Over 600 million people in Sub-Saharan Africa lack access to electricity. While Ghana is projected to achieve universal access by 2030, this national milestone obscures lived experiences of energy insecurity—particularly in urban centers like Accra. Despite a reported 91% grid connection rate, only 17% of Accra’s households consider their electricity supply reliable (Afrobarometer, 2022). Traditional, binary metrics—focused solely on grid connection—fail to capture essential social dimensions such as reliability, affordability, equity, and resilience, particularly under intensifying climate and urban pressures. My thesis investigates persistent energy insecurity in Accra, Ghana’s capital, through the lens of dumsor—a term used to describe recurring power outages that disrupt daily life and expose the fragility of the centralized electricity system. Drawing on the frameworks of splintered urbanism and the techno-politics of infrastructure failure, the thesis explores how dumsor reflects institutional fragmentation, political contestation, and inequality in the energy infrastructure space. In response to dumsor, I examine whether decentralized energy systems, particularly solar, can offer a pathway to local energy resilience—defined here as the place-based capacity to withstand dumsor through cleaner, more affordable alternatives for sustainable and reliable power. The study combines a technical assessment of Accra’s solar potential with a critical analysis of policy frameworks, climate finance mechanisms, and political agendas. Grounded in fieldwork and interviews with stakeholders across the energy value chain—from regulators and municipal actors to utilities, solar providers, financiers, residents, and advocacy groups—my thesis identifies on-the-ground barriers to and opportunities for the energy transition. While distributed solar presents a promising alternative with broad reach, persistent challenges in affordability, coordination, and delivery capacity threaten its scalability. Without targeted policy interventions, there is a risk of reinforcing a new form of energy infrastructure splintering—where only the affluent benefit. My thesis concludes that addressing energy insecurity in Accra requires strategic institutional and policy reforms to reconfigure governance, empower municipalities, and enable inclusive financing and policy at the most local level to enable solar alternatives. Energy decentralization offers a promising path forward, but the thesis underscores the ongoing role of the state as a critical enabler of an energy transition that is sustainable and just.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Vacant to Valuable: Building Community Wealth through Brownfield Redevelopment in Legacy Industrial Cities</title>
<link href="https://hdl.handle.net/1721.1/162073" rel="alternate"/>
<author>
<name>Jex, Sara Lynn</name>
</author>
<id>https://hdl.handle.net/1721.1/162073</id>
<updated>2025-07-30T03:07:31Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Vacant to Valuable: Building Community Wealth through &#13;
Brownfield Redevelopment in Legacy Industrial Cities
Jex, Sara Lynn
Recent federal investments in domestic manufacturing have renewed economic interest in legacy industrial cities across the United States. As these places attract new development, it is critical to safeguard against repeating the harms of the 20th-century exodus of industry and manufacturing jobs—when offshoring, suburbanization, and discriminatory housing policies deepened spatialized racial and economic inequalities. How can communities retain the wealth generated by new industrial investments, even if companies leave? This thesis explores how industrial brownfield redevelopment might utilize community wealth-building (CWB) strategies to advance equitable economic development. Focusing on the work of the Site Readiness for Good Jobs Fund in Cleveland, Ohio—a nonprofit preparing long-vacant industrial land for job-dense uses—it examines the potential for mission-driven organizations to use brownfield redevelopment to anchor wealth locally and proactively resist displacement. By analyzing case studies in Buffalo, Milwaukee, Chicago, and Philadelphia, the research tackles three questions: How do mission-driven organizations deliver community benefits through industrial brownfield redevelopments? In what ways do CWB models reshape how capital flows through redevelopment projects? And, what questions and decisions must the Site Readiness Fund consider to build lasting community wealth in Cleveland? Findings suggest that industrial brownfield redevelopment, when paired with strategic partnerships, site control, and a clear vision, offers a unique opportunity to implement CWB models. These strategies can help mission-driven organizations redistribute the risks and rewards of necessary public investments in brownfields and build trust with the community, ensuring that residents surrounding these reactivated sites benefit not just from new jobs, but from ownership and long-term economic power over their futures. The thesis concludes by applying these lessons to the Site Readiness Fund, outlining potential paths forward that embed economic democracy in the redevelopment of Cleveland’s legacy industrial areas.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Burning S(e)oul: A Body for Cremation</title>
<link href="https://hdl.handle.net/1721.1/162072" rel="alternate"/>
<author>
<name>Kwun, Namhi</name>
</author>
<id>https://hdl.handle.net/1721.1/162072</id>
<updated>2025-07-30T03:06:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Burning S(e)oul: A Body for Cremation
Kwun, Namhi
Every year, there are over 70,000 fatalities around Seoul, with only two operating crematoria in the city; that is over 100 bodies a day that each institution needs to process efficiently. By May 26, it would have been six years since my grandfather was gone in those flames. Threading the remnants of mourning, Burning S(e)oul, in the form of a short film, is a dialogue between “absences” of bodies and architecture. It is presented as a triptych along three parallel timelines divided into five tableaux. Narrating the aftermaths of death, it reflects the bereaved, the deceased, and the workers’ perspectives along three mandatory days of grieving. Absence in this paradigm is not solely physical or emotional but rather phenomenological—what appears a quotidian existence of oneself is stripped of its corpse, reaffirming that the inherent genius loci of the crematorium instead reflects a broader influence that institutions have experienced since post-war Korea. It argues that the systematized practice of death processing is an apparatus used to sever the genealogy of individual bodies from their role in affirming personal and communal kinships. Embedded within its architectural design, this alienation dismantles time by shifting the condition of death processes to an engineered state, rather than a historical or material one. This detachment is emblematic of the country’s postwar trajectory, where rapid modernization prioritized efficiency over continuity, severing longstanding rituals that once bound personal grief to communal memory. The friction between an engineered present and an inherited past manifests as a form of cultural desynchronization—one where the ostensibly modern remains haunted by the traditional. This shift extends beyond mere technical or practical concerns; it represents a deliberate method of assimilating a nonlinear societal modernization—one that, in its pursuit of progress, distances itself from historical trauma. Yet this tension does not merely mark a transition; it accumulates as a generational melancholy, where the urgency of progress leaves grief suspended in an unresolved state, neither fully severed nor meaningfully preserved.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Risks in Voluntary Forest Carbon Offsets Using Open Data: A Hybrid Framework Integrating Retrieval-Augmented Generation in LLMs and Geospatial Analytics</title>
<link href="https://hdl.handle.net/1721.1/162071" rel="alternate"/>
<author>
<name>Xu, Ziqing (Becky)</name>
</author>
<id>https://hdl.handle.net/1721.1/162071</id>
<updated>2025-07-30T03:07:36Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Analyzing Risks in Voluntary Forest Carbon Offsets Using Open Data: A Hybrid Framework Integrating Retrieval-Augmented Generation in LLMs and Geospatial Analytics
Xu, Ziqing (Becky)
The credibility of voluntary carbon markets hinges on the quality of carbon offset projects, particularly in forestry and land-use sectors where claims of additionality and emissions reductions are often disputed. This paper introduces a novel, open-source approach to evaluating carbon offset projects by integrating open datasets, satellite-based remote sensing, and large language models (LLMs). Focusing on additionality and baseline integrity, the study examines existing challenges—including inflated baselines, inconsistent standards, leakage risks, and limited transparency—and proposes a system to automate early-stage project assessment. The platform combines AI-driven document analysis and geospatial data processing to evaluate risk factors such as additionality, leakage, and policy compliance, offering stakeholders an accessible, scalable tool to identify high-integrity carbon credits and mitigate greenwashing. This work aims to enhance transparency, accountability, and trust in the voluntary carbon market through data-driven, user-friendly decision support.
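To make the retrieval step concrete, here is a minimal sketch of RAG-style document retrieval: embed project-document chunks, rank them by cosine similarity to a risk question, and assemble the top matches into a prompt. The hashed bag-of-words embedding, the example chunks, and the prompt format are illustrative stand-ins for the real encoder, project documents, and LLM interface the platform assumes.

# Hypothetical sketch of the retrieval step in a RAG-based project screener.
# Assumes chunks of a carbon-offset project document are already extracted;
# the hashed "embedding" is a stand-in for a real sentence-embedding model.
import numpy as np

def embed(text, dim=256):
    """Toy hashed bag-of-words embedding (stand-in for a real encoder)."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(query, chunks, k=3):
    """Return the k chunks most cosine-similar to the query."""
    q = embed(query)
    scores = [float(q @ embed(c)) for c in chunks]
    order = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in order]

chunks = [
    "Baseline scenario assumes 2.1%/yr deforestation in the project zone.",
    "Leakage belt monitoring relies on annual Landsat composites.",
    "Legal requirements already mandate partial forest protection.",
]
context = "\n".join(retrieve("Is the additionality claim credible?", chunks))
prompt = f"Context:\n{context}\n\nAssess additionality risk for this project."
print(prompt)  # would be sent to an LLM in the full pipeline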
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduction of Radiation Produced in Ion Implantation Devices, and Measurement of Some Relevant Cross-Sections</title>
<link href="https://hdl.handle.net/1721.1/162067" rel="alternate"/>
<author>
<name>Zangi, Arthur S.</name>
</author>
<id>https://hdl.handle.net/1721.1/162067</id>
<updated>2025-07-30T03:07:10Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Reduction of Radiation Produced in Ion Implantation Devices, and Measurement of Some Relevant Cross-Sections
Zangi, Arthur S.
Ion implantation devices, machines which can very precisely dope semiconductors using beams of accelerated charged particles, have in recent years begun to be used to implant high-energy light ions with energies greater than 1 MeV. This has caused unprecedented production of neutron and gamma radiation, particularly of neutrons from the ¹³C(α,n)¹⁶O reaction, creating an unacceptable radiation hazard. To address this issue, we undertake dose mapping and modeling efforts to create simulation tools in Geant4 which can accurately predict dose rates on the Axcelis VXE LT. &#13;
&#13;
Existing physics tools for modeling nuclear reactions have been shown to produce non-physical results at incident particle energies of 1-2 MeV, as these tools are frequently used for modeling reactions with energies into the GeV or even TeV range. To address these deficiencies, we construct a new drop-in physics model which uses relativistic kinematic equations to precisely predict the energy and angular distributions of secondary particles produced in Geant4 at low energies. This model relies on accurate cross-section data to describe the reaction; to address gaps in the literature on the two neutron-producing reactions of interest to this work, we measure the angle-dependent cross-section of the ¹³C(α,n)¹⁶O reaction at 7 angles, at the 2.605 and 2.670 MeV resonances, and we measure the total cross-section of the ²⁹Si(α,n)³²S reaction at 2.6 and 2.7 MeV.&#13;
&#13;
By implementing the new physics model and adding new cross-section data to the model of the ion implantation device, we are able to produce a high-fidelity simulation of radiation production and transport in ion implantation devices. Using this tool, we then propose solutions to mitigate radiation production within the ion implanter, reducing the radiation hazards of high energy ion implantation devices.
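To make the two-body kinematics concrete, the sketch below solves the standard closed-form (nonrelativistic) equations for the outgoing neutron energy as a function of lab angle. The thesis model is fully relativistic, so this is a simplified stand-in; the integer mass numbers and Q ≈ 2.216 MeV for ¹³C(α,n)¹⁶O are approximations used only for illustration.

# Nonrelativistic two-body kinematics A(a,b)B: solve for the ejectile
# energy E_b at lab angle theta via sqrt(E_b) = r + sqrt(r^2 + s).
# Sketch only: integer mass numbers and Q = 2.216 MeV approximate
# the 13C(alpha,n)16O reaction; the thesis uses relativistic equations.
import math

def ejectile_energy(E_a, theta_deg, m_a=4.0, m_b=1.0, m_B=16.0, Q=2.216):
    """Lab-frame energy (MeV) of ejectile b for beam energy E_a (MeV)."""
    theta = math.radians(theta_deg)
    r = math.sqrt(m_a * m_b * E_a) * math.cos(theta) / (m_b + m_B)
    s = (E_a * (m_B - m_a) + Q * m_B) / (m_b + m_B)
    return (r + math.sqrt(r * r + s)) ** 2

for angle in (0, 45, 90, 135):
    print(f"E_n at {angle:3d} deg for E_alpha = 2.605 MeV: "
          f"{ejectile_energy(2.605, angle):.3f} MeV")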
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Radiation Effects on Thermal Properties of Advanced Nuclear Materials</title>
<link href="https://hdl.handle.net/1721.1/162066" rel="alternate"/>
<author>
<name>Johnston, Maren</name>
</author>
<id>https://hdl.handle.net/1721.1/162066</id>
<updated>2025-07-30T03:07:40Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Radiation Effects on Thermal Properties of Advanced Nuclear Materials
Johnston, Maren
Understanding the effects of irradiation on critical thermophysical properties is fundamental for the advancement of next-generation nuclear systems operating in high-flux neutron and gamma environments. Zirconium hydride (ZrH) and yttrium hydride (YH) have emerged as promising neutron moderating materials due to their exceptional hydrogen density leading to superior moderating power. Yet, the radiation-induced microstructural evolution and its correlation to macroscopic thermal transport phenomena remain insufficiently characterized.&#13;
&#13;
In this work, ZrH and YH specimens were characterized pre- and post-irradiation via laser flash analysis, high-resolution dilatometry, and differential scanning calorimetry. Comparative analysis revealed that even low-fluence neutron irradiation induced complex defect clusters that degraded thermal diffusivity, while the crystallographic lattice parameters, vibrational energy states (inferred from thermal expansion measurements), and heat capacity exhibited an inconclusive response to radiation damage.&#13;
&#13;
To address limitations in current characterization methods for large-scale, anisotropic composite nuclear materials, we developed an advanced thermal transport measurement facility using infrared photothermal excitation. This platform enables spatially-resolved thermal diffusivity mapping of silicon carbide (SiC) composites—materials with complex three-dimensional fiber arrangements being evaluated for accident-tolerant fuel cladding applications. Complementary Thermal Conductivity Microscopy (TCM) measurements conducted at Idaho National Laboratory provided microscale resolution of constituent thermal properties, establishing a multi-scale characterization approach that bridges microscopic thermal transport mechanisms with bulk composite performance. These findings advance the qualification of advanced nuclear materials, enabling more accurate thermomechanical modeling and performance prediction under the extreme conditions of next-generation reactors.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bragg Coherent Diffraction Imaging of Metal Microcrystals Using a Multipurpose In Situ Cell Design</title>
<link href="https://hdl.handle.net/1721.1/162065" rel="alternate"/>
<author>
<name>Hultquist, Riley J.</name>
</author>
<id>https://hdl.handle.net/1721.1/162065</id>
<updated>2025-07-30T03:07:23Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Bragg Coherent Diffraction Imaging of Metal Microcrystals Using a Multipurpose In Situ Cell Design
Hultquist, Riley J.
Structural materials are a key limiting factor in the safety, longevity, and efficiency of nuclear power plants. Advanced metal alloys show great promise for use in reactor environments, but ensuring their reliability requires a fundamental understanding of their microstructural evolution under extreme conditions. In situ X-ray experiments offer a powerful means to investigate nanoscale defect evolution under reactor-relevant conditions. Bragg coherent diffraction imaging (BCDI), a synchrotron X-ray technique, enables high-resolution 3D imaging of degradation processes. Combined with an experimental electrochemical cell, BCDI is a promising tool for providing insight into the problems facing advanced materials in next-generation reactor designs. In this work, a custom-designed electrochemical cell, successfully adapted for use at four beamlines, was developed and used to demonstrate in situ corrosion and hydrogen embrittlement (HE) of nickel (Ni) and copper (Cu) microcrystals. HE experiments confirmed the hydrogen evolution reaction (HER) at Cu surfaces and bulk embrittlement, using a removable silver/silver chloride (Ag/AgCl) electrode to maintain a stable reference potential. The cell’s chemical durability was demonstrated during more than 30 hours of operation, wherein Ni microcrystals were subjected to boric acid (B(OH)₃) and lithium hydroxide (LiOH) to simulate the corrosive coolant chemistry of pressurized water reactors (PWRs). BCDI revealed the evolution of phase and dislocations in a Ni microcrystal under these conditions, affirming its power as a nanoscale measurement tool. Furthermore, BCDI provided direct evidence of lattice expansion in Cu in response to cathodic reduction of hydrogen. Additional analysis reveals a selective beam relaxation effect on Ni microcrystals, providing further insight into radiation-material interactions. The findings of this work lay important groundwork for future advanced alloy development utilizing user-friendly in situ experimental cells.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Car-Free Living: Shared Micromobility and Public Transit Interactions in Chicago</title>
<link href="https://hdl.handle.net/1721.1/162064" rel="alternate"/>
<author>
<name>Joyce-Johnson, Seamus C.</name>
</author>
<id>https://hdl.handle.net/1721.1/162064</id>
<updated>2025-07-30T03:06:58Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enabling Car-Free Living: Shared Micromobility and Public Transit Interactions in Chicago
Joyce-Johnson, Seamus C.
Shared micromobility/bikeshare services and public transit both offer travel alternatives to the automobile in urban areas. While these services might be viewed as competitors in the urban mobility space, this thesis argues that each benefits from the other as part of a “package of options” available to the car-free or car-lite urban resident that together provide a comprehensive replacement for auto-mobility. This work centers on the Chicago mobility context. It compares shared micromobility systems in Chicago, Los Angeles, Austin, Pittsburgh, and Washington, D.C., each of which has varying levels of transit integration, ridership, ownership models, and fares. It finds that transit agency ownership of shared micromobility systems appears not to be a panacea and that truly integrated fares are not present even in agency-owned systems. It also finds that lower fares are present in systems with greater levels of public subsidy, regardless of the ownership model. The second part of the thesis characterizes the specific interactions between Divvy, Chicago’s main scooter- and bikeshare system, and the Chicago Transit Authority (CTA). It tests the suitability of novel data sources, including CCTV footage and CTA farecard transactions, for inferring transfers between the two systems and finds that existing spatiotemporal inference methods do not capture the wide heterogeneity in transfer rates among rail stations. Although Divvy has stations near most CTA rail stations, there is room for improvement in the rapidity of these transfers. Using GIS and open-source routing tools, the thesis finds an average walk time of 2.1 minutes from CTA entrances to the nearest Divvy station and suggests high-priority relocations. The third part of the thesis presents preliminary results from a survey of Chicago-area residents probing their attitudes and behaviors regarding shared micromobility and public transit. The survey results showed some evidence of complementary use between the two modes. The thesis concludes with a set of recommendations for the CTA regarding improvements in its integration with Divvy.
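The proximity analysis can be approximated in a few lines. The sketch below uses straight-line (haversine) distances and a standard planning walk speed as crude stand-ins for the network routing the thesis performs; the coordinates are invented for illustration.

# Crude stand-in for the GIS analysis: straight-line (haversine) distance
# from each CTA entrance to its nearest Divvy station, converted to walk
# time. Real network routing gives longer, more accurate times.
import math

WALK_SPEED_M_PER_MIN = 80.0  # ~4.8 km/h, a common planning assumption

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative coordinates only (not real station locations).
cta_entrances = {"Example Stop": (41.8781, -87.6298)}
divvy_stations = [(41.8785, -87.6310), (41.8750, -87.6250)]

for name, (lat, lon) in cta_entrances.items():
    nearest = min(haversine_m(lat, lon, s_lat, s_lon)
                  for s_lat, s_lon in divvy_stations)
    print(f"{name}: {nearest / WALK_SPEED_M_PER_MIN:.1f} min walk")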
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Subaltern Spaces in the Ancient City: Cultural Identity, Spatial Memory, and Networks of Meaning in Roman Pompeii</title>
<link href="https://hdl.handle.net/1721.1/162062" rel="alternate"/>
<author>
<name>Dufour, Curtis</name>
</author>
<id>https://hdl.handle.net/1721.1/162062</id>
<updated>2025-07-30T03:07:13Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Subaltern Spaces in the Ancient City: Cultural Identity, Spatial Memory, and Networks of Meaning in Roman Pompeii
Dufour, Curtis
This thesis is about subaltern spaces and identities in the Roman colony of Pompeii—an ancient city notably destroyed and preserved by the eruption of Vesuvius in 79 CE; one that has been widely studied for its preservation of a Roman urban environment that was ‘frozen in time’. The excellent preservation of the site reveals a colonial material record that has long encouraged terminal narratives of Roman acculturation, so-called Romanization, which have devalued the plurality of identities and meanings found in the dispersed spaces and imageries of the ancient city. Rejecting this unilinear narrative of colonization, this thesis instead examines the networks of meaning tied to subaltern spaces, architectures, and imageries of Pompeii under Roman colonial rule. &#13;
&#13;
In doing so, this thesis adopts a middle-range approach to the study of Pompeii’s spaces—giving attention to the distinct elements of the material record while acknowledging their interrelations that form networks of meaning stretching across time, space, and culture. These networks shaped and collated the distinctive spatial and imagistic elements constructed in the city under Roman rule—creating cohesive and legible spaces that recursively engaged with the diverse population of the city. Engaging in a ‘peopling’ of the past—that is, reimagining the lived experiences of subaltern Pompeian residents within the ancient colonial city—this thesis explores how networks of meaning led to the persistence, subsidence, and emergence of subaltern identity spaces within the ancient colonial city—spaces that were erased, appropriated, and peripheralized under Roman colonial rule. &#13;
&#13;
Through a detailed analysis of the networked spaces in the city—employing methodological frameworks from urban planning, social geography, and urban ethnography—this thesis tracks the presence of the proposed networks of meaning attached to subaltern spaces within the spatial and imagistic environment of the Colonia Cornelia Veneria Pompeianorum. In doing so, this thesis finds that the plurality of identity spaces in Pompeii cannot be understood through top-down, unilinear narratives of domination and erasure; rather, they must be apprehended as dynamic social and spatial features wherein subaltern Pompeian identities persisted within the very frameworks intended to marginalize them—producing hybridized spaces, syncretized architectural forms, and alternative discourses of place defined by the networked meanings that made the city legible to the diverse individuals who inhabited it.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Parking to Parcels: The Potential for Microhubs in New York City’s Parking Garages</title>
<link href="https://hdl.handle.net/1721.1/162061" rel="alternate"/>
<author>
<name>Fabris-Green, Sarafina</name>
</author>
<id>https://hdl.handle.net/1721.1/162061</id>
<updated>2025-07-30T03:06:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">From Parking to Parcels: The Potential for Microhubs in New York City’s Parking Garages
Fabris-Green, Sarafina
This thesis employs a site planning and policy perspective to explore how parking garages can serve as last-mile microhubs for e-commerce package deliveries in New York City. During the COVID-19 pandemic, deliveries accelerated, prompting a proliferation of “last-mile facilities,” the destinations where parcels go just prior to final delivery. This surge of activity has led residents to raise complaints about trucks and vans driving through their neighborhoods and blocking streets or sidewalks when unloading their goods. In response, New York City government has been forced to think more proactively about the freight supply chain and its impact on the urban environment. New York and other cities have begun experimenting with the use of microhubs. Microhubs are small spaces in which packages are unloaded from vans and trucks onto smaller, more sustainable modes such as cargo bikes and handcarts. A commonly identified but understudied location for microhubs is the parking garage. London stands out as a city with this form of hub. This thesis employs three primary research methods—site observations, interviews, and case studies—to argue that parking garages could provide a solution to better utilize space in dense cities and improve quality of life for residents by reducing the negative impacts of existing last-mile warehouses and delivery vehicles, all while requiring minimal funding. This is shown through an analysis of existing microhub sites in London and how they relate to their urban surroundings. These findings are then applied to two distinct contexts and garage designs in New York City. Finally, the thesis offers site planning criteria that connect land use policy to the design of the facilities and the surrounding public realm through the concept of “planning at the interface.”
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Financing Inclusive Resilience: Beyond the Economics of Infrastructure in Accra, Ghana</title>
<link href="https://hdl.handle.net/1721.1/162060" rel="alternate"/>
<author>
<name>Goyal, Shubhi</name>
</author>
<id>https://hdl.handle.net/1721.1/162060</id>
<updated>2025-07-30T03:07:16Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Financing Inclusive Resilience: Beyond the Economics of Infrastructure in Accra, Ghana
Goyal, Shubhi
Global infrastructure losses from disasters now exceed an estimated US$700–845 billion annually, disproportionately affecting cities in the Global South (CDRI, 2023). Accra, as a rapidly urbanizing coastal city, faces recurring floods, coastal erosion, and rising vulnerabilities that erode development gains and entrench existing socio-economic inequalities. Climate-related disasters alone cost the city US$118 million in annual losses (CDRI, 2023), disproportionately affecting informal settlements. Infrastructure financing remains underfunded: the city needs US$37.9 billion annually to meet infrastructure needs by 2047 (GNIP, 2018), while a US$900 million gap undermines its Climate Action Plan (AMA, 2025). &#13;
&#13;
Despite increased national investment and growing global climate finance mechanisms, Accra struggles to attract and equitably deploy resources for inclusive resilience (CPI, 2023). Projects like the Greater Accra Resilient and Integrated Development (GARID) project expose systemic issues – prioritizing asset protection over community-centered design, with inadequate participation and social co-ownership (GARID PAD, 2019).&#13;
&#13;
This thesis critically examines how infrastructure financing mechanisms in Accra shape the potential to build inclusive resilience. Mapping the city’s financing landscape, it analyzes how institutional, financial, and governance arrangements influence the selection, distribution, and implementation of investments. Using GARID as a case study, the thesis applies a critical justice framework – drawing on distributive justice (who benefits and who bears the costs), procedural justice (who has voice and decision-making power), and epistemic justice (whose knowledge systems are valued in infrastructure planning) (Carolini, 2022) – to evaluate current infrastructure financing practices and explore opportunities to embed these justices in efforts to build resilience. Findings reveal that infrastructure financing decisions are dominated by centralized donor-driven and ministerial priorities, constrained by fiscal austerity, and evaluated through technocratic frameworks that marginalize community participation and local knowledge. &#13;
&#13;
Ultimately, the thesis argues that building inclusive resilience in climate-vulnerable cities like Accra requires transforming infrastructure financing systems to prioritize social inclusion, participatory governance, and knowledge pluralism – alongside, not subordinate to, economic efficiency.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Breaking the Loop: Climate-Driven Urbanism for America's Climate Migration Hubs</title>
<link href="https://hdl.handle.net/1721.1/162059" rel="alternate"/>
<author>
<name>Wagner, Cale</name>
</author>
<id>https://hdl.handle.net/1721.1/162059</id>
<updated>2025-07-30T03:07:03Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Breaking the Loop: Climate-Driven Urbanism for America's Climate Migration Hubs
Wagner, Cale
As sea level rise and other climate impacts increasingly force millions across the U.S. to relocate in the coming decades, how receiving cities accommodate this growth will significantly impact future emissions trajectories. This thesis examines the climate migration feedback loop, where climate migrants relocate to urban areas with carbon-intensive development patterns, inadvertently accelerating the climate change driving their displacement.&#13;
&#13;
Through analysis of three contrasting metropolitan areas—Atlanta, Portland, and Buffalo—this research demonstrates how different development approaches could either perpetuate or disrupt this feedback loop. Using a spatial methodology based on the urban transect model, the study compares Business-as-Usual scenarios that follow current development trends with Climate-Driven Reform scenarios that redirect growth toward transit-accessible, walkable locations.&#13;
&#13;
The research reveals that Climate-Driven Urbanism can meaningfully reduce both land consumption and emissions compared to conventional development patterns. These reductions stem not from technological advancement or behavioral change, but from strategic spatial reorganization of the same migrating population, with each metropolitan area demonstrating unique implementation pathways. By connecting regional migration flows to metropolitan development scenarios and neighborhood design interventions, this thesis offers planners, designers, and communities a framework for evaluating alternative futures that transform population growth from a spatial challenge and emissions liability into a catalyst for sustainable urbanism.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reimagining the Role of City Owned Assets as Multifunctional Infrastructure: Serving Community Needs Through Collaboration</title>
<link href="https://hdl.handle.net/1721.1/162058" rel="alternate"/>
<author>
<name>Smith, Alessandra</name>
</author>
<id>https://hdl.handle.net/1721.1/162058</id>
<updated>2025-07-30T03:06:45Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Reimagining the Role of City Owned Assets as Multifunctional Infrastructure: Serving Community Needs Through Collaboration
Smith, Alessandra
This thesis investigates how city governments can reconceptualize infrastructure to reshape value creation for communities, using the City of Atlanta as a case study. By examining various departments and executive offices within Atlanta’s municipal structure, the research highlights the complexities of urban governance, where value is not uniformly defined or understood even within a single city. The central question guiding this work is: How can Atlanta’s city agencies collaborate across departments to identify opportunities to create more value through city-owned assets?&#13;
&#13;
Through stakeholder interviews and a mapping of publicly owned assets, this thesis explores an alternative, strategic approach to infrastructure: one that supports not only urban planners but also city practitioners seeking to enhance residents’ quality of life through a value-based lens. The study also acknowledges the often overlooked, expanded value of built assets, which remains difficult to capture through conventional metrics. In doing so, it argues for a broader, more inclusive understanding of infrastructure’s role in urban life.&#13;
&#13;
This research offers a framework to view and explore infrastructure and values in a more comprehensive and holistic way compared to traditional methods. The framework centers strategy around prioritizing infrastructure planning, its relative outcomes, the spatial relationships and function of infrastructure, and the relationships that influence how people interact with infrastructure from a value-based lens.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>“Whose Bronx?” Regime Politics and the Evolution of Community Power at the Kingsbridge Armory</title>
<link href="https://hdl.handle.net/1721.1/162057" rel="alternate"/>
<author>
<name>Phillips, Natalie</name>
</author>
<id>https://hdl.handle.net/1721.1/162057</id>
<updated>2025-07-30T03:07:22Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">“Whose Bronx?” Regime Politics and the Evolution of Community Power at the Kingsbridge Armory
Phillips, Natalie
This thesis traces the 30-year history of redevelopment activities at the Kingsbridge Armory in the Northwest Bronx, as community groups have mounted an expanding challenge to development-as-usual in New York City. Using urban regime theory as a lens, I deploy archival research and interviews to assess the tensions that emerge when regime politics collide with a building movement of community power at the Kingsbridge Armory over time. I argue that New York City’s predominant urban economic development regime is not structured to accommodate an organization that is both a grassroots leader and a developer, and that as community power continues to evolve, the regime’s traditional arrangements become increasingly untenable. I ultimately assert that the increasingly structural movement of community power at the Kingsbridge Armory requires a reimagining of the informal processes, logics, and roles that have defined New York economic development.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>China Dispossession Watch: Making Visible the Human Costs of Forced Land Expropriation in Urbanizing China</title>
<link href="https://hdl.handle.net/1721.1/162056" rel="alternate"/>
<author>
<name>Wu, Franny Xi</name>
</author>
<id>https://hdl.handle.net/1721.1/162056</id>
<updated>2025-07-30T03:07:07Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">China Dispossession Watch: Making Visible the Human Costs of Forced Land Expropriation in Urbanizing China
Wu, Franny Xi
This thesis critically examines China's land expropriation regime through a mixed-methods approach that integrates ethnographic investigation, quantitative economic analysis, and practical interventions developed in collaboration with affected communities. Drawing on extensive fieldwork in the Yangtze Delta Region, including 50 in-depth interviews with dispossessed residents, the research documents how China's urbanization strategy systematically captures land value through a dispossession machinery operating at the intersection of state power, market mechanisms, and contested citizenship. The ethnographies reveal a sophisticated system of dispossession enabled by a network of actors whose complementary roles maintain procedural appearances while facilitating extralegal tactics. Quantitative analysis demonstrates systemic under-compensation and value capture that leaves dispossessed households with livelihood disruption and housing insecurity. The research examines how affected communities navigate severe constraints through adaptive resistance strategies to overcome power asymmetries and institutional manipulation, and documents their economic, social, and health outcomes. Moving beyond analysis to practice, the thesis introduces two pragmatic interventions developed through collaborative design with affected communities: a digital humanities platform hosting multimedia ethnographic archives and a quantitative data dashboard; and an anti-displacement handbook which operationalizes research findings into actionable guidance calibrated to the specific challenges identified by community partners. These practical outputs, established as the China Dispossession Watch social venture, reflect a theory of change focused on addressing information asymmetries while building horizontal knowledge networks and long-term movement capacity.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An evaluation of the main harbors of Puerto Rico as to their potential for the location of port industries, with special reference to Jobos Harbor</title>
<link href="https://hdl.handle.net/1721.1/161774" rel="alternate"/>
<author>
<name>Martinez-Sandin, Owen.</name>
</author>
<id>https://hdl.handle.net/1721.1/161774</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1960-01-01T00:00:00Z</published>
<summary type="text">An evaluation of the main harbors of Puerto Rico as to their potential for the location of port industries, with special reference to Jobos Harbor
Martinez-Sandin, Owen.
Thesis: M.C.P., Massachusetts Institute of Technology, Department of City and Regional Planning, 1960; Includes bibliographical references.
</summary>
<dc:date>1960-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Theoretical electron work functions of film coated metals</title>
<link href="https://hdl.handle.net/1721.1/161773" rel="alternate"/>
<author>
<name>Levine, Jules David.</name>
</author>
<id>https://hdl.handle.net/1721.1/161773</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1961-01-01T00:00:00Z</published>
<summary type="text">Theoretical electron work functions of film coated metals
Levine, Jules David.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1961; Includes bibliographical references (leaves 47-48).
</summary>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heat transfer from immersion heaters to boiling liquids</title>
<link href="https://hdl.handle.net/1721.1/161772" rel="alternate"/>
<author>
<name>Simpson, H. C.</name>
</author>
<id>https://hdl.handle.net/1721.1/161772</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1951-01-01T00:00:00Z</published>
<summary type="text">Heat transfer from immersion heaters to boiling liquids
Simpson, H. C.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1951; Includes bibliographical references (leaves 161-163).
</summary>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of degradation rate and crosslink density of artificial skin on wound contraction</title>
<link href="https://hdl.handle.net/1721.1/161761" rel="alternate"/>
<author>
<name>Lee, Elaine.</name>
</author>
<id>https://hdl.handle.net/1721.1/161761</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Effects of degradation rate and crosslink density of artificial skin on wound contraction
Lee, Elaine.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1986; Bibliography: leaves 93-94.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Continual Learning for Engineering: Benchmarking and Exploring Strategies for 3D Engineering Problems</title>
<link href="https://hdl.handle.net/1721.1/159944" rel="alternate"/>
<author>
<name>Samuel, Kaira M.</name>
</author>
<id>https://hdl.handle.net/1721.1/159944</id>
<updated>2025-07-08T03:06:49Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Continual Learning for Engineering: Benchmarking and Exploring Strategies for 3D Engineering Problems
Samuel, Kaira M.
Engineering applications of machine learning often involve high-dimensional, computationally intensive simulations paired with limited and evolving datasets. As new designs and constraints emerge, models must adapt to incoming data without frequent retraining, which is often infeasible due to the cost of generating engineering data. Continual learning (CL) offers a promising alternative by enabling models to incrementally learn from sequential data while mitigating catastrophic forgetting, in which there is a loss of performance on previously seen examples. This thesis investigates the application of continual learning to regression-based engineering tasks, with an emphasis on surrogate modeling. We begin by benchmarking several foundational CL strategies, including regularization-based and rehearsal-based methods, across five diverse engineering datasets. To support this analysis, we construct nine new regression-focused continual learning benchmarks designed to reflect practical engineering scenarios. Results show that Experience Replay, a simple rehearsal method, consistently achieves strong performance, approaching the "joint training" baseline of retraining from scratch, while substantially reducing computational cost. To further explore how rehearsal strategies can be made more efficient and effective, we propose two adaptive replay methods that prioritize memory samples based on forgetting dynamics. These methods extend previous adaptive replay strategies by using input clustering and representations from TabPFN, a foundation model for tabular data, to guide more informed sample selection without knowledge of experience boundaries. We evaluate their performance on both complex engineering datasets and controlled synthetic tasks. In scenarios where forgetting is unevenly distributed, the adaptive methods offer clear advantages, highlighting the potential for more intelligent replay under constrained resources. This work positions continual learning as a practical and effective strategy for handling dynamic engineering datasets, and offers new insights into how adaptive replay can enhance efficiency in data-limited, high-cost learning environments.
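A minimal sketch of the Experience Replay idea for streaming regression follows; the reservoir buffer, the linear surrogate, and all hyperparameters are illustrative stand-ins, not the thesis implementation or its benchmarks.

# Minimal Experience Replay sketch for streaming regression: keep a
# reservoir-sampled buffer of past examples and mix replayed samples
# into every gradient step on new data, to mitigate forgetting.
import random
import numpy as np

rng = np.random.default_rng(0)

class ReservoirBuffer:
    def __init__(self, capacity=256):
        self.capacity, self.seen, self.data = capacity, 0, []

    def add(self, x, y):
        self.seen += 1
        if len(self.data) != self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)   # reservoir sampling
            if self.capacity > j:
                self.data[j] = (x, y)

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

w = np.zeros(3)                               # linear surrogate weights
buffer, lr = ReservoirBuffer(), 1e-2

def sgd_step(batch):
    global w
    for x, y in batch:
        grad = 2 * (w @ x - y) * x            # squared-error gradient
        w -= lr * grad

# Two sequential "tasks" (shifting data regimes) arriving as a stream.
for task in range(2):
    for _ in range(500):
        x = rng.normal(size=3) + 3 * task     # distribution shift per task
        y = x.sum() * (1 + task)              # target also shifts
        sgd_step([(x, y)] + buffer.sample(4)) # new sample + replayed ones
        buffer.add(x, y)
print("final weights:", w)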
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-Order and Wavelet-Adaptive Immersed Methods for&#13;
PDEs on Complex Domain Geometries</title>
<link href="https://hdl.handle.net/1721.1/159943" rel="alternate"/>
<author>
<name>Shen, Changxiao Nigel</name>
</author>
<id>https://hdl.handle.net/1721.1/159943</id>
<updated>2025-07-08T03:06:47Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">High-Order and Wavelet-Adaptive Immersed Methods for&#13;
PDEs on Complex Domain Geometries
Shen, Changxiao Nigel
The development of immersed methods brings a promising solution to the numerical simulation of interface-coupled multi-physics problems, such as multi-phase flows and fluid-structure interactions. This necessitates the design of novel high-order and efficient solvers based on immersed methods. This thesis examines two pivotal aspects of these methods: firstly, the acceleration of computational processes via adaptive resolution strategies; and secondly, the enhancement of accuracy order while sustaining numerical stability. To achieve the former, we develop a novel wavelet transform algorithm applicable to computational domains with arbitrary geometries. This wavelet transform maintains the order of the wavelet and serves as an indicator for local truncation error (LTE), resulting in an adaptive resolution strategy with explicit error control. To address the latter, we introduce a fifth-order upwind finite difference (FD) scheme that sustains numerical stability across any immersed interface discretization.
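The role of wavelet details as an LTE indicator can be sketched generically in 1D; this is the classic interpolating-wavelet construction, not the thesis algorithm for arbitrary geometries. The detail at a fine-grid midpoint is the error of polynomial interpolation from the coarse grid, so thresholding details adapts the grid with explicit error control.

# Generic interpolating-wavelet detail coefficients on a 1D dyadic grid
# (a sketch of the adaptivity idea, not the thesis algorithm): the detail
# at each fine-grid midpoint is the error of coarse-grid interpolation,
# so thresholding details adapts the grid with explicit error control.
import numpy as np

def details(f_coarse, f_mid):
    """4th-order centered interpolation errors at interior midpoints.

    f_coarse: samples at x_0..x_n (even indices); f_mid: midpoint samples.
    Interior midpoints use the 4-point stencil (-1/16, 9/16, 9/16, -1/16).
    """
    pred = (-f_coarse[:-3] + 9 * f_coarse[1:-2]
            + 9 * f_coarse[2:-1] - f_coarse[3:]) / 16.0
    return f_mid[1:-1] - pred          # detail ~ local truncation error

x = np.linspace(0.0, 1.0, 33)          # fine grid
f = np.tanh(40 * (x - 0.5))            # sharp internal layer
d = details(f[::2], f[1::2])
keep = np.greater(np.abs(d), 1e-4)     # refine only where detail is large
print(f"midpoints kept: {keep.sum()} of {keep.size} (clustered at the layer)")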
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Constrained and High-dimensional Bayesian Optimization with Transformers</title>
<link href="https://hdl.handle.net/1721.1/159942" rel="alternate"/>
<author>
<name>Yu, Rosen Ting-Ying</name>
</author>
<id>https://hdl.handle.net/1721.1/159942</id>
<updated>2025-07-08T03:06:46Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Constrained and High-dimensional Bayesian Optimization with Transformers
Yu, Rosen Ting-Ying
This thesis advances Bayesian Optimization (BO) methodology through two novel algorithms that address critical limitations in handling constraints and high-dimensional spaces. First, we introduce a constraint-handling framework leveraging Prior-data Fitted Networks (PFNs), a foundation transformer model that evaluates objectives and constraints simultaneously in a single forward pass through in-context learning. This approach demonstrates an order-of-magnitude speedup while maintaining or improving solution quality across 15 test problems spanning synthetic, structural, and engineering design challenges. Second, we propose Gradient-Informed Bayesian Optimization using Tabular Foundation Models (GIT-BO), which utilizes pre-trained tabular foundation models as surrogates for high-dimensional optimization (exceeding 100 dimensions). By exploiting internal gradient computations to identify sensitive optimization directions, GIT-BO creates continuously re-estimated active subspaces without model retraining. Empirical evaluation across 23 benchmarks demonstrates GIT-BO’s superior performance compared to state-of-the-art Gaussian Process-based methods, particularly as dimensionality increases to 500 dimensions. Together, these approaches establish foundation models as powerful alternatives to Gaussian Process methods for constrained and high-dimensional Bayesian optimization challenges.
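The active-subspace construction that gradient-informed methods of this kind rest on can be sketched in a few lines; this is a generic recipe in the spirit of Constantine-style active subspaces, not the GIT-BO implementation, and the quadratic test function is invented for the demo.

# Generic active-subspace identification from gradients (a sketch of the
# idea behind gradient-informed high-dimensional BO, not GIT-BO itself):
# eigenvectors of C = E[grad f grad f^T] with large eigenvalues span the
# directions along which f varies most, giving a low-dim search subspace.
import numpy as np

rng = np.random.default_rng(0)
D, k, n = 100, 2, 400                      # ambient dim, subspace dim, samples

A = rng.normal(size=(D, 2))                # hidden 2D structure for the demo
f_grad = lambda x: A @ (A.T @ x)           # gradient of f(x) = 0.5*|A^T x|^2

X = rng.normal(size=(n, D))
G = np.stack([f_grad(x) for x in X])       # sampled gradients, shape (n, D)
C = G.T @ G / n                            # empirical gradient covariance
eigvals, eigvecs = np.linalg.eigh(C)       # ascending eigenvalues
W = eigvecs[:, -k:]                        # top-k eigenvectors: active subspace

# Optimization then proceeds over z with x = W @ z (plus a surrogate model
# in the BO loop, omitted here).
print("top eigenvalues:", np.round(eigvals[-k:], 2))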
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Starting Material-Oriented Strategies in Computer-Aided Synthesis Planning With a Bidirectional Search Algorithm</title>
<link href="https://hdl.handle.net/1721.1/159923" rel="alternate"/>
<author>
<name>Yu, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/159923</id>
<updated>2025-07-08T03:06:56Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Enabling Starting Material-Oriented Strategies in Computer-Aided Synthesis Planning With a Bidirectional Search Algorithm
Yu, Kevin
Retrosynthesis, in which one proposes a reaction pathway towards a target molecule from simpler starting materials, is a fundamental task in synthetic chemistry. Current computational search methods assume the sufficiency of reaching arbitrary building blocks but fail to address the common real-world constraint where the use of specific starting materials is desirable. To this end, this thesis reformulates computer-aided retrosynthesis as a starting material-constrained problem, in which one or more starting materials are given as input in addition to the target structure. Under this formulation, we are able to apply novel strategies to more efficiently navigate the combinatorial explosion of reactions to consider during synthesis planning. First, we demonstrate how training on multi-step synthesis routes inferred from a reaction database allows a neural network to predict the number of steps needed to synthesize targets from other specified building blocks. Using this learned value function in combination with recent advances in bottom-up synthesis planning, this thesis proposes a novel bidirectional CASP algorithm, DESP (Double-Ended Synthesis Planning). We demonstrate the utility of DESP through a number of empirical benchmarks and case studies.
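The double-ended idea can be illustrated with a toy meet-in-the-middle search over a hypothetical reaction graph; this generic sketch omits the learned one-step models and value function that guide DESP's expansions.

# Toy meet-in-the-middle search on a synthesis graph (illustrative only;
# DESP couples retro (top-down) and forward (bottom-up) expansions with
# learned policies and a synthetic-distance value function).
from collections import deque

# Hypothetical one-step maps: retro_edges[mol] = precursors reachable by
# one retrosynthetic step; forward_edges[mol] = products of one reaction.
retro_edges = {"target": ["C", "D"], "C": ["A"], "D": ["B"]}
forward_edges = {"A": ["C"], "B": ["D"], "C": ["target"], "D": ["target"]}

def bidirectional_route(target, start):
    back, fwd = {target: [target]}, {start: [start]}
    q_back, q_fwd = deque([target]), deque([start])
    while q_back and q_fwd:
        # Expand one retrosynthetic frontier node.
        m = q_back.popleft()
        for p in retro_edges.get(m, []):
            if p not in back:
                back[p] = back[m] + [p]
                if p in fwd:                      # frontiers met
                    return fwd[p][:-1] + back[p][::-1]
                q_back.append(p)
        # Expand one forward frontier node.
        m = q_fwd.popleft()
        for p in forward_edges.get(m, []):
            if p not in fwd:
                fwd[p] = fwd[m] + [p]
                if p in back:
                    return fwd[p][:-1] + back[p][::-1]
                q_fwd.append(p)
    return None

print(bidirectional_route("target", "A"))   # e.g. ['A', 'C', 'target']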
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Putting Lipstick on a PIG: Modeling Pine Island Glacier (PIG) Shear Margin Collapse with Compressive Arch Failure and Observations</title>
<link href="https://hdl.handle.net/1721.1/159919" rel="alternate"/>
<author>
<name>Wells-Moran, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/159919</id>
<updated>2025-07-08T03:06:55Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Putting Lipstick on a PIG: Modeling Pine Island Glacier (PIG) Shear Margin Collapse with Compressive Arch Failure and Observations
Wells-Moran, Sarah
Pine Island Glacier (PIG) drains 10% of the West Antarctic Ice Sheet and has undergone rapid change in the observational record, contributing to uncertainty in sea level rise projections. The Pine Island Ice Shelf (PIIS), which provides a key buttressing force that slows the flux of ice across the grounding line, has accelerated by 800 m/yr (an approximate 20% increase in speed) between 2015 and 2024, accompanied by a visible increase in damage in the southern shear margin, indicating a partial loss of buttressing. We examine this loss of buttressing to determine the mechanisms through which ice shelves collapse. Buttressing allows an ice shelf to increase in thickness to a point at which the stresses within the ice would exceed the tensile yield strength without the compression provided by buttressing. Following the Compressive Arch Theory proposed by Doake et al. (1998), we hypothesize that when a calving event decouples the ice shelf from a buttressing region, the thicker ice shelf is thrown into tension and rapidly collapses, as happened with the Larsen B Ice Shelf in 2002. We use the Ice-sheet and Sea-level System Model to investigate the instantaneous stress response to loss in buttressing on an idealized glacier, with the goal of finding the changes in shear margin buttressing that most accurately recreate observed changes. In our model, we are only able to replicate observed changes in stress regime by decoupling both shear margins, suggesting the PIIS is currently providing negligible buttressing, allowing PIG to accelerate, thin, and retreat. We construct a timeline of shear margin evolution and collapse over the PIIS from 2015 to 2024 using model outputs of stress field response to changes in buttressing, coupled with observed changes in velocity, effective and principal strain rates, and calving events. Despite losing buttressing from both shear margins, the PIIS is still intact, contrary to our initial hypothesis on compressive arch failure. We re-frame Compressive Arch Theory to better capture the timescales involved in loss of buttressing. We posit that compressive arch failure from loss of buttressing on short time scales leads to rapid ice shelf disintegration, whereas compressive arch failure occurring on longer time scales allows the ice to viscously relax, leading to ice shelf thinning instead of collapse. This new framework for investigating loss of buttressing allows us to better assess the stability of ice shelves and more accurately model future Antarctic contributions to sea level rise.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integration of Zip-formwork and conventional formwork systems for shape-optimized concrete in large scale construction</title>
<link href="https://hdl.handle.net/1721.1/159918" rel="alternate"/>
<author>
<name>Zhuang, Yingjia</name>
</author>
<id>https://hdl.handle.net/1721.1/159918</id>
<updated>2025-07-08T03:06:53Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Integration of Zip-formwork and conventional formwork systems for shape-optimized concrete in large scale construction
Zhuang, Yingjia
Cast-in-place concrete production plays a dominant role in the architecture, engineering and construction (AEC) industry, particularly in large-scale projects, contributing significantly to global material consumption, construction costs, and embodied carbon emissions. Shape-optimized concrete has been developed as a solution for more affordable and sustainable construction, using less material to create efficient structures that meet structural demands. Although extensive research and development has focused on applying shape optimization to prismatic concrete beams, these beams are often limited by the constraints of available formwork and are primarily designed as pre-cast components. This thesis presents the results of optimizing the Zip-Form, a digitally fabricated formwork system made from mild steel, designed for forming shape-optimized concrete beams, and its integration with conventional formwork equipment. The study evaluates the structural performance, embodied carbon, and cost of the Zip-Form integrated system in comparison to a traditional formwork platform used for prismatic beams. The findings highlight the Zip-Form’s potential for forming shape-optimized concrete beams using cast-in-place methods, making it a viable solution for sustainable large-scale construction projects in the current industry. The methodology outlined in this thesis provides a comprehensive design process, beginning with the structural design of the shape-optimized&#13;
concrete beams, followed by the design of the Zip-Form integrated formwork system to cast the beams, and concluding with an embodied carbon and cost analysis to evaluate the environmental and financial benefits. This thesis aims to bridge academic research and innovation with practical, real-world applications.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>First Visible Wavelength Lightcurves for the Northern Hemispheres of Titania and Oberon</title>
<link href="https://hdl.handle.net/1721.1/159899" rel="alternate"/>
<author>
<name>Colclasure, Abigail M.</name>
</author>
<id>https://hdl.handle.net/1721.1/159899</id>
<updated>2025-07-08T03:06:52Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">First Visible Wavelength Lightcurves for the Northern Hemispheres of Titania and Oberon
Colclasure, Abigail M.
The most recent lightcurves of the large Uranian satellites were published in 1989, and there have been no published lightcurves of the satellites’ northern hemispheres. In this work, I present the first visible-wavelength lightcurves of the northern hemispheres of Titania and Oberon. Observations of the Uranian satellites are inherently difficult given their proximity to Uranus. Contamination from stray Uranian light is a major challenge, and the background near the satellites must be well characterized. I mitigated the effects of stray Uranian light using point spread function photometry. I modeled Uranus with a Lorentzian with the same full width at half maximum as the stellar point spread function. I also determined that Uranus’s profile is poorly modeled with a Gaussian or with the stellar empirical point spread function. After accounting for Uranian light in this way, there remains significant correlation between the photometric measurements of Titania and Oberon. I considered what may be causing this correlation and suggest several paths forward.
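To illustrate the approach generically (not the thesis pipeline), the sketch below fits a circular Lorentzian whose full width at half maximum is pinned to an assumed stellar-PSF value and subtracts it before photometry; the image, noise level, and coordinates are synthetic stand-ins.

# Generic sketch of the stray-light subtraction idea: model the planet's
# halo as a circular Lorentzian with FWHM fixed to the stellar PSF value,
# fit amplitude and center, and photometer the satellite on the residual.
import numpy as np
from scipy.optimize import curve_fit

FWHM = 3.0  # assumed stellar PSF FWHM in pixels (held fixed in the fit)

def lorentzian(xy, amp, x0, y0):
    x, y = xy
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return amp / (1.0 + r2 / (FWHM / 2.0) ** 2)

yy, xx = np.mgrid[0:64, 0:64].astype(float)
truth = lorentzian((xx, yy), 5000.0, 20.0, 32.0)          # synthetic halo
image = truth + np.random.default_rng(1).normal(0, 1, truth.shape)

popt, _ = curve_fit(lambda xy, a, x0, y0: lorentzian(xy, a, x0, y0).ravel(),
                    (xx, yy), image.ravel(), p0=(1000.0, 18.0, 30.0))
residual = image - lorentzian((xx, yy), *popt)
print("fit (amp, x0, y0):", np.round(popt, 2))
# Satellite photometry would now be performed on the residual image.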
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Traversing Rugged Domains: Explorations in Non-convex&#13;
Optimization Theory and Software</title>
<link href="https://hdl.handle.net/1721.1/159895" rel="alternate"/>
<author>
<name>Dixit, Vaibhav Kumar</name>
</author>
<id>https://hdl.handle.net/1721.1/159895</id>
<updated>2025-07-08T03:06:50Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Traversing Rugged Domains: Explorations in Non-convex&#13;
Optimization Theory and Software
Dixit, Vaibhav Kumar
This thesis introduces theoretical and computational frameworks for nonlinear, nonconvex optimization problems in statistics, machine learning, and optimal control. Disciplined Geodesically Convex Programming (DGCP) extends convexity verification to Riemannian manifolds, enabling optimization on curved spaces with global optimality guarantees. We develop rules and atoms for Cartan-Hadamard manifolds, particularly symmetric positive definite matrices, transforming non-convex problems into tractable ones through Riemannian geometry. We also present Optimization.jl, a unified interface for diverse optimization methods that supports specialized implementations for specific problem classes. Its modular architecture integrates automatic differentiation with an extensible plugin system. The framework’s capabilities are demonstrated through a GPU-accelerated hybrid method combining Particle Swarm Optimization with L-BFGS, and an augmented Lagrangian approach with stochastic inner optimizers that connects constrained optimization with machine learning techniques. Our work combines theoretical foundations with practical implementation, providing researchers tools to use advanced optimization methods without specialized mathematical knowledge.
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accurate Protein Function Prediction with Graph Transformer-Based Function Localization</title>
<link href="https://hdl.handle.net/1721.1/159880" rel="alternate"/>
<author>
<name>Mitra, Shania</name>
</author>
<id>https://hdl.handle.net/1721.1/159880</id>
<updated>2025-07-08T03:06:48Z</updated>
<published>2025-05-01T00:00:00Z</published>
<summary type="text">Accurate Protein Function Prediction with Graph Transformer-Based Function Localization
Mitra, Shania
Protein function prediction is a fundamental challenge in biology, crucial for understanding biological processes, disease mechanisms, and accelerating drug discovery. While computational methods leveraging sequence or structural information have advanced, accurately translating protein structure to function and pinpointing the specific residues responsible remain significant hurdles. Many existing deep learning approaches fall short, often relying on post-hoc analyses that lack specificity or fail to directly integrate functional site identification into the prediction process. In this study, we introduce the Protein Region Proposal Network (ProteinRPN), a novel graph-based deep learning framework designed to address these limitations. ProteinRPN is the first model to integrate the proactive identification of functional regions within the Gene Ontology term prediction pipeline. The core of the model is a Region Proposal Network module that processes protein structure graphs (residues as nodes, contacts as edges) to identify potential functional regions, termed anchors. These anchors are subsequently refined using a multi-stage process involving a novel differentiable node drop pooling layer that incorporates domain knowledge. A functional attention layer further enhances the representations of predicted functional nodes, and a Graph Multiset Transformer aggregates this localized information into a comprehensive graph-level embedding for final prediction. The model is optimized using a combination of cross-entropy classification loss, supervised and self-supervised contrastive learning losses (SupCon and InfoNCE) for robust representation learning. Evaluated on standard benchmarks derived from the DeepFRI/HEAL datasets, ProteinRPN demonstrates state-of-the-art performance, consistently outperforming existing sequence-based and structure-based methods across all three Gene Ontology domains (Molecular Function, Biological Process, Cellular Component) based on standard CAFA metrics (Fmax, AUPR, Smin). Notably, ProteinRPN achieves significant improvements over strong baselines like HEAL, with AUPR (Area under Precision Recall curve) gains of approximately 15.4% (BP), 8.5% (CC), and 1.3% (MF). Furthermore, ablation studies validate the contribution of each key component, particularly the region proposal mechanism. Qualitative analysis confirms the model’s ability to accurately localize known functional residues within protein structures, offering enhanced interpretability. By directly modeling and identifying functionally relevant structural regions, ProteinRPN presents a robust, interpretable, and high-performing approach to structure-based protein function prediction. This work contributes a novel framework that bridges the gap between structural information and functional annotation, offering potential for deeper biological insights and advancing computational tools for understanding the proteome.
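The residue-contact graphs such models consume are simple to construct. The sketch below is a generic version with random coordinates standing in for parsed Cα positions; the 8 Å cutoff is a common convention assumed here, not a detail taken from the thesis.

# Generic residue-contact graph construction for a protein structure
# (sketch only): nodes are residues, edges connect residue pairs whose
# C-alpha atoms lie within a distance cutoff. Random coordinates stand
# in for a real PDB parse.
import numpy as np

def contact_graph(ca_coords, cutoff=8.0):
    """Return a boolean adjacency matrix over residues."""
    diff = ca_coords[:, None, :] - ca_coords[None, :, :]
    d2 = np.einsum("ijk,ijk->ij", diff, diff)       # squared pairwise dists
    adj = np.less_equal(d2, cutoff ** 2)
    np.fill_diagonal(adj, False)                    # no self-loops
    return adj

coords = np.random.default_rng(0).normal(scale=10.0, size=(120, 3))
adj = contact_graph(coords)
edges = np.argwhere(adj)                            # (i, j) contact pairs
print(f"residues: {adj.shape[0]}, contacts: {len(edges) // 2}")
# adj (or edges) would feed a GNN/graph-transformer over residue nodes.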
</summary>
<dc:date>2025-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Axial vibration of steam turbine buckets</title>
<link href="https://hdl.handle.net/1721.1/159852" rel="alternate"/>
<author>
<name>Ewert, Richard H.</name>
</author>
<id>https://hdl.handle.net/1721.1/159852</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1938-01-01T00:00:00Z</published>
<summary type="text">Axial vibration of steam turbine buckets
Ewert, Richard H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1938; Includes bibliographical references (leaf 56).
</summary>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The deflection of steam turbine diaphragms</title>
<link href="https://hdl.handle.net/1721.1/159851" rel="alternate"/>
<author>
<name>Prohl, Melvin Albert.</name>
</author>
<id>https://hdl.handle.net/1721.1/159851</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1938-01-01T00:00:00Z</published>
<summary type="text">The deflection of steam turbine diaphragms
Prohl, Melvin Albert.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1938
</summary>
<dc:date>1938-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A new railway labor plan</title>
<link href="https://hdl.handle.net/1721.1/159850" rel="alternate"/>
<author>
<name>Gilman, Jonathan C.</name>
</author>
<id>https://hdl.handle.net/1721.1/159850</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">A new railway labor plan
Gilman, Jonathan C.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1963; Includes bibliographical references (leaves 119-121).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transient analysis of marine steam turbine, propeller and ship dynamics.</title>
<link href="https://hdl.handle.net/1721.1/159848" rel="alternate"/>
<author>
<name>Stang Lund, Emil.</name>
</author>
<id>https://hdl.handle.net/1721.1/159848</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1965-01-01T00:00:00Z</published>
<summary type="text">Transient analysis of marine steam turbine, propeller and ship dynamics.
Stang Lund, Emil.
Thesis: M.S., Massachusetts Institute of Technology, Department of Naval Architecture and Marine Engineering, 1965; Bibliography: leaves 79-91.
</summary>
<dc:date>1965-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strengthening Value Chains for Developing and Deploying Batteries in the Global South</title>
<link href="https://hdl.handle.net/1721.1/159832" rel="alternate"/>
<author>
<name>Munjal, Mrigi</name>
</author>
<id>https://hdl.handle.net/1721.1/159832</id>
<updated>2025-07-01T03:05:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Strengthening Value Chains for Developing and Deploying Batteries in the Global South
Munjal, Mrigi
This thesis presents an integrated assessment of the elements required to strengthen the battery industry in emerging markets. It articulates a synergistic approach to fostering resilient battery value chains that are critical for the sustainable energy transition in the Global South. The first part argues that building a more diversified and secure raw material base is essential for robust battery value chains in developing economies. It establishes the groundwork by proposing a potential pathway to diversify the global lithium supply chain, examining the potential of lithium mining in Arkansas through stakeholder analysis and policy recommendations. The second part underscores the importance of technology adaptation and process innovation in developing cost-effective battery chemistries suitable for the distinct conditions of the Global South. This part of the thesis addresses the technological challenges in scaling up battery production, focusing on sodium-ion batteries (SIBs) as a promising alternative to lithium-ion systems. Through an innovative application of natural language processing, this analysis distills the vast landscape of SIB research to identify scalable solutions for electrode design and manufacturing. The final part of the thesis converges on the deployment aspect of batteries, scrutinizing the role of Battery Energy Storage Systems (BESS) in three distinct emerging markets: India, South Africa, and Malawi. It offers a granular perspective on the application of BESS within varied energy landscapes, advocating for the customization of storage solutions to local market realities. This illuminates the transformative potential of BESS for enhancing grid stability and enabling renewable energy integration, thereby empowering the Global South to leapfrog to a resilient and green energy paradigm. This thesis coalesces into a comprehensive framework that underscores the multifaceted aspects of value chain enhancement—from mineral sourcing and battery chemistry innovation to end-use applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A computer program for assessing the economics of hie[r]archical systems : an application in engineering project evaluation</title>
<link href="https://hdl.handle.net/1721.1/159437" rel="alternate"/>
<author>
<name>Hagen, Arnulf.</name>
</author>
<id>https://hdl.handle.net/1721.1/159437</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1990-01-01T00:00:00Z</published>
<summary type="text">A computer program for assessing the economics of hie[r]archical systems : an application in engineering project evaluation
Hagen, Arnulf.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1990; Title as it appears in the M.I.T. Graduate List, Feb. 1990: A computer model for assessing the economics of hierarchical systems.; Includes bibliographical references (leaves [73]-85).
</summary>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BIOSENTERO: Bioinspired Soft Enteroscopic Robot for Facilitating Locomotion, Steering, and Intervention in the Deep Small Intestine</title>
<link href="https://hdl.handle.net/1721.1/159373" rel="alternate"/>
<author>
<name>Jebran, Ahmad Mujtaba</name>
</author>
<id>https://hdl.handle.net/1721.1/159373</id>
<updated>2025-07-09T03:14:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">BIOSENTERO: Bioinspired Soft Enteroscopic Robot for Facilitating Locomotion, Steering, and Intervention in the Deep Small Intestine
Jebran, Ahmad Mujtaba
Diagnosing and treating small intestinal disorders such as bleeding, inflammatory bowel disease, and tumors pose significant challenges due to limitations in accessing this anatomical compartment. To address these challenges, we develop BIOSENTERO, a bioinspired soft enteroscopic robot, to facilitate deep small intestine procedures, which addresses challenges associated with locomotion, steering, and intervention faced by existing soft robotic systems. BIOSENTERO features a hollow-cylinder design consisting of a linearly deformable soft pneumatic actuator as the robotic body, two radially expandable soft pneumatic actuators wrapped with Kirigami sleeves as the robotic head and tail units, a central hollow channel for housing accessory endoscopic tools, and a control box and joystick for navigation. The robot's body is a fiber-reinforced actuator with four inflatable chambers, enabling versatile movements, including axial expansion and contraction and bending over 90 degrees for 360-degree planar access. The dynamic Kirigami sleeve design achieves clinically acceptable friction force on intestinal mucosa with radial expansion, while minimizing tissue distention. A reinforced central channel supports the passage of tools to facilitate diagnostic and therapeutic interventions. A control box supports efficient locomotion and steering, achieving autonomous speeds of ~100 mm/min in vitro and ~43 mm/min in ex vivo intestinal tissue, and an assisted speed of ~200 mm/min in pig studies, without overdistention. Through in vivo pig studies, we demonstrated BIOSENTERO's potential for tissue biopsies, localized drug delivery, and real-time visualization in the deep intestinal region, without causing tissue overdistention and damage.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural Wireless Delamination Sensor</title>
<link href="https://hdl.handle.net/1721.1/159372" rel="alternate"/>
<author>
<name>Ghosh, Aniruddha</name>
</author>
<id>https://hdl.handle.net/1721.1/159372</id>
<updated>2025-07-09T03:13:56Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Structural Wireless Delamination Sensor
Ghosh, Aniruddha
Composite materials, particularly laminated fibre-reinforced polymer composites, have gained widespread acceptance in various industries due to their superior strength-to-weight ratio and corrosion resistance. The phenomenon of cracking between plies/laminae of such a layered composite is commonly referred to as delamination and occurs for various reasons, such as corrosion and fatigue of the structure. Structural integrity can be enhanced by monitoring delamination, ideally with a sensor that can continuously monitor the delamination extent. The delamination sensor proposed in this thesis (termed the Wireless Interlaminar Nano Sensor, or WINS) is an LC resonant circuit (resonant frequency fₛ = 1 MHz) and, unlike prior sensors, is composed solely of structural materials: a structural epoxy and carbon nanotubes (CNTs). A delamination crack causes a change in the capacitance of the sensor, leading to a change in its resonant frequency. Wireless sensor operation was demonstrated using an LC resonant circuit implemented on a printed circuit board, termed the sensor emulator (SE). A wireless sensing circuit and reader provided by Analog Devices Inc. were used for the initial measurements with the SE and, later, the proof-of-concept (PoC WINS) devices. The PoC WINS device is a CNT-polymer nanocomposite-based parallel-plate capacitor, adhesively bonded between two composite laminates and connected in parallel to the capacitor of the SE. The PoC WINS device was subjected to loading in the Mode-I configuration to induce delamination crack growth. The quality factor Q of the SE was varied (Q = 18, 3.2, 1.6, 0.8) by adding different external resistors, and a signal was acquired wirelessly for each value of Q as the delamination crack propagated. The wirelessly acquired signal was also sampled (sampling frequency Fₛ = 100 MHz) and analyzed to estimate the resonant frequency of the sensor. The effect of low sampling frequency was studied by downsampling the acquired signal by a factor of 100. When Q was large (Q = 18), a change of ∼2 kHz in the resonant frequency could be detected, corresponding to a change in capacitance of ∼100 pF. At smaller values of Q (∼1), the challenges encountered in wireless signal acquisition were the too-rapid decay of the sensor signal and a low signal-to-noise ratio (SNR). A wireless sensing circuit was designed and developed to enable signal acquisition at Q ≤ 1. The SE was used in the feedback system of a modified Armstrong oscillator (MAO) to obtain a sinusoidal signal of constant amplitude (∼1 V, SNR ∼100 dB) even at Q = 0.8. The frequency (f_AO) of the signal wirelessly acquired from the MAO is a non-linear function of the capacitance and the quality factor Q of the sensor and was observed to be in the range of 2 MHz. The MAO was tested for its performance using PoC WINS devices. Capturing the output signal for a duration of ∼100 µs was found to be sufficient for accurate estimation of the frequency (standard deviation ∼3 Hz). At Q = 0.8 of the sensor, the MAO was able to detect a change in capacitance of 100 pF. To enable the use of a low sampling rate (Fₛ = 1 MHz) for wireless signal acquisition, enhance the sensitivity of detecting changes in capacitance, and provide a direct readout of the change in capacitance of the sensor, the MAO was made part of another circuit termed the MAO+. In the MAO+, mixer and filter circuits were used to modulate f_AO from ∼2 MHz to ∼180 kHz and then to ∼25 kHz, allowing the use of a sampling frequency as low as 50 kHz to estimate the frequency. A phase-locked loop was made part of the MAO+, enabling direct readout of the change in capacitance of the sensor on a 4½-digit digital display. The MAO+ was independently tested using PoC WINS devices and was able to detect a change in capacitance (at Q = 0.8 of the SE) of ∼10 pF, corresponding to ∼200 µm of crack advance. This thesis presents the design, implementation, and operation of a wireless sensing circuit that allows signal acquisition at a low quality factor (Q ≤ 1) without compromising the SNR, demonstrating the first practical (wireless, made of structural materials) delamination sensor for advanced composites.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-Efficiency Soft-Switched Pulsed Plasma Bias Supply System</title>
<link href="https://hdl.handle.net/1721.1/159371" rel="alternate"/>
<author>
<name>Estrin, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/159371</id>
<updated>2025-07-09T03:14:00Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">High-Efficiency Soft-Switched Pulsed Plasma Bias Supply System
Estrin, Julia
Radio Frequency (RF) generators play a crucial role as bias voltage sources in plasma-enhanced semiconductor manufacturing processes. Employing pulsed waveforms to generate plasma offers significant improvements in manufacturing precision. However, producing these waveforms is challenging due to the need for high voltages (kilovolt range), high frequencies (hundreds of kilohertz to low megahertz), precise timing, and broadband frequency content. Traditional methods to generate these waveforms are limited by semiconductor voltage ratings, leading to either low-voltage waveforms or complex circuits to achieve higher pulse voltages. This work presents a simple, compact, and efficient method for generating a pulsed bias voltage for plasma processing. The approach involves synthesizing the pulsed waveform at a low, convenient voltage and then using a transformer to step up the voltage to the desired level. A low-leakage inductance coaxial cable-based transformer is developed to provide scaling with sufficient fidelity across a wide frequency range. Zero voltage switching (ZVS) is achieved on all devices, ensuring highly efficient operation. The proposed system is validated through a lab bench prototype that generates pulses of 2.1 kV at a frequency of 400 kHz. Additionally, this system allows for adjustments in pulse duty ratio and slew rate, offering enhanced control and versatility for various applications.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Evaluation of a Potentially Wearable Device for Circulating Cell Monitoring</title>
<link href="https://hdl.handle.net/1721.1/159370" rel="alternate"/>
<author>
<name>Jang, Kyuho</name>
</author>
<id>https://hdl.handle.net/1721.1/159370</id>
<updated>2025-07-09T03:13:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Development and Evaluation of a Potentially Wearable Device for Circulating Cell Monitoring
Jang, Kyuho
Monitoring circulating cells is crucial for assessing cancer metastasis and evaluating the efficacy of chimeric antigen receptor (CAR) T-cell therapies. Traditional blood-draw methods face challenges such as discontinuous monitoring and potential cell degradation, leading to inaccurate estimations. In vivo flow cytometry (IVFC), which measures real-time cellular response to laser illumination such as fluorescence, presents a viable alternative. However, its application in humans has been limited by the bulky design of existing devices and configurations unsuitable for larger organisms. This thesis introduces a novel, wearable fluorescence IVFC device tailored for human use, featuring a compact laser diode and silicon photomultiplier (SiPM) to enhance portability and functionality. The device includes a specialized optical system similar to a fluorescent microscope, which optimizes the signal-to-noise ratio by maximizing cellular fluorescence and minimizing background interference. Experimental determination of the limit of detection (LOD) for the SiPM and device establishes their detection capabilities and operational stability. Theoretical evaluations confirm that while the device can detect individual fluorescent cells in vitro, its current configuration does not support this sensitivity in vivo. The thesis also proposes strategies to improve the device’s sensitivity, aiming for reliable in vivo detection of single fluorescent cells.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geometrically Programmed Nano-Resistors for Ultra-Robust Artificial Neural Network Accelerator</title>
<link href="https://hdl.handle.net/1721.1/159368" rel="alternate"/>
<author>
<name>Lee, Giho</name>
</author>
<id>https://hdl.handle.net/1721.1/159368</id>
<updated>2025-07-09T03:13:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Geometrically Programmed Nano-Resistors for Ultra-Robust Artificial Neural Network Accelerator
Lee, Giho
Despite the transformative advance of artificial intelligence (AI), AI processing hardware has not matched the speed and power-efficiency requirements, restricting the realization of AI's full potential and requiring innovation in AI hardware. The data transmission bottleneck between memory and processor has been identified as the main source of poor computing speed and power efficiency. By embedding neural weights in hardware to minimize data transmission, non-volatile memory (NVM)-based in-memory computing has been expected to deliver several orders of magnitude gains in speed and power efficiency. However, its practical implementation as a next-generation AI hardware has not been successful due to non-idealities in NVMs, including instability, poor state resolution, challenges in programming, and system-on-a-chip (SoC) incompatibility. This thesis introduces an ultra-accurate and ultra-robust geometrically programmed nano-resistor (GPNR) that can overcome NVM non-idealities and enable a commercial AI accelerator based on analog in-memory computing. State-of-the-art 6-bit conductance state resolution and 8-bit stability of the nano-resistor were realized by channel geometry optimization and a thermodynamically stable material, while the SoC-incompatible programming required by NVM devices is eliminated. To evaluate the computing performance, experimental vector-matrix multiplication (VMM) operations were performed, showing 5-bit accurate operation with a 28×28 GPNR array without selectors. Finally, an AI inference simulation was performed with a simplified 5×5-cropped MNIST digit image classification task. The GPNR-based final classification layer demonstrates 91.0% accuracy, comparable to the software limit of 93.2%. The outcomes of this research not only bolster the feasibility of GPNR technology in practical applications but also highlight the potential for future advancements in AI accelerators that can fully harness the capabilities of analog in-memory computing.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proton Exchange Membrane Electrolysis Applied to the Dehydration of Cow Milk</title>
<link href="https://hdl.handle.net/1721.1/159366" rel="alternate"/>
<author>
<name>Morice, Peter G.</name>
</author>
<id>https://hdl.handle.net/1721.1/159366</id>
<updated>2025-07-09T03:13:59Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Proton Exchange Membrane Electrolysis Applied to the Dehydration of Cow Milk
Morice, Peter G.
The dehydration of cow milk to powder form extends product shelf life and reduces product shipping costs and emissions. However, the existing thermal methods commonly employed by the dairy industry produce harmful emissions in the combustion of fossil fuels. This work explores the potential role of an electrochemical alternative method of proton exchange membrane (PEM) electrolysis in the process of concentrating milk solids. Although the thermodynamic specific energy of electrolysis at [mathematical notation] is high compared to existing thermal methods around [mathematical notation], experimental results for PEM electrolysis assisted by mechanical centrifugation suggest a specific energy closer to [mathematical notation] is possible. The energy competitive PEM electrolysis method has the additional benefit of zero emissions when supplied by renewable energy sources. Analysis of milk solids processed by the electrolysis assisted method shows promising levels of high fat, mineral, and total protein content, with liquid chromatography quantifying both casein and whey protein types retained in the solid product.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design/System Technology Co-optimization of Gallium Nitride High Electron Mobility Transistors for Next-G 3DIC Heterogeneous Integration of Gallium Nitride and Si CMOS</title>
<link href="https://hdl.handle.net/1721.1/159364" rel="alternate"/>
<author>
<name>Yadav, Pradyot Singh</name>
</author>
<id>https://hdl.handle.net/1721.1/159364</id>
<updated>2025-07-09T03:14:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design/System Technology Co-optimization of Gallium Nitride High Electron Mobility Transistors for Next-G 3DIC Heterogeneous Integration of Gallium Nitride and Si CMOS
Yadav, Pradyot Singh
With data rates pushing into the Tbps range, there is an urgent need for mmWave and sub-terahertz RF front ends and transistors. Gallium Nitride (GaN) transistors have continued to push the limits of high-power-density, high-frequency semiconductor devices. The future of GaN radio frequency (RF) circuit technology is at the intersection of device engineering, advanced packaging, and circuit design. Currently, these are three separate fields with little-to-no communication between them, resulting in critical limitations to today’s technology. These fields need to collaborate, cross-pollinate, and intersect in order to modernize and advance innovation for the next generation of RF front ends. To design the most efficient W- and G-band devices and systems, we must embrace a design/system-technology co-optimization (DTCO/STCO) approach that combines innovative GaN transistors with engineered linearity, novel heterogeneous integration with state-of-the-art Silicon (Si) bias and control circuitry, and advanced physics-based modeling. This thesis presents the development of a 3DIC consisting of GaN HEMTs and Si CMOS BEOL, in particular W-band GaN HEMTs, Si CMOS BEOL circuits in Intel16, and advanced packaging of dielets. The full chip continuum is investigated and innovated upon.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Al-Ni Nanofilm Powered Miniature Linear Actuator for Medical Devices</title>
<link href="https://hdl.handle.net/1721.1/159363" rel="alternate"/>
<author>
<name>Cotey, Samuel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/159363</id>
<updated>2025-07-09T03:13:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Al-Ni Nanofilm Powered Miniature Linear Actuator for Medical Devices
Cotey, Samuel A.
A medical device is sought to improve drug delivery options available to healthcare providers and patients; our initial focus is to develop a piston that can provide the power necessary to perform an injection from an ingestible device. While many methods to administer drugs currently exist, the administration method in many cases is largely driven by factors that supersede ease, convenience, or comfort for the patient [1]. Many patients are saddled with cumbersome drug regimens that expose them to the risk of complex and painful drug administration paths and dependence on medical sharps [2, 3]. For these patients, being able to take injectable drugs orally allows them to use what appear to be simple, traditional drug delivery methods in lieu of injections that are painful and inconvenient. In order to perform an injection with a device that fits within an ingestible form factor, a novel piston is required. A concept design for an Al-Ni nanofilm powered miniature linear actuator has been developed to perform jet injections from within the gastrointestinal anatomy of a patient. This actuator consists of a small pressure vessel filled with liquid alcohol that undergoes a phase change to gas and generates pressure that can be used to cycle a piston in a drug-loaded cylinder. Via an exothermic reaction, the nanofilm deposits thermal energy into the alcohol-filled pressure vessel to generate the pressure needed to perform a jet injection. Cylindrical pressure vessel chambers with a diameter of 7 mm and heights ranging from 3 mm to 7.5 mm were 3D printed and used to measure the peak internal pressure of the vessel as well as its work output. The piston was used to push incompressible fluid through a nozzle in order to characterize the actuator’s work output. Using Bernoulli’s equation, the pressure on the piston head as a function of piston location along the stroke length was determined to characterize actuator performance as a function of pressure vessel size. The pressure vessel and the piston were modeled theoretically and empirically in order to identify the relevant design parameters so the piston can be effectively incorporated into the overall injection device.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Droplet Based Microalgae Photobioreactor for Biofouling Prevention</title>
<link href="https://hdl.handle.net/1721.1/159361" rel="alternate"/>
<author>
<name>Callan, Tess A.</name>
</author>
<id>https://hdl.handle.net/1721.1/159361</id>
<updated>2025-07-09T03:14:00Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Droplet Based Microalgae Photobioreactor for Biofouling Prevention
Callan, Tess A.
Microalgae have a wide variety of applications aiding in sustainability, yet during the cultivation process, photobioreactor biofouling remains an issue. It blocks light from entering the reactor and necessitates reactor cleaning, ultimately reducing overall reactor productivity and increasing cultivation costs. Here we investigate a new type of reactor that removes the possibility of biofouling by growing the algae in aqueous droplets surrounded by oil that preferentially wets the reactor surface. We first look into growing the algae in droplets and discuss major parameters that will be impacted. Then, we show a droplet-based reactor that demonstrates the potential to scale the system with similar growth rates to industry. Finally, we investigate the impact on major costs to confirm the economic viability of transitioning to this reactor. Overall savings in the cultivation process, mainly from power reduction and biofouling prevention, are shown.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Business and Redevelopment Outline for the Re-Use of a Prime Site in South Boston</title>
<link href="https://hdl.handle.net/1721.1/158849.2" rel="alternate"/>
<author>
<name>Proman, Zachary D.</name>
</author>
<id>https://hdl.handle.net/1721.1/158849.2</id>
<updated>2025-06-17T03:13:05Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">A Business and Redevelopment Outline for the Re-Use of a Prime Site in South Boston
Proman, Zachary D.
This development and business plan considers the neighborhood context and current market conditions characterizing the subject site’s redevelopment potential. The subject site, further defined in this thesis, is a prime parcel of land in the South Boston neighborhood of Boston, MA currently improved and used for quick-serve restaurant operations. Proximate to the Seaport, Fort Point, and Dorchester, South Boston is surrounded by demand drivers resulting in explosive growth that make it one of the most desirable and expensive housing submarkets in the entire City of Boston. Development considerations are fully defined in the report including zoning, equity, financial projections, ground lease, and market-level factors. A conclusion is made on the feasibility of the proposed project with recommendations for next steps resulting from the modeled base-case scenario. Market assumptions and any unresolved development issues are clearly identified and discussed.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of four independently-actuated, complementary-sized control valves and nozzle groups to improve steam turbine efficiency</title>
<link href="https://hdl.handle.net/1721.1/159328" rel="alternate"/>
<author>
<name>Yeaple, Thomas L.</name>
</author>
<id>https://hdl.handle.net/1721.1/159328</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Optimization of four independently-actuated, complementary-sized control valves and nozzle groups to improve steam turbine efficiency
Yeaple, Thomas L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1984; Bibliography: leaf 79.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An experimental investigation of magnet geometry and hysteresis on simultaneous lift and guidance ferromagnetic suspensions</title>
<link href="https://hdl.handle.net/1721.1/159325" rel="alternate"/>
<author>
<name>Farley, Holt Leonard.</name>
</author>
<id>https://hdl.handle.net/1721.1/159325</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">An experimental investigation of magnet geometry and hysteresis on simultaneous lift and guidance ferromagnetic suspensions
Farley, Holt Leonard.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The impacts of labor-intensive roads in Colombia : a framework for analysis, research design and preliminary test.</title>
<link href="https://hdl.handle.net/1721.1/159323" rel="alternate"/>
<author>
<name>Borrero Mutis, Santiago.</name>
</author>
<id>https://hdl.handle.net/1721.1/159323</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">The impacts of labor-intensive roads in Colombia : a framework for analysis, research design and preliminary test.
Borrero Mutis, Santiago.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1978; Bibliography: leaves 150-153.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Winter weather types of the eastern North Pacific and adjacent coastal and island areas</title>
<link href="https://hdl.handle.net/1721.1/159313" rel="alternate"/>
<author>
<name>Kosco, George Francis.</name>
</author>
<author>
<name>Dorsett, John O. F.</name>
</author>
<id>https://hdl.handle.net/1721.1/159313</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1940-01-01T00:00:00Z</published>
<summary type="text">Winter weather types of the eastern North Pacific and adjacent coastal and island areas
Kosco, George Francis.; Dorsett, John O. F.
Thesis: M.S., Massachusetts Institute of Technology, Department of Meteorology, 1940; Includes bibliographical references (leaves [44]-[45]).
</summary>
<dc:date>1940-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aircraft leasing and airline corporate strategy</title>
<link href="https://hdl.handle.net/1721.1/159302" rel="alternate"/>
<author>
<name>Setyopurnomo, Rudy.</name>
</author>
<id>https://hdl.handle.net/1721.1/159302</id>
<updated>2025-12-06T03:20:32Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Aircraft leasing and airline corporate strategy
Setyopurnomo, Rudy.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references (leaves 130-131).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An interactive statistics package for the social sciences.</title>
<link href="https://hdl.handle.net/1721.1/159299" rel="alternate"/>
<author>
<name>Lebling, Peter David.</name>
</author>
<id>https://hdl.handle.net/1721.1/159299</id>
<updated>2025-12-06T03:20:35Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">An interactive statistics package for the social sciences.
Lebling, Peter David.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1973; Bibliography: leaf 92.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>International political behavior: historical analysis of Scandinavia and the Netherlands.</title>
<link href="https://hdl.handle.net/1721.1/159298" rel="alternate"/>
<author>
<name>Deber, Raisa Rebecca Sarah Berlin.</name>
</author>
<id>https://hdl.handle.net/1721.1/159298</id>
<updated>2025-12-06T03:20:34Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">International political behavior: historical analysis of Scandinavia and the Netherlands.
Deber, Raisa Rebecca Sarah Berlin.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1971; Bibliography: leaves 176-185.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of dietary protein deficiency during different stages of pregnancy on fetal development and maternal body composition and behavior</title>
<link href="https://hdl.handle.net/1721.1/159296" rel="alternate"/>
<author>
<name>Zartarian, Gary Michael.</name>
</author>
<id>https://hdl.handle.net/1721.1/159296</id>
<updated>2025-12-06T03:20:37Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">Effects of dietary protein deficiency during different stages of pregnancy on fetal development and maternal body composition and behavior
Zartarian, Gary Michael.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1979; Includes bibliographical references.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An examination of private capital available to the railroad industry.</title>
<link href="https://hdl.handle.net/1721.1/159295" rel="alternate"/>
<author>
<name>Wait, Barbara Rust.</name>
</author>
<id>https://hdl.handle.net/1721.1/159295</id>
<updated>2025-12-06T03:20:36Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">An examination of private capital available to the railroad industry.
Wait, Barbara Rust.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1979; Bibliography: leaves 103-106.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of eddies on fCO₂ in the North Pacific surface ocean</title>
<link href="https://hdl.handle.net/1721.1/159266" rel="alternate"/>
<author>
<name>Padalino, Christine</name>
</author>
<id>https://hdl.handle.net/1721.1/159266</id>
<updated>2025-12-06T03:20:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The effect of eddies on fCO₂ in the North Pacific surface ocean
Padalino, Christine
We investigate the impact of mesoscale eddies in the North Pacific on surface ocean fCO₂ using in-situ measurements from the Surface Ocean CO₂ Atlas (SOCAT) to assess the importance of mesoscale dynamics for global CO₂ fluxes. We sort SOCAT measurements from 2000-2019 by whether or not they are in an eddy, perform basin-scale analysis, and present case studies. The results show lower fCO₂ in both anticyclones and cyclones compared to the background ocean, with the magnitude of the anomaly varying seasonally and spatially. Due to the many potential mechanisms of the eddy impacts, we analyze a temperature-normalized fCO₂ to tease apart the impact of altered temperature from a biological response or mixing. With this method, we find evidence that eddies are increasing the background biological activity. To further attempt to separate the different effects eddies could have on surface fCO₂ and CO₂ fluxes, we identify two long-lived eddies with many measurements over their lifetimes to use as case studies. We find that both the anticyclonic and cyclonic eddy initially increase fCO₂, but at the end of the lifetime mixing likely plays a role in counteracting temperature effects. The investigation of the varying effects the mesoscale can have on CO₂ fluxes not only allows for a better understanding of how eddies will affect surface fCO₂ but also provides insight into the potential impact on global-scale estimates. Our analysis shows that on average, while mesoscale eddies modulate surface ocean fCO₂, they do not detectably enhance the CO₂ flux in the North Pacific.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nothing Unwanted: Prototyping Matter out of Place</title>
<link href="https://hdl.handle.net/1721.1/159265" rel="alternate"/>
<author>
<name>Wang, Yiqing</name>
</author>
<id>https://hdl.handle.net/1721.1/159265</id>
<updated>2025-12-06T03:20:29Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Nothing Unwanted: Prototyping Matter out of Place
Wang, Yiqing
What we discard never truly disappears.  &#13;
&#13;
Accompanying the societal shift from post-war scarcity to a consumerist culture, the contemporary building industry relies on abundant virgin materials, machinery, and a global transportation network. Immersed in this culture of convenience, architecture has limited agency to engage responsibly and intimately with reclaimed materials. The design of waste, inevitably, often symbolizes the separation between society and its waste, marked by an intention to remove, re-form, and re-standardize. Zero-waste systems and the circular economy often inadvertently create hidden wastes, labor, and carbon footprints, leading to an uneven distribution of environmental harms. &#13;
&#13;
The thesis explores the unique materiality of municipal waste, linking human living with their unwanted with an architectural prototype. The new "unwanted" architecture integrates local waste into an adaptive inventory, avoiding over-precision, over-purification, and over-modularization. Based on the characteristics of US municipal waste, local-sourced garbage, including e-waste, plastics, wood, paper, metal, dust, and food waste, is studied, calibrated, and assembled to create building components and rooms. The bottom-up approach offers a way to compute heterogeneous materials with digital methods and low-tech on-site operations to minimize environmental impact. The richness of space blurs the boundaries between domesticity and abjection and between the sublime and the disgusting. &#13;
&#13;
The prototypes aim to rebuild both the Functional and Emotional Unwanted and re-imagine a scalable and operable building system. The design contrasts the previously visible waste in architectural design with today's invisible waste stream due to sophisticated waste management. It demonstrates an intimate approach to the gigantic amount of urban waste, emphasizing its cultural, personal, and collective dimensions.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mobile Multi-Bounce LiDAR</title>
<link href="https://hdl.handle.net/1721.1/159263" rel="alternate"/>
<author>
<name>Somasundaram, Siddharth</name>
</author>
<id>https://hdl.handle.net/1721.1/159263</id>
<updated>2025-12-06T03:20:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mobile Multi-Bounce LiDAR
Somasundaram, Siddharth
Single-photon avalanche diodes (SPADs) are emerging sensors that can measure the propagation of light in a scene, capturing higher-order reflections, shadows, and light transport that ordinary cameras cannot. Measurement of these multi-bounce light paths is especially useful for non-line-of-sight (NLOS) imaging. The increasing availability of SPAD sensors on mobile devices (e.g. iPhone Pro LiDAR) raises the potential to enable NLOS capabilities on consumer devices in the future. Currently, these sensors are primarily employed for LiDAR-based depth estimation, with untapped potential in other applications. In light of recent advances in SPAD device development, the timing is opportune to revisit the applicability of multi-bounce LiDAR techniques on consumer-grade mobile devices.&#13;
&#13;
This thesis extends the applicability of multi-bounce LiDAR techniques from research-grade SPAD hardware to consumer-grade mobile LiDARs. First, we enable single-shot capture of two-bounce signals and remove the need for laser scanning by developing a tomographic formulation for two-bounce non-line-of-sight imaging. Second, we enable real-time non-line-of-sight capture at eye-safe laser power under object and camera motion. Our approach is inspired by principles from burst photography. &#13;
&#13;
We implement and evaluate the proposed algorithms in simulations and on experimental SPAD hardware. We also demonstrate real-time non-line-of-sight tracking on a consumer-grade smartphone LiDAR. Potential future applications of our results include "X-ray vision" in AR/VR, full-body tracking for AR headsets, room scanning for hard-to-reach areas, collision avoidance for autonomous vehicles, and robotic navigation.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward the Understanding of Brain’s Molecular Language</title>
<link href="https://hdl.handle.net/1721.1/159206" rel="alternate"/>
<author>
<name>Zoghi Tavana, Sara</name>
</author>
<id>https://hdl.handle.net/1721.1/159206</id>
<updated>2025-11-20T03:14:57Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Toward the Understanding of Brain’s Molecular Language
Zoghi Tavana, Sara
What underlies the extraordinary capacity of neurons to process information, form memories, and orchestrate complex behaviors? Over a century of research has established that proteins are the central functional molecules of the cell, yet translating this knowledge into an understanding of emergent neural phenomena and effective treatments for neurological disorders remains elusive. We argue that this paradox stems from studying proteins in isolation, overlooking how their function is fundamentally shaped by spatial context and interactions with DNA, RNA, other proteins, lipids, carbohydrates, and metabolites. This coordinated molecular interplay, we posit, ultimately gives rise to the complex neural circuits and behaviors observed in higher organisms. Intriguingly, Alfred Binet foreshadowed this perspective as early as 1889 when he suggested that even simple, single-celled organisms—lacking anatomically defined nervous systems—might harbor a "diffuse nervous system" of molecular interactions within their cytoplasm enabling complex behaviors. However, the historical progression of neuroscience, largely dictated by available methodologies and oscillating between siloed reductionist molecular approaches and systems-level analyses, has not yet been able to fully capture this intricate molecular choreography underlying neural function. In this review, we examine how studying molecular species in isolation, while yielding important insights, has ultimately proven insufficient for understanding emergent neural functions. We propose that recent technological advances in expansion microscopy, molecular anchoring, machine learning-enabled protein detection, and cryo-fixation now make it possible to map molecular networks in their native context. This integrative approach promises to illuminate the molecular "language" of the brain, shedding light on how collective interactions among biomolecules give rise to neuronal emergent abilities—and guide future therapeutic innovations.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanism of hydrolysis of triphenylsilyl fluoride</title>
<link href="https://hdl.handle.net/1721.1/159195" rel="alternate"/>
<author>
<name>Esteve Campderá, Ramón María.</name>
</author>
<id>https://hdl.handle.net/1721.1/159195</id>
<updated>2025-11-20T03:14:58Z</updated>
<published>1948-01-01T00:00:00Z</published>
<summary type="text">Mechanism of hydrolysis of triphenylsilyl fluoride
Esteve Campderá, Ramón María.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemistry, 1948; Includes bibliographical references (leaves 32-33).
</summary>
<dc:date>1948-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extractive and azeotropic distillation</title>
<link href="https://hdl.handle.net/1721.1/159194" rel="alternate"/>
<author>
<name>Hughes, Richard R.</name>
</author>
<id>https://hdl.handle.net/1721.1/159194</id>
<updated>2025-11-20T03:14:59Z</updated>
<published>1947-01-01T00:00:00Z</published>
<summary type="text">Extractive and azeotropic distillation
Hughes, Richard R.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1947; Bibliography: leaves 94-95.
</summary>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Railroad reliability and freight car utilization : an assigned fleet model.</title>
<link href="https://hdl.handle.net/1721.1/159190" rel="alternate"/>
<author>
<name>Assarabowski, Richard John.</name>
</author>
<id>https://hdl.handle.net/1721.1/159190</id>
<updated>2025-11-20T03:15:01Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Railroad reliability and freight car utilization : an assigned fleet model.
Assarabowski, Richard John.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1976; Bibliography: leaves 132-133.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Brewing Resilience: A Case Study in Adapting Small Business Strategy with Systems Thinking</title>
<link href="https://hdl.handle.net/1721.1/159152" rel="alternate"/>
<author>
<name>Jones, Andrew C.</name>
</author>
<id>https://hdl.handle.net/1721.1/159152</id>
<updated>2025-11-12T05:01:55Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Brewing Resilience: A Case Study in Adapting Small Business Strategy with Systems Thinking
Jones, Andrew C.
This thesis explores how systems thinking—a methodology often reserved for large organizations—can be effectively applied to small businesses facing complex challenges. Using Lamplighter Brewing Co., an independent microbrewery in Cambridge, Massachusetts, as a case study, the research examines how the brewery adapted to the disruptions of the COVID-19 pandemic and the evolving economic landscape that followed. It documents the iterative application of systems thinking principles to identify root causes, leverage points, and actionable solutions to address issues such as declining revenue, rising costs, and misaligned organizational structures.&#13;
Lamplighter's interventions ranged from restructuring its management and marketing teams to pivoting its sales and production strategies. By leveraging tools such as causal loop diagrams and stock-and-flow models, the brewery uncovered systemic dynamics driving its performance. The research highlights the importance of iterative learning, targeted interventions, and holistic analysis in fostering resilience and sustainability in resource-constrained environments.&#13;
While focused on the craft brewing industry, the findings offer transferable insights for small businesses in similarly dynamic sectors, demonstrating that systems thinking can empower smaller organizations to navigate complexity, adapt strategically, and thrive amidst uncertainty.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Learning-Based Classification of Phonotraumatic Vocal Hyperfunction Severity from Stroboscopic Images</title>
<link href="https://hdl.handle.net/1721.1/159150" rel="alternate"/>
<author>
<name>Balaji, Purvaja</name>
</author>
<id>https://hdl.handle.net/1721.1/159150</id>
<updated>2025-11-12T05:01:41Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Deep Learning-Based Classification of Phonotraumatic Vocal Hyperfunction Severity from Stroboscopic Images
Balaji, Purvaja
Phonotraumatic vocal hyperfunction (PVH) is a vocal disorder characterized by damaged vocal folds from excessive or abusive voice use. Clinical assessment of PVH relies on time-consuming videostroboscopy examination, which poses challenges for large-scale clinical studies. We address the need for more efficient clinical assessment tools by proposing deep learning approaches for automatically detecting PVH severity from stroboscopic images. One of the main challenges in building deep learning models for this task is a lack of labeled stroboscopy data. Motivated by this challenge, we explore two approaches: direct classification and segmentation-then-classification. In the segmentation-then-classification approach, we first train a model to segment the glottis, a clinically relevant part of the vocal fold anatomy. Then, we use the predicted segmentation along with the stroboscopic image as inputs into a classification model. This approach helps to guide the model towards key anatomical features. We achieve up to 0.53 accuracy in four-class PVH severity prediction with the direct classification approach. Incorporating glottal segmentations improves the accuracy to 0.64, underscoring the value of providing anatomically-informed segmentations when assessing PVH severity. By creating an automated PVH severity tool, our work has the potential to help clinicians more efficiently monitor disease progression and to facilitate large-scale screening, thereby contributing to improved patient care.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intuitive Audio Interaction and Control in Multi-Source Environments</title>
<link href="https://hdl.handle.net/1721.1/159149" rel="alternate"/>
<author>
<name>Oduniyi, Erick O.</name>
</author>
<id>https://hdl.handle.net/1721.1/159149</id>
<updated>2025-11-12T05:01:34Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Intuitive Audio Interaction and Control in Multi-Source Environments
Oduniyi, Erick O.
In an increasingly noisy world, managing auditory focus is a persistent challenge. This thesis explores how embodied interactions—primarily head tracking, alongside experiments with gaze tracking, speech commands, and audio-visual segmentation—can enhance user control over complex auditory environments. By linking head orientation to volume adjustments, we investigated whether natural, instinctive movements could serve as intuitive, hands-free mechanisms for isolating and amplifying relevant sounds. User studies revealed that head tracking is effective in structured audio contexts, such as music, where distinct sources are easily separable. However, its utility diminishes in dense, overlapping conversations, highlighting the need for finer control mechanisms. While gaze and segmentation offer promising refinements, cognitive load and system responsiveness remain key challenges. These findings underscore that embodied audio interaction must be adaptive, content-aware, and seamlessly integrated with user intent. This research contributes to human-computer interaction by demonstrating both the potential and limitations of movement-based audio control. Future work should refine multimodal fusion, improve segmentation accuracy, and enhance accessibility to create systems that dynamically respond to users’ natural behaviors—reducing cognitive strain and enabling more fluid, user-centric auditory experiences.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Thread Maturity in Manufacturing: A Cross-Industry Study Using the Model-Based Enterprise Capability Assessment Framework</title>
<link href="https://hdl.handle.net/1721.1/159146" rel="alternate"/>
<author>
<name>Peters, Michael Scott</name>
</author>
<id>https://hdl.handle.net/1721.1/159146</id>
<updated>2025-11-12T05:01:12Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Digital Thread Maturity in Manufacturing: A Cross-Industry Study Using the Model-Based Enterprise Capability Assessment Framework
Peters, Michael Scott
Modern-day manufacturing organizations find themselves in volatile and competitive markets with increasing pressure to deliver products faster, at lower cost, and with increased quality. In response to this pressure, many organizations are considering how technological advancements may improve the efficiency of their product development operations. Leading organizations have digitally transformed their businesses by shifting away from manual processes, static documents, and siloed operations toward automation, model-based data, and interconnectivity enabled by a digital thread. Accordingly, organizations pursuing the competitive edge offered through the digitalization of their business operations have often used different assessment tools to benchmark their current capabilities and define their vision for the future of their organizational operations.&#13;
&#13;
This thesis proposes a set of model-based and digital thread capabilities that are central to the long-term success of product development operations, along with a corresponding maturity model that may be used to identify gaps between current- and future-state capability implementation. Using the proposed capability maturity model, known as the Model-based Enterprise Capability Assessment Framework (MECAF), this study evaluated and compared capability maturity across various organizations in the Aerospace and Defense, Automotive, and Heavy Machinery industries. Through interviews with each participating organization, this thesis also explores the expected benefits, common challenges, and anticipated value of implementing model-based capabilities. Additionally, this thesis proposes an approach to bridging the gap from strategy to implementation based on the lessons learned and best practices of the organizations studied.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Productivity in the Workplace for Product Development Teams</title>
<link href="https://hdl.handle.net/1721.1/159145" rel="alternate"/>
<author>
<name>Farfan Perdomo, Jorge</name>
</author>
<id>https://hdl.handle.net/1721.1/159145</id>
<updated>2025-11-12T05:01:01Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Productivity in the Workplace for Product Development Teams
Farfan Perdomo, Jorge
Productivity is a measure of the value generated for every hour worked. In a product development team, productivity can be affected by endogenous and exogenous factors, such as biological rhythms, work style, availability, work interruptions, team size, location, and the management strategies taken in a project. These factors will have an effect on the amount of effective work value generated in a workweek.&#13;
&#13;
A mathematical model and a Monte Carlo simulation were used to quantitatively assess the impact of these factors on the estimated cost and duration of a product development project. Based on the model results, we determined that workweek capacity and interruptions in the workplace are central to productivity. In addition, we demonstrated that combining different management strategies could be used to bring the project back on schedule and within budget to reduce the effects of these inefficiencies due to diverse endogenous and exogenous factors.&#13;
&#13;
For these reasons, this case study on a product development project will provide insight to engineering managers and project leaders about the effects of these inefficiencies in the workplace. The findings will help pave the way toward a more accurate project estimation and better modeling of project dynamics to reduce the amount of uncertainty in product development teams.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Implementation of a Nonblocking Randomized Work Stealing Scheduler</title>
<link href="https://hdl.handle.net/1721.1/159144" rel="alternate"/>
<author>
<name>Ali, Sabiyyah</name>
</author>
<id>https://hdl.handle.net/1721.1/159144</id>
<updated>2025-11-12T05:00:47Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design and Implementation of a Nonblocking Randomized Work Stealing Scheduler
Ali, Sabiyyah
This thesis presents FLCN (Free of Locks, Cilk is Now), a nonblocking work-stealing runtime scheduler that supports Cilk multithreaded programming. The existing OpenCilk runtime system uses lock-based synchronization and thus suffers from lock contention, does not provide progress guarantees, and can experience performance degradation with high worker counts and in multiprogrammed scenarios. FLCN leverages the existing runtime system’s provably efficient scheduling algorithm and introduces several new data structures and concurrency protocols to form a correct and performant lock-free system. In addition to enabling fork-join task parallelism, FLCN supports other Cilk features such as reducer hyperobjects. Through analyzing the performance of FLCN on various canonical benchmark programs, I find that for programs with low amounts of work, FLCN performs worse than the existing runtime. However, for most programs, I find that FLCN is either competitive with or marginally outperforms the existing runtime. Additionally, FLCN consistently exhibits higher scalability than the existing runtime, performing especially well when using hyperthreads and in multiprogrammed environments. I also outline future work that could make FLCN a more comprehensive and performant system, including ideas for improving FLCN’s work efficiency that would in turn improve its performance on programs with low amounts of work.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Role of Foundation Models for Training Generalist Robot Learning Policies</title>
<link href="https://hdl.handle.net/1721.1/159143" rel="alternate"/>
<author>
<name>Feng, Eugenia Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/159143</id>
<updated>2025-11-12T05:00:31Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Exploring the Role of Foundation Models for Training Generalist Robot Learning Policies
Feng, Eugenia Y.
Numerous methodologies for solving goal-conditioned short-horizon tasks require hundreds of expert demonstrations, but these demonstrations are effort-intensive to collect, reducing the scalability of such approaches. Even approaches that do work may have difficulty generalizing to slightly different settings. In this work, we explore two approaches to training generalist robot learning policies using large-scale foundation models. &#13;
&#13;
The first approach aims to use a video foundation model to generate task-conditioned synthetic demonstrations at scale from a single expert demonstration. The objective is to leverage these synthetic demonstrations as a proxy for expert demonstrations to train models that learn rewards from expert videos for solving complex visual RL problems. &#13;
&#13;
The second approach seeks to improve upon the generalization ability of behavior cloning policies. Moving away from the use of videos for training, we explore using privileged representations such as keypoints or object-poses learned using open-set foundation models. By tracking pose or keypoint correspondences, the aim is to minimize the required number of demonstrations to achieve task completion and improve generalization within classes of objects.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prompt Injection Generation Using Small Language Models with Reinforcement Learning with Artificial Intelligence Feedback</title>
<link href="https://hdl.handle.net/1721.1/159142" rel="alternate"/>
<author>
<name>Gupta, Aneesh</name>
</author>
<id>https://hdl.handle.net/1721.1/159142</id>
<updated>2025-05-20T12:38:32Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Prompt Injection Generation Using Small Language Models with Reinforcement Learning with Artificial Intelligence Feedback
Gupta, Aneesh
Large language models (LLMs) have become an integral part of many fields, from customer support automation to research assistants. However, despite their growing adoption, they face significant challenges, particularly when it comes to safety in sensitive contexts. Existing methods like Reinforcement Learning with Human Feedback (RLHF) and keyword filtering have contributed to improving the robustness of these models, but these approaches are very resource-intensive and the models can still be vulnerable to malicious attacks like prompt injections and jailbreaking. One notable limitation in testing defenses against such attacks is the scarcity of appropriate datasets. This thesis investigates the use of small language models (SLMs) to generate goal hijacking messages, a subset of prompt injection messages. Techniques such as LoRA fine-tuning, and full fine-tuning of even smaller models, are employed for this short-form text generation task. We also introduce a fine-tuned SLM enhanced with Reinforcement Learning with Artificial Intelligence Feedback (RLAIF), which removes reliance on slow human feedback by using faster AI-generated feedback instead. By optimizing the reference model and reward functions, we improve alignment with ground-truth prompt injection messages while addressing issues such as mode collapse and overfitting. These findings show promise, and further research is necessary to determine how well the approach can generalize to other domains and perform in real-world scenarios. Future work is likely to focus on multilingual datasets and distributed computation to further extend the applicability and efficiency of the method.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Diffusion Models to Enable Efficient Sampling for Task and Motion Planning on a Panda Robot</title>
<link href="https://hdl.handle.net/1721.1/159141" rel="alternate"/>
<author>
<name>Johnson, Quincy</name>
</author>
<id>https://hdl.handle.net/1721.1/159141</id>
<updated>2025-11-12T05:00:21Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Learning Diffusion Models to Enable Efficient Sampling for Task and Motion Planning on a Panda Robot
Johnson, Quincy
A search-then-sample approach to bilevel planning in the context of task and motion planning is one method of effectively solving multi-step robotics problems. In this planning framework, high-level plans of abstract actions are refined into low-level continuous transitions by sampling controller parameters associated with each action. Efficiently sampling these parameters remains a significant challenge, as exhaustive searches often become computational bottlenecks, especially for tasks requiring complex or multimodal parameter distributions. Moreover, relying on samplers hand-designed by humans is both impractical and limiting. To address these challenges, we propose using diffusion models to learn efficient sampling distributions from demonstrations. By avoiding the limitations of hand-specified and naïve sampling methods, our approach enhances planning efficiency and achieves superior performance across diverse tasks that require learning multimodal parameter distributions to solve successfully.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling, Design, and Assembly of Spring Tires</title>
<link href="https://hdl.handle.net/1721.1/159140" rel="alternate"/>
<author>
<name>Lu, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/159140</id>
<updated>2025-11-12T05:00:17Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Modeling, Design, and Assembly of Spring Tires
Lu, Michael
With a renewed interest in the Moon and the need for autonomous lunar rovers that drive longer distances and operate over extended durations, designing efficient and robust mobility systems is paramount. Created by NASA Glenn Research Center, the spring tire is a compliant airless tire engineered for planetary rover missions in lunar and Martian environments. It consists of hundreds of coiled springs woven together to create a toroidal-shaped mesh wheel that can deform to uneven terrain, providing additional durability and traction. This work aims to apply this technology to two robotic testbeds: ERNEST, an autonomous lunar traversal rover built at NASA Jet Propulsion Laboratory, and IPEx, a lunar regolith mining robot built at Kennedy Space Center. This thesis discusses the modeling of these spring tires with numerical methods, along with the design of two spring tire prototypes for use on the aforementioned rover platforms. A streamlined assembly process for these compliant wheels is also outlined, along with the results of compression testing, rough terrain driving, and drawbar pull testing to assess their performance.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of High Harmonic Fast Waves Interactions in the Scrape off Layer of NSTX-U</title>
<link href="https://hdl.handle.net/1721.1/159138" rel="alternate"/>
<author>
<name>De Levante Rodriguez, Ricardo Antonio</name>
</author>
<id>https://hdl.handle.net/1721.1/159138</id>
<updated>2025-05-20T12:38:29Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Study of High Harmonic Fast Waves Interactions in the Scrape off Layer of NSTX-U
De Levante Rodriguez, Ricardo Antonio
High-harmonic fast wave (HHFW) heating experiments in the National Spherical Torus Experiment (NSTX) at Princeton Plasma Physics Laboratory (PPPL) have shown that up to 60% of the injected power can be lost in the Scrape-Off Layer (SOL) when the fast wave is able to propagate in front of the antenna [Hosea, Phys. Plasmas 15, 056104 (2008)]. This work discusses progress in modeling HHFW propagation and losses in the divertor region using more realistic SOL plasmas in the NSTX-U SOL 2D geometry. Previous RF studies assume density is a function only of magnetic flux, decaying exponentially, which may be insufficient to accurately determine the wavefield, especially in the divertor and high-field side plasma regions. In this work, the temperature profile is first evaluated by solving the non-linear heat conduction equation using a finite element approach in the Petra-M workbench assuming axisymmetry. A 2D density profile is then obtained from a prescribed outer midplane radial profile assuming pressure is uniform on a flux surface. This approach results in density and temperature profiles in which the strong asymmetric nature of diffusion is successfully captured. In particular, it is shown that for a parallel-to-perpendicular heat conduction anisotropy ratio of up to 10⁸, the expected exponentially decaying temperature profile is obtained using a non-linear iterative solver with proper mesh refinement conditions. Furthermore, this work focuses on investigating the effect of the SOL plasma density profile on the fast-wave propagation for different antenna phasings. The simulation results show that the gradient of the midplane density profile affects the wavefield pattern. As the density profile broadens, the wavefield intensity is reduced in the SOL and increased in the core. Finally, HHFW power in the plasma was studied by adding electron-ion collision power dissipation as a proxy for HHFW power deposition. The simulation results show that increasing the density gap width between the antenna and the core results in more power deposited in the SOL relative to the core.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scalable and Sustainable Microwave Power Beaming to&#13;
Mobile Lunar Surface Assets</title>
<link href="https://hdl.handle.net/1721.1/159137" rel="alternate"/>
<author>
<name>Ng, Chu Pang Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/159137</id>
<updated>2025-11-12T05:00:04Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Scalable and Sustainable Microwave Power Beaming to&#13;
Mobile Lunar Surface Assets
Ng, Chu Pang Alex
Lunar missions are hindered by the challenges of maintaining continuous operation, especially during the 14-day lunar night, when solar power sources may be unavailable, causing significant mission delays and limiting efficiency. Frequent returns to charging stations supplied by fixed lunar surface power plants further disrupt workflows and restrict the operational range of lunar vehicles. To address these issues and enhance lunar mission performance, a continuous, secure, and shareable power source is essential. While nuclear power and larger battery systems are viable options for continuous lunar energy supply, they pose challenges such as safety risks, complex deployment, and limited scalability. This thesis focuses on exploring microwave-beamed power systems as a flexible and scalable solution for sustained lunar operations. Ideally, the power source would enable 24/7 operations without requiring vehicles to return to base stations, allowing for unrestricted navigation across the lunar surface, including in permanently shadowed regions (PSR). In addition, it would support the construction of critical infrastructure, accelerating the development of the lunar economy. This thesis aims to support sustained lunar exploration and infrastructure development by exploring the design space for microwave-beamed power systems under three different demand use cases of increasing scale, loosely corresponding to the three phases of the Artemis program: Local (Shackleton Crater), Regional (navigation between equatorial regions and South Pole), and Global (entire lunar surface). A case study focused on the YUTU-2 lunar rover investigates alternative architectures for each use case, comparing power beaming from tall towers vs. satellites. Evaluation reveals that the most effective solution for the Local use case is a tower-based approach featuring a single 100 m tower, &gt;10,000 solar modules, and a 1 GHz operating frequency, at a cost of $3.4M/W. For the Regional use case, a satellite-based solution is preferred, utilizing 6-7 satellites per plane, 210,000 solar modules, and a frequency of 1.0 GHz, at a cost of $1.7M/W–$1.8M/W. The Global use case also favors a satellite-based approach, employing 6 satellites per plane across 5 polar planes, with varying numbers of solar modules and utilizing a frequency of 1 GHz, at a cost of $0.8M/W. The trade studies showed that larger receiver antenna areas and lower frequencies improve performance and cost-effectiveness. Furthermore, larger microwave-beamed power systems leverage economies of scale, lowering the cost per watt by an average of $1M/W when scaling from the Regional to the Global power system, with potential for further reductions through future expansions.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detecting Expertise Influence on Teamwork in Sustainable Urban Design Workshops through a System Model</title>
<link href="https://hdl.handle.net/1721.1/159136" rel="alternate"/>
<author>
<name>Li, Chen</name>
</author>
<id>https://hdl.handle.net/1721.1/159136</id>
<updated>2025-05-20T12:38:28Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Detecting Expertise Influence on Teamwork in Sustainable Urban Design Workshops through a System Model
Li, Chen
The design of sustainable urban communities near transportation hubs, such as train stations, may play a vital role in enhancing neighborhoods by fostering new jobs, encouraging mixed-use developments, and promoting a cleaner environment. The engagement of experts and non-experts is often promoted as part of the urban planning process, yet workshops, while motivating, do not necessarily affect the systems design and long-term sustainability of the neighborhood in a substantive way.&#13;
 &#13;
Prior studies present methods for detecting teamwork during the design of complex systems, including model-based co-creation and urban design workshops. While interactive model-based workshops promote increased engagement of non-experts, the traditional role of experts in framing the design options and the workshop dialogue remains. This thesis research seeks to examine how expertise shapes decision-making in urban sustainability contexts using enhanced system models. &#13;
 &#13;
The research approach focuses on sustainable urban design workshops for compact city development, following three key steps. First, a neighborhood system model incorporating a commute flow simulator is developed to support collaborative exploration and design decision-making processes. Second, during a pilot experimental workshop, participants are divided into control and treatment groups and challenged to design a vibrant community with economic, social, and environmental benefits. The treatment group receives an expert-proposed, advocated solution to assess its impact on exploration and decision-making. Finally, results are analyzed using Large Language Models (LLMs) and statistical methods to assess how expert-driven solutions impact teamwork collaboration, decision-making speed, and final design alignment with the advocated solution.&#13;
&#13;
While the pilot workshop primarily serves to validate the approach and test the methodology, conclusive results cannot be drawn due to its exploratory nature. Nevertheless, this research successfully developed a robust urban design system model, enabling stakeholders to generate innovative solutions that foster a thriving community. Additionally, it established a methodology to advance the understanding of expertise in teamwork dynamics, laying a strong foundation for future studies in teamwork analysis and urban design challenges.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of Multi-Z Impurity Transport in Tokamaks using Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/159133" rel="alternate"/>
<author>
<name>Johnson, Jamal</name>
</author>
<id>https://hdl.handle.net/1721.1/159133</id>
<updated>2025-05-20T12:38:27Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Investigation of Multi-Z Impurity Transport in Tokamaks using Neural Networks
Johnson, Jamal
Achieving clean, sustainable energy at scale is a pressing global challenge. Fusion of light elements holds significant potential to address this critical need. While only experimental fusion reactors are currently operational, significant progress is being made in the research and design of near-future tokamak fusion power plants. Reactor success will depend on a comprehensive understanding of heat and particle transport, including the role of impurities. This thesis focuses on the development of machine-agnostic neural network surrogates for TGLF, designed to predict impurity transport coefficients alongside heat and electron particle fluxes in DD plasmas. Training data are derived from synthetic fluxes generated for L, H, and I confinement modes in Alcator C-Mod, DIII-D, and ASDEX-Upgrade. To reduce training complexity, shot data are discretized by radius, and networks are developed at six ρ coordinates: 0.2, 0.4, 0.6, 0.7, 0.8, and 0.9. Fifteen plasma parameters are selected as inputs to the neural networks after examining TGLF flux sensitivities across all five output channels. Predicted impurity fluxes for arbitrary charge states and masses, ranging from ⁴He to ¹⁸⁴W, are used to derive diffusive and convective transport coefficients. Three types of synthetic TGLF data are created and applied to network training to produce accurate models. The primary synthetic data type approximates experimental data by sampling within a perturbation range of ±10% around a given shot. Supporting data types enhance network performance by improving trends in single-parameter (1D) scans and addressing areas of highest network uncertainty. Hyperparameter optimization and testing resulted in highly accurate networks. Testing set relative errors averaged over ρ = 0.4–0.7 and 0.9 show approximate deviations of 0.12 ± 0.029 for heat flux and 0.42 ± 0.095 for particle flux channels. However, error metrics at ρ = 0.2 and 0.8 require location-specific tuning and potentially more data to match the accuracy achieved at other radii. The networks are used to analyze boron and carbon impurity peaking within machine-specific H-modes. Their predictions are then compared to published results. Qualitative results for boron peaking correlations in ASDEX-Upgrade are clearly reproduced, while carbon peaking trends in DIII-D are weaker. Sparse DIII-D data, which also includes atypical advanced modes, is believed to have contributed to reduced accuracy in these cases. Using H-mode shots spanning low to high local collisionality, impurity diffusion trends with charge state (Z) in ITG- and TEM-dominated plasmas were examined, showing good agreement with published studies. Additionally, analysis of network-derived convective transport shows that Z-sensitivity increases with collisionality. Network scans of the ion and electron heat flux responses to temperature gradients also reveal the clear presence of a critical gradient at all radii. These results demonstrate that the neural networks developed in this work can reliably reproduce TGLF results and deliver fast predictions of heat, electron particle, and impurity transport in tokamaks.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Urban Mining &amp; Regenerative E-Waste Ecosystems: Visions towards Sustainable Entrepreneurial Futures for Informal Settlements and Recycling Communities</title>
<link href="https://hdl.handle.net/1721.1/159131" rel="alternate"/>
<author>
<name>Pierre, Georine</name>
</author>
<id>https://hdl.handle.net/1721.1/159131</id>
<updated>2025-05-20T12:38:24Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Urban Mining &amp; Regenerative E-Waste Ecosystems: Visions towards Sustainable Entrepreneurial Futures for Informal Settlements and Recycling Communities
Pierre, Georine
In the face of the growing challenge of urban waste, especially within rapidly expanding informal settlements projected to house over 45% of the global population by 2050 (United Nations Department of Economic and Social Affairs, 2022), innovative solutions are imperative. The thesis proposes a paradigm shift towards urban mining, emphasizing the significant value embedded in discarded electronics—where a tonne of circuit boards can hold ten times more precious metals than traditional ore (Minnesota Center for Environmental Advocacy, 2022). The global distribution of off-shored e-waste has led to the emergence of informal settlements that depend on e-waste recovery to support livelihoods and income generation. These communities have become prime examples of urban mining, embracing circular economic strategies to find adaptive ways to repurpose e-waste. Accra, Ghana’s Old Fadama, home to one of the largest e-waste sites in the world, has become a vital economic hub for informal e-waste processing. With a population of over 100,000 dwellers, local and migrant workers have built resilient communities through innovative recycling practices, tech repairs, and DIY digital fabrication methods. However, they face imminent environmental risks, health hazards, and displacement threats.&#13;
&#13;
Focusing on Old Fadama, the thesis will address the narratives of urban mining communities and look toward a systematic sympoiesis between economic, environmental, and social realities. By doing so, the thesis seeks to answer how we can foster nurturing and circular relationships for informal settlements and develop regenerative ecosystems for urban mining in the city environment. Integrating field research, case study, and implementation, the thesis will: conduct key urban analysis for understanding e-waste sites and urban mining communities; identify technology interventions and policy recommendations that can improve local conditions; and utilize data-driven communication to advocate for new opportunities for urban systems tied to e-waste extraction through immersive multimedia as part of a public exhibition.&#13;
&#13;
Using a novel methodology, the thesis adopts the learnings from the economic, physical, and community-based interventions observed in informal e-waste recovery processes. The thesis combines quantitative data from satellite imagery and remote sensing with qualitative insights gathered through crowdsourced GIS mapping, films, interviews, and creative capacity-building workshops. These combined insights aim to enhance urban models, nurturing the innovation potential already present within urban mining communities. The thesis research will contribute to the previous work of MIT City Science Group’s “Power of Without” initiative, a comprehensive roadmap for understanding and collaborating with informal settlements and proposing non-Western decentralized infrastructure solutions. The thesis aims to provide practical insights for implementing innovations in urban mining communities by developing sustainable e-waste recovery strategies and supporting micro-industries in cities, which could serve as a model for similar contexts globally.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Blockchain Technology for Enhancing Genomic Data Management: A Multidisciplinary Framework for Privacy, Trust, Identity Protection, and Equity</title>
<link href="https://hdl.handle.net/1721.1/159129" rel="alternate"/>
<author>
<name>Niu, Yuner A.</name>
</author>
<id>https://hdl.handle.net/1721.1/159129</id>
<updated>2025-05-20T12:38:18Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Leveraging Blockchain Technology for Enhancing Genomic Data Management: A Multidisciplinary Framework for Privacy, Trust, Identity Protection, and Equity
Niu, Yuner A.
The effective adoption of blockchain technology in genomic data management is influenced not only by its technical advantages but also by external factors such as regulatory conditions and the demands of consumers and patients. This thesis explores the critical factors required for blockchain platforms to thrive in managing genomic data, focusing on how these systems can be structured to address the high-priority needs of various stakeholders, including patients, healthcare providers, regulators, and researchers. Through a comprehensive examination of privacy, security, regulatory compliance, and equity concerns, the research develops a multidisciplinary framework that balances technological innovation with real-world stakeholder expectations. By conducting an in-depth stakeholder analysis and analyzing existing blockchain platforms used for genomics, the thesis presents a roadmap for creating blockchain solutions that are both technologically viable and aligned with the complex social, legal, and ethical landscape of genomic data management. This framework aims to maximize value for all stakeholders while mitigating associated risks, positioning blockchain as a viable tool in the future of personalized medicine.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Causal Representation Learning for Predicting Genetic Perturbation Effects on Single Cells</title>
<link href="https://hdl.handle.net/1721.1/159128" rel="alternate"/>
<author>
<name>Liu, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/159128</id>
<updated>2025-05-20T12:38:17Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Causal Representation Learning for Predicting Genetic Perturbation Effects on Single Cells
Liu, Emily
Advances in sequencing technologies have significantly deepened our understanding of gene regulation in cells. Among these, Perturb-seq has emerged as a powerful technique, enabling high-resolution profiling of transcriptomic responses to genetic perturbations at the single-cell level. Such insights have profound implications for functional genomics and the identification of therapeutic targets. This thesis investigates the efficacy of mechanistic computational models for predicting the effects of previously unseen genetic perturbations on cellular expression profiles. While existing deep learning approaches excel at interpolating within observational data, they often struggle to extrapolate to novel perturbations. To address this limitation, this study introduces a hybrid framework that integrates a linear causal model, grounded in the gene regulatory network, with variational deep learning techniques.&#13;
&#13;
The proposed mechanistic model utilizes a learned gene regulatory network to represent perturbational effects as shift interventions that propagate through the network. This approach operates within a low-dimensional gene space, effectively capturing the essential information needed to reconstruct full transcriptomic profiles. By incorporating this mechanistic causal model into a variational autoencoder (VAE), the framework generates detailed and comprehensive transcriptomic responses while maintaining the capacity to handle noisy, large-scale single-cell data.&#13;
&#13;
Two deep variational architectures are explored within this framework, corresponding to different output distributions. The single cell variational inference (SCVI) architecture, employing a zero-inflated negative binomial output distribution, demonstrates challenges in learning perturbational data distributions. In contrast, a standard VAE architecture with a Gaussian output distribution on normalized gene expressions, when paired with the structural causal model, achieves superior performance compared to current state-of-the-art methods. This hybrid approach, termed the Single-Cell Causal Variational Autoencoder (SCCVAE), demonstrates robust capabilities in both interpolation and extrapolation.&#13;
&#13;
For observed perturbations, the SCCVAE framework reveals latent representations that identify functional perturbation modules and simulate single-gene knock-down experiments across varying penetrance levels. These findings highlight SCCVAE as a powerful tool for interpreting and predicting perturbational responses at the single-cell level, advancing the integration of causal and variational approaches in computational biology.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum Economic Advantage Calculator: An Extension of the Quantum Tortoise and Classical Hare Framework</title>
<link href="https://hdl.handle.net/1721.1/159127" rel="alternate"/>
<author>
<name>Mejia, Frederick</name>
</author>
<id>https://hdl.handle.net/1721.1/159127</id>
<updated>2025-05-20T12:38:16Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Quantum Economic Advantage Calculator: An Extension of the Quantum Tortoise and Classical Hare Framework
Mejia, Frederick
For some algorithmic problems, quantum computation has the potential to provide enormous speedups over classical computers. However, the drastic slowdowns associated with running error-free quantum hardware make achieving these theoretical advantages challenging. Researchers and industry leaders planning for the future would benefit from understanding when it will be both feasible and advantageous to switch to quantum computing platforms. This thesis builds on the framework by Choi, Moses, and Thompson (2023) to evaluate the feasibility and timeline for achieving Quantum Economic Advantage (QEA)—the point at which quantum hardware can outperform comparably-priced classical machines for specific computational tasks. This thesis substantially extends and deepens this framework and introduces a calculator to make these analyses accessible. The model incorporates parameters from quantum hardware vendors, such as physical-logical qubit ratios and overall connectivity, alongside the computational complexities of specific problems, to estimate the year of QEA. Most of the parameters in the tool are freely adjustable, allowing users to explore how varying assumptions about quantum improvement and technological advancement influence the projected timeline for QEA.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structure, Function, and Interaction in Protein Language Models</title>
<link href="https://hdl.handle.net/1721.1/159126" rel="alternate"/>
<author>
<name>Zheng, Jared</name>
</author>
<id>https://hdl.handle.net/1721.1/159126</id>
<updated>2025-05-20T12:38:15Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Structure, Function, and Interaction in Protein Language Models
Zheng, Jared
In recent years, transformer architectures have shown remarkable capabilities in learning meaningful representations from text and images. This approach has been extended to the realm of protein sequences through pretrained protein language models, which have excelled in various protein engineering tasks. In this thesis, we investigate a pre-trained protein language model’s ability to predict protein structure and the effects of mutations. For many advanced protein understanding tasks, such as predicting protein function and protein-protein interactions, fine-tuning of the model is essential. We explore methods to fine-tune the Evolutionary Scale Modeling (ESM2) model, a pretrained protein language model, for predicting protein functions structured as Gene Ontology terms and predicting protein-protein interactions. Notably, we develop a novel method of modeling the hierarchy constraint in GO term prediction that improves training convergence and test performance while making the model hierarchically consistent with GO. This research aims to enhance our understanding of protein language models in decoding complex biological information, thereby contributing to advancements in computational biology.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Healthcare Agents: Large Language Models in Health Prediction and Decision-Making</title>
<link href="https://hdl.handle.net/1721.1/159124" rel="alternate"/>
<author>
<name>Kim, Yubin</name>
</author>
<id>https://hdl.handle.net/1721.1/159124</id>
<updated>2025-05-20T12:38:13Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Healthcare Agents: Large Language Models in Health Prediction and Decision-Making
Kim, Yubin
Large Language Models (LLMs) are transforming healthcare, yet utilizing them for clinical applications presents significant challenges. In this thesis, we explore two critical aspects of healthcare AI: (1) leveraging LLMs for multimodal health prediction from wearable sensor data and (2) developing a collaborative AI framework for medical decision-making. We first introduce a Health-LLM framework that performs multimodal fusion of temporal physiological signals from wearable devices with contextual metadata to predict health outcomes. By implementing novel context enhancement strategies, our framework demonstrates significant improvements in prediction accuracy across multiple health domains compared to existing benchmarks. Furthermore, we present MDAgents, an adaptive framework that optimizes multi-agent LLM collaboration for complex medical reasoning tasks. MDAgents dynamically configures agent roles and interaction patterns based on task complexity, implementing a hierarchical consensus mechanism that emulates clinical team dynamics. Through comprehensive evaluation on medical diagnosis and reasoning tasks, MDAgents exhibits superior performance in multimodal medical reasoning compared to single-agent approaches. Our findings demonstrate that LLMs, when architected for multimodal integration and strategic collaboration, can serve as robust agents in healthcare systems, advancing both preventive medicine through continuous health monitoring and clinical decision support through distributed AI reasoning.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategizing against online learners in normal form repeated&#13;
games</title>
<link href="https://hdl.handle.net/1721.1/159121" rel="alternate"/>
<author>
<name>Assos, Angelos</name>
</author>
<id>https://hdl.handle.net/1721.1/159121</id>
<updated>2025-05-20T12:38:11Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Strategizing against online learners in normal form repeated&#13;
games
Assos, Angelos
With the advent of machine learning and AI, learning algorithms are becoming more and more prevalent in online learning settings, where sequential decision-making is required. In such settings, the decisions of each agent can affect the utilities (or losses) of the other agents, as well as influence the decisions made by other agents later on in the interaction. Therefore, if an agent is good at anticipating the behavior of the other agents, in particular how they will make decisions in each round as a function of their experience thus far, he could try to judiciously make his own decisions over the rounds of the interaction so as to influence the other agents to behave in a way that ultimately benefits his own utility. In this thesis, we study repeated two-player games involving two agents: a learner, which employs an online learning algorithm to choose his strategy in each round; and an optimizer, which knows the learner’s utility function, parameters, and online learning algorithm. The optimizer wants to plan ahead to maximize his own utility while taking into account the learner’s behavior. We study this setting in zero-sum and general-sum games. In zero-sum games, we provide algorithms for the optimizer that can efficiently exploit a learner that employs a specific online learning algorithm in discrete and continuous-time dynamics. Specifically, the learner employs the Multiplicative Weights Update (MWU) algorithm for the discrete-time games, and the Replicator Dynamics in the continuous-time games. In general-sum games, we provide a negative result. Our negative result shows that, unless P=NP, there is no Fully Polynomial Time Approximation Scheme (FPTAS) for maximizing the utility of an optimizer against a learner that best responds to the history in each round. We additionally provide exponential-time algorithms that efficiently strategize against a learner that uses MWU, as well as a new way of thinking about strategizing against online learners via calculus of variations.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Semantically Grounded, Long Horizon Planning&#13;
and Execution for Autonomous Agents</title>
<link href="https://hdl.handle.net/1721.1/159120" rel="alternate"/>
<author>
<name>Covarrubias, Lucian</name>
</author>
<id>https://hdl.handle.net/1721.1/159120</id>
<updated>2025-05-20T12:38:11Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Enabling Semantically Grounded, Long Horizon Planning&#13;
and Execution for Autonomous Agents
Covarrubias, Lucian
Robots have been playing an ever-increasing role in complex environments, often in coordination with teams of systems or humans. Autonomous systems of the future will need to be tightly grounded in the real world, drawing information directly from their environment to develop an understanding of the world. They will need to maintain a semantic understanding of their environment, including the kinds of objects they observe and their relationships to each other. At the same time, they must be able to reason over diverse constraints related to their tasks, such as time limits and resource usage. While there are existing approaches that enable robots to execute tasks with semantic goals, such as finding a certain type of object in a room, they often fail to consider the multitude of task-specific constraints that are vital to robust performance. On the other hand, planners that consider task-specific constraints require a human to provide all information about the environment manually. These systems are too cumbersome to model complex tasks, requiring hours of manual effort that is prone to errors. This thesis presents an architecture for semantically grounded planning that leverages the strengths of constraint-based planners while automating the environmental modeling step with an advanced semantic perception engine. By automating environmental modeling, we are able to create a system that executes complex, semantically grounded tasks, such as navigating to certain objects within a certain room, without the major user input typically required of these systems.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transformers as Empirical Bayes Estimators: The Poisson Model</title>
<link href="https://hdl.handle.net/1721.1/159119" rel="alternate"/>
<author>
<name>Jabbour, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/159119</id>
<updated>2025-05-20T12:38:10Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Transformers as Empirical Bayes Estimators The Poisson Model
Jabbour, Mark
We study the ability of transformers to perform In-Context Learning (ICL) in the setting of Empirical Bayes for the Poisson model. On the theoretical side, we demonstrate the expressibility of transformers by formulating a way to approximate the Robbins estimator, the first empirical Bayes estimator for the Poisson model. On the empirical side, we show that transformers pre-trained on synthetic data can generalize to unseen priors and sequence lengths, outperforming existing methods like Robbins, NPMLE, and ERM monotone in efficiency and accuracy. By studying the internal behavior of the representations of the intermediate layers of these transformers, we found that the representation converges quickly and smoothly over the layers. We also demonstrate that it is unlikely that transformers are implementing the Robbins or NPMLE estimators in context.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lifting 2D Vision Models into Structured Scene Representations</title>
<link href="https://hdl.handle.net/1721.1/159118" rel="alternate"/>
<author>
<name>Tang, George</name>
</author>
<id>https://hdl.handle.net/1721.1/159118</id>
<updated>2025-05-20T12:38:09Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Lifting 2D Vision Models into Structured Scene Representations
Tang, George
Intelligent agents can leverage structured scene representations capable of capturing object compositionality, affordances, and semantics as a world emulator. However, 3D scene data is limited, rendering supervised and self-supervised methods ineffective. Recent advances in 2D foundation models exhibit remarkable performance and generalization. Concurrently, several works have demonstrated lifting feature maps produced by these models into a 3D feature representation. This thesis further explores how lifting can be effectively employed to construct pixel-level fidelity structured scene representations.&#13;
&#13;
Learned scene representations such as NeRF and Gaussian Splatting do not support additional functionality besides novel view rendering. The world is compositional: a scene can be described in terms of objects. Correspondingly, we present a lifting solution for efficient open-set 3D instance segmentation of learned scene representations. Compared to previous approaches, our solution is more than an order of magnitude faster and can handle scenes with orders of magnitude more instances.&#13;
&#13;
Toward identifying affordances, we tackle the problem of zero-shot mesh part segmentation. Learning-based mesh segmentation does not generalize due to a lack of diverse mesh segmentation datasets, while traditional shape analysis methods are overfitted to previous benchmarks. We present a lifting solution for mesh part segmentation that overcomes these limitations, showing comparable performance to top-performing shape-analysis methods on traditional benchmarks while exhibiting much better generalization on a novel mesh dataset curated from an image-to-3D model.&#13;
&#13;
Beyond feature fields, lifting can be used for a variety of applications, including scene understanding and editing. However, current lifting formulations are inefficient and often exhibit additional unintended modifications. To address these deficiencies, we generalize lifting to semantic lifting, which incorporates per-view masks indicating relevant areas. These masks are determined by querying corresponding per-view feature maps derived from feature fields. However, it is impractical to store per-view feature maps, and the scene representations can be expensive to store and query. To enable lightweight, on-demand retrieval of pixel-aligned relevance masks, we introduce a Vector Quantized Feature Field. We demonstrate the effectiveness of semantic lifting with our method on complex indoor and outdoor scenes from the LERF dataset.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward Affordance-Based Generation for 3D Generative AI</title>
<link href="https://hdl.handle.net/1721.1/159117" rel="alternate"/>
<author>
<name>Wang, Sean</name>
</author>
<id>https://hdl.handle.net/1721.1/159117</id>
<updated>2025-05-20T12:38:07Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Toward Affordance-Based Generation for 3D Generative AI
Wang, Sean
Recent advances in 3D content creation with generative AI have made it easier to generate 3D models using text and images as input. However, translating these digital designs into usable objects in the physical world is still an open challenge. Since these 3D models are generated to be aesthetically similar to their inputs, the resulting models tend to have the visual features the user desires but often lack the functionality required for their use cases. This thesis proposes a novel approach to generative AI in 3D modeling, shifting the focus from replicating specific objects to generating affordances. We trained models that allow users to create point clouds that satisfy affordances, physical properties that describe how an object should behave in the real world. By ensuring that the generated objects have the expected affordances, we explore how existing tools can be augmented to generate 3D objects whose functionality is consistent with their appearances.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Fine-Tuning Techniques for Removing&#13;
Tamper-Resistant Safeguards for Open-Weight LLMs</title>
<link href="https://hdl.handle.net/1721.1/159116" rel="alternate"/>
<author>
<name>Zhang, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/159116</id>
<updated>2025-05-20T12:38:07Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Exploring Fine-Tuning Techniques for Removing&#13;
Tamper-Resistant Safeguards for Open-Weight LLMs
Zhang, Sarah
Open-source models present significant opportunities and risks, especially in dual-use scenarios where they can be repurposed for malicious tasks via adversarial fine-tuning. In this paper, we evaluate the effectiveness of Tampering Attack Resistance (TAR), a safeguard designed to protect against such adversarial attacks, by exploring its resilience to full-parameter and parameter-efficient fine-tuning. Our experiments reveal that while TAR enhances tamper resistance compared to models without safeguards, it remains susceptible to variability. Specifically, we observe inconsistencies where the same adversarial attack can succeed under some initializations and fail under others. This is a critical security risk as even a single instance of failure can lead to models being exploited for harmful purposes. These findings highlight the limitations of current tamper-resistant safeguards and emphasize the need for more robust safeguards to ensure the safe and ethical deployment of open-source models.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The SpaseCroissant Oven: Automatic Metadata Generation For Open-Source Space Weather Datasets</title>
<link href="https://hdl.handle.net/1721.1/159115" rel="alternate"/>
<author>
<name>Chen, Edenna H.</name>
</author>
<id>https://hdl.handle.net/1721.1/159115</id>
<updated>2025-05-20T12:38:06Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">The SpaseCroissant Oven: Automatic Metadata Generation For Open-Source Space Weather Datasets
Chen, Edenna H.
The rise of machine learning (ML) algorithms has led to a parallel rise in ML-ready datasets. A novel metadata schema released by MLCommons called Croissant, which is specifically designed for ML-ready datasets, aims to increase data accessibility, user understanding of data, and accuracy of claims based on data. However, current methods to automatically generate Croissant metadata present difficulties, such as requiring manual entries. This can be especially difficult when attempting to preserve information about large ML-ready datasets, which are often derived from large scientific repositories belonging to organizations such as the National Aeronautics and Space Administration (NASA). These major scientific repositories provide their own metadata standards, such as NASA’s Space Physics Archive Search and Extract (SPASE) schema, but context from this metadata can often be lost during data processing. This thesis presents a novel, improved approach to Croissant metadata generation that combines parsing logic with Large Language Model (LLM) inference, along with recommendations for future Croissant standards and SPASE-to-Croissant schema metadata conversion, with the aim of retaining this lost context.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applied Plankton Image Classification for Imaging FlowCytobot Data</title>
<link href="https://hdl.handle.net/1721.1/159114" rel="alternate"/>
<author>
<name>Duckworth, Barbara R.</name>
</author>
<id>https://hdl.handle.net/1721.1/159114</id>
<updated>2025-10-20T03:17:11Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Applied Plankton Image Classification for Imaging FlowCytobot Data
Duckworth, Barbara R.
As the ability to gather vast quantities of data from oceanographic bioimaging sensors increases, so too does the need to process, analyze, and store that data in a consistent, standard way that enables replicability and accessibility for future studies. The Imaging FlowCytobot (IFCB), an automated submersible flow cytometer, produces high-resolution images of plankton at rates up to 10 Hz for months or years, resulting in billions of images. This project compares various methods to categorize incoming images of plankton gathered by the IFCB: Convolutional Neural Nets (CNNs), Vision Transformers (ViTs), and self-supervised learning with masked autoencoders (MAE). The benefits and downsides of each model are analyzed and discussed for future IFCB operators to process their data using the methods that best align with their research questions, along with step-by-step explanations of the pros and cons of each method depending on the use case.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Computational Tsirelson's Theorem for All Compiled Nonlocal Games</title>
<link href="https://hdl.handle.net/1721.1/159113" rel="alternate"/>
<author>
<name>Falor, Chirag</name>
</author>
<id>https://hdl.handle.net/1721.1/159113</id>
<updated>2025-05-20T12:38:05Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">A Computational Tsirelson's Theorem for All Compiled Nonlocal Games
Falor, Chirag
Nonlocal games, defined as cooperative tasks between spatially separated players, have been a foundational tool in the study of quantum advantage and have been useful in classically verifying quantum computations. To address the challenge posed by the spatial separation assumption, Kalai et al. (STOC' 23) introduced a compilation procedure that compiles any nonlocal game into an interactive game between a classical verifier and a computationally bounded quantum prover. This compilation preserves classical soundness and quantum completeness, though quantum soundness has been established only in the asymptotic limit of the security parameter or for specific classes of games. In this work, we advance towards a concrete framework to bound the quantum value of compiled nonlocal games. Building on the notion of nice sum-of-squares certificates, introduced by Natarajan and Zhang (FOCS' 23) to bound the value of the compiled CHSH game, we extend the niceness framework and construct a hierarchy of semidefinite programs that searches exclusively over nice certificates. We show that this hierarchy converges to the optimal quantum value of the game. Additionally, we present a transformation to make any degree-1 sum-of-squares certificate nice. This approach provides a systematic method to reproduce known bounds for special classes of games and showcases the general applicability of the framework to low-degree certificates. Source code: https://github.com/chiragfalor/Nice-SoS-SDP
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Instructify: Demystifying Metadata to Visual Instruction Tuning Data Conversion Supplementary Materials</title>
<link href="https://hdl.handle.net/1721.1/159112" rel="alternate"/>
<author>
<name>Hansen, Jacob A.</name>
</author>
<id>https://hdl.handle.net/1721.1/159112</id>
<updated>2025-10-20T03:16:47Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Instructify: Demystifying Metadata to Visual Instruction Tuning Data Conversion Supplementary Materials
Hansen, Jacob A.
Visual Instruction Tuning (VisIT) data, commonly available as human-assistant conversations with images interleaved in the human turns, are currently the most widespread vehicle for aligning strong LLMs to understand visual inputs, converting them to strong LMMs. While many such VisIT datasets are available, most of them are constructed via ad hoc techniques, separately proposed by different groups, commonly poorly documented, without available (reproducible) code, and employing paid closed-source model APIs like GPT-4, Gemini, or Claude to convert image metadata (labels) to VisIT instructions. This incurs significant cost and makes it difficult to scale, improve quality, or produce VisIT data for new datasets. In this work, we address these challenges and propose an open and unified recipe and approach, Instructify, for converting available metadata to VisIT instructions using open LLMs. Our multi-stage Instructify features an efficient framework for metadata grouping, quality control, data and prompt organization, and conversation sampling. We show that our approach can reproduce or improve the data quality of the available VisIT datasets when applied to the same image data and metadata sources, improving GPT-4 generated VisIT instructions by ∼3% on average and up to 21% on individual benchmarks using open models, such as Gemma 2 27B and LLaMa 3.1 70B. We further show that our approach enables effective performance scaling (in terms of resulting LMM performance on a large variety of benchmarks) of the produced VisIT data both in terms of quantity and quality. In addition, we explore the impact of multiple factors, including conversation format, base model selection, and resampling strategies.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Single-Cell ATAC-Seq for Genomic Language&#13;
Models and Multimodal Foundation Models</title>
<link href="https://hdl.handle.net/1721.1/159110" rel="alternate"/>
<author>
<name>Kim, Dong Young</name>
</author>
<id>https://hdl.handle.net/1721.1/159110</id>
<updated>2025-10-20T03:16:43Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Leveraging Single-Cell ATAC-Seq for Genomic Language&#13;
Models and Multimodal Foundation Models
Kim, Dong Young
Single-cell Assay for Transposase-Accessible Chromatin using sequencing (scATAC-seq) has emerged as a powerful tool for profiling chromatin accessibility at single-cell resolution. By capturing epigenomic landscapes, scATAC-seq provides critical insights into the regulatory elements that govern gene expression. However, the sparsity of scATAC-seq data, resulting from its low sequencing depth relative to the genome’s potential complexity, poses significant challenges for effective and accurate modeling. To advance the utility of scATAC-seq in modern biology, we explore its integration into deep learning frameworks through two innovative applications. First, we demonstrate how incorporating scATAC data enhances the performance of existing genomic language models by providing complementary context about chromatin accessibility. Specifically, we introduce scATAC to improve SegmentNT, a DNA segmentation model that leverages the Nucleotide Transformer (NT) to predict 14 types of genomic and regulatory elements from DNA sequences up to 30kb at single-nucleotide resolution. Second, we introduce a novel multimodal foundation model that extends existing scRNA-seq foundation models by integrating scATAC-seq data. This model captures cross-modal relationships between gene expression and chromatin accessibility, establishing a unified framework that can be fine-tuned for diverse downstream tasks, including cell type classification and cross-modal imputation. Our work highlights the potential of incorporating scATAC-seq data into existing genomics deep learning strategies, providing a framework for integrating regulatory DNA analysis more seamlessly into genomic modeling.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsupervised Time Series Anomaly Detection Using Time Series Foundational Models</title>
<link href="https://hdl.handle.net/1721.1/159109" rel="alternate"/>
<author>
<name>Nguyen, Linh K.</name>
</author>
<id>https://hdl.handle.net/1721.1/159109</id>
<updated>2025-10-20T03:16:14Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Unsupervised Time Series Anomaly Detection Using Time Series Foundational Models
Nguyen, Linh K.
The rapid generation of time series data across a wide array of domains—such as finance, healthcare, and industrial systems—has made anomaly detection a critical task for identifying irregular patterns that could signal significant events like fraud, system failures, or health crises. Traditional approaches to time series anomaly detection, including statistical models like ARIMA and deep learning methods, have proven effective but often require an extensive training phase, which can be both data- and time-consuming. In recent years, the emergence of foundational models, including large language models (LLMs) and specialized time series models, has opened up new possibilities for anomaly detection. These models, pre-trained on vast and diverse datasets, offer the potential to perform tasks with minimal task-specific training. This thesis investigates the feasibility of leveraging these foundational models for time series anomaly detection, with the aim of determining their effectiveness in detecting anomalies without the traditional training requirements. We also aim to investigate whether foundational models pretrained specifically on time series data yield better results compared to large language models (LLMs) that were not pretrained for time series tasks.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>First-Person Teleoperation of a Bimanual Robotic System</title>
<link href="https://hdl.handle.net/1721.1/159108" rel="alternate"/>
<author>
<name>Thakur, Nandini</name>
</author>
<id>https://hdl.handle.net/1721.1/159108</id>
<updated>2025-10-20T03:16:01Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">First-Person Teleoperation of a Bimanual Robotic System
Thakur, Nandini
First-person teleoperation of robots is a large field of research that could yield many benefits for automation. Teleoperation is a popular method to collect demonstrations for imitation learning that are easily learned by the robot, and thus it’s important to create teleoperation systems that are intuitive and enable human-like perception of a scene. Adding a first-person component to basic teleoperation systems is key to improving operators’ visual perception and making teleoperation possible for extended periods of time. Existing teleoperation systems do not integrate elements that provide the operator with a good perception of the task space, such as a first-person VR view and the ability to leverage the neck to search around the space. They rely on techniques such as a third-person view of the space, or provide a first-person view but without the ability to move the neck to look around. This thesis proposes a VR-based teleoperation system with an actuated 5-DoF neck for enabling human-like perception and improving the ability to perform high-quality demonstrations for use in imitation learning.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scalable Embedded Tiny Machine Learning (SETML): A General&#13;
Framework for Embedded Distributed Inference</title>
<link href="https://hdl.handle.net/1721.1/159107" rel="alternate"/>
<author>
<name>Vidal, Justice</name>
</author>
<id>https://hdl.handle.net/1721.1/159107</id>
<updated>2025-10-20T03:16:24Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Scalable Embedded Tiny Machine Learning (SETML): A General&#13;
Framework for Embedded Distributed Inference
Vidal, Justice
The growth of machine learning applications has increased the necessity of lightweight, energy-efficient solutions for resource-constrained devices such as the STM32C011F6 microcontroller. However, such devices struggle with supporting larger models even after miniaturization techniques such as quantization and pruning. To facilitate machine learning inference on such devices, this work introduces Scalable Embedded Tiny Machine Learning (SETML), a general framework for distributed machine learning inference on microcontrollers. Furthermore, the framework is designed to be compatible with sensor-based applications that can take advantage of small hardware, such as gesture recognition, by testing binary size constraints with an accelerometer and its supporting library. This work evaluates the latency, power consumption, and cost trade-offs of using multiple small and efficient devices versus a larger device. The STM32C011F6 microcontroller is used as the primary hardware in the tested device network, while evaluation of the system is done in comparison with a device using a similar core processing element, the Seeeduino XIAO SAMD21.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of the energy transfer network in upconverting nanoparticles</title>
<link href="https://hdl.handle.net/1721.1/159106" rel="alternate"/>
<author>
<name>Zheng, Yuxuan</name>
</author>
<id>https://hdl.handle.net/1721.1/159106</id>
<updated>2025-10-20T03:15:27Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Investigation of the energy transfer network in upconverting nanoparticles
Zheng, Yuxuan
Upconverting nanoparticles (UCNPs) have emerged as promising luminescent materials for a wide range of applications, including bioimaging, drug delivery, and photovoltaics. The intricate network of energy transfer processes within UCNPs enables their unique ability to convert low-energy infrared (IR) radiation into higher-energy visible light through photon upconversion, but it also presents significant challenges for accurate modeling. Despite their broad applications, theoretical models of UCNPs remain incomplete, and current models fail to accurately reproduce all experimental results. This thesis presents a comprehensive comparison of prevalent modeling approaches with the aim of developing improved models that more faithfully reproduce experimental observations. Using Judd-Ofelt theory and constants sourced from the literature, we calculated essential transition rate parameters, including electric dipole (ED), magnetic dipole (MD), multiphonon relaxation (MPR), and energy transfer (ET) rates. We implemented both Monte Carlo models and Ordinary Differential Equation (ODE) models. Using the calculated rate parameters, we simulated the energy transfer pathways in Yb³⁺-Er³⁺ and Yb³⁺-Tm³⁺ UCNPs. Simulation results from all models were compared with experimental data to evaluate their effectiveness in capturing key luminescent properties such as population evolution, lifetime, saturation curves, and spectral purity.
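As an illustration of the ODE approach, a toy two-population rate-equation model (the full Yb³⁺-Er³⁺ network has many more levels, with rate constants derived from the Judd-Ofelt calculations; the rates below are illustrative only):

    # Hedged sketch: donor (Yb) and acceptor (Er) excited-state populations
    # under continuous pumping and donor-to-acceptor energy transfer.
    from scipy.integrate import solve_ivp

    W_pump, W_et, k_decay = 1.0e3, 5.0e2, 1.0e4  # illustrative rates (1/s)

    def rates(t, n):
        n_yb, n_er = n
        pump = W_pump * (1.0 - n_yb)        # ground-state absorption by Yb
        et = W_et * n_yb * (1.0 - n_er)     # Yb -> Er energy transfer
        return [pump - et - k_decay * n_yb, et - k_decay * n_er]

    sol = solve_ivp(rates, (0.0, 5e-3), [0.0, 0.0], max_step=1e-5)
    # sol.y gives the population evolution to compare against measured
    # lifetimes and saturation behavior.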
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Blood-Based Laboratory Diagnostics for Alzheimer’s Disease: A Systems Approach</title>
<link href="https://hdl.handle.net/1721.1/159103" rel="alternate"/>
<author>
<name>Peralta Walker, Stephanie Christine</name>
</author>
<id>https://hdl.handle.net/1721.1/159103</id>
<updated>2025-10-12T03:17:23Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Assessing Blood-Based Laboratory Diagnostics for Alzheimers’s Disease: A Systems Approach
Peralta Walker, Stephanie Christine
This thesis adopts a systems approach to analyze the complex network of stakeholders involved in adopting blood-based laboratory screening tests for Alzheimer’s disease (AD). Traditional diagnostic methods, including cerebrospinal fluid (CSF) testing and positron emission tomography (PET) brain imaging, are invasive, costly, and inaccessible to many. Blood-based tests offer a less invasive and more cost-effective alternative, yet they remain underutilized in clinical practice. By conducting a literature review, stakeholder interviews, and a Kano analysis, the thesis identifies and evaluates the key stakeholder needs to support the widespread adoption of these tests, such as the need for demonstrated clinical performance of these tests, reimbursement, broader education of patients and health care professionals, and safe, effective medicines to treat AD. The research highlights two emerging tests that have published studies demonstrating clinical validation, a key parameter of clinical performance. A stakeholder tension analysis is included with proposed tension resolutions using stakeholder saliency to guide prioritization. Addressing these stakeholder needs could facilitate broader implementation, improve early diagnosis, and support emerging therapeutic interventions for AD, thus reshaping the diagnostic landscape for this increasingly prevalent disease.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Coast Guard Infrastructure Management: A Multi-Criteria Framework for Prioritizing Maintenance Projects</title>
<link href="https://hdl.handle.net/1721.1/159102" rel="alternate"/>
<author>
<name>Ballard, Zachary N.</name>
</author>
<id>https://hdl.handle.net/1721.1/159102</id>
<updated>2025-10-12T03:17:20Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Enhancing Coast Guard Infrastructure Management: A Multi-Criteria Framework for Prioritizing Maintenance Projects
Ballard, Zachary N.
The United States Coast Guard is currently transforming its decision-making process for prioritizing shore infrastructure maintenance and repair projects. Current decision-making subjectivity appears to be generating inadequate project prioritizations. Stakes are high for an aging infrastructure portfolio in harsh coastal conditions, with increased national reliance on the Coast Guard in a fiscally constrained budgetary environment. Data availability, quality, and fidelity continue to increase, supporting the rationale for more robust and data-informed decision-making frameworks. &#13;
&#13;
The research begins with examining Coastal and Shore Operations (CSO) funding history, along with a thorough description of the current Centralized Planned Obligation Prioritization (C-POP) process. The complex, sociotechnical nature of the problem is highlighted by identifying all involved stakeholders and categorizing them through the leading view of stakeholder theory and salience. A detailed review of the governing asset management literature is conducted, gradually narrowing from a broad, international, and asset-type neutral perspective to more tailored infrastructure cross-asset prioritization material. Requisite framework data substance, collection, and analyses are described, and recommendations for data processing improvements are made. &#13;
&#13;
Two leading prioritization models are examined: the Importance and Urgency Quadrant Model and the Value Focused Multi-Criteria Decision Model. Their respective data visualizations are generated and analyzed. Using the multi-criteria analysis rooted in multi-attribute utility theory, four portfolios of measurably increasing value are constructed and compared with a baseline portfolio reflecting actual project selections in December 2023. These portfolio iterations include a linear programming solution to the Knapsack Problem of selecting projects that maximize overall portfolio utility within a budget limit while incorporating some of the more social and qualitative system properties; a sketch of this selection step appears below. &#13;
&#13;
A traceable, adaptable, defensible, and objective data-informed multi-criteria framework is proposed, which aims to improve the long-term effectiveness of the overall Coast Guard shore infrastructure portfolio.
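A minimal sketch of the knapsack selection step referenced above, written as a plain 0/1 dynamic program with illustrative costs and utilities (the thesis itself formulates it as a linear program):

    def select_projects(costs, utilities, budget):
        n = len(costs)
        best = [[0.0] * (budget + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            c, u = costs[i - 1], utilities[i - 1]
            for b in range(budget + 1):
                best[i][b] = best[i - 1][b]          # skip project i
                if b >= c:                           # or fund it
                    best[i][b] = max(best[i][b], best[i - 1][b - c] + u)
        chosen, b = [], budget
        for i in range(n, 0, -1):                    # trace back the choices
            if best[i][b] != best[i - 1][b]:
                chosen.append(i - 1)
                b -= costs[i - 1]
        return best[n][budget], sorted(chosen)

    # Toy run: four candidate projects, costs in budget units, utilities
    # from the multi-attribute value model, budget of 9 units.
    value, picks = select_projects([4, 3, 2, 5], [7.0, 5.5, 4.0, 8.0], 9)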
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems-Theoretic Approach to Organizational Design and Analysis</title>
<link href="https://hdl.handle.net/1721.1/159101" rel="alternate"/>
<author>
<name>Gutierrez, Lauren E.</name>
</author>
<id>https://hdl.handle.net/1721.1/159101</id>
<updated>2025-10-12T03:17:18Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">A Systems-Theoretic Approach to Organizational Design and Analysis
Gutierrez, Lauren E.
A significant challenge for large organizations lies in organizational design, particularly for public sector bureaucracies and the largest of industry’s private firms. Organizations tend to turn to organizational design improvements when facing effectiveness and efficiency issues. Unfortunately, these large organizations struggle with organizational design because of their sheer size and complexity, which results in a fragmented and oftentimes faulty approach to improvement. Organizations, at their core, are a special type of system: a set of components that operate together to achieve some common purpose. Organizations are purely social systems in that their elements are not technical or engineered. &#13;
&#13;
Systems Theory provides a lens through which these types of social systems can be studied. Just like in engineered systems, an organization's emergent behavior is determined by its internal elements' complex interactions. Traditional organizational design and analysis methods focus on optimizing these internal elements in the hopes of re-integrating optimized elements in pursuit of organizational-level optimal behavior. Just like in traditional systems engineering, component-level optimization does not yield system-level optimal behavior. &#13;
&#13;
This thesis codifies a systems-theoretic approach to organizational design and analysis using the language of Systems Theory and the semantics of Systems-Theoretic Accident Model and Processes. By extending traditional Systems-Theoretic Process Analysis (STPA), a tool for hazard analysis used primarily for engineered systems, this work refines STPA’s concepts and terminology to be more accessible for analyzing social systems. Building off this extension, this thesis leverages a contemporary Department of Defense reorganization effort as a case study, illustrating Systems-Theoretic Organizational Design and Analysis (STAODA) as a tool to assess organizational design options.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Deep Learning Systems for Visual Perception on&#13;
the Edge</title>
<link href="https://hdl.handle.net/1721.1/159099" rel="alternate"/>
<author>
<name>Yang, Shang</name>
</author>
<id>https://hdl.handle.net/1721.1/159099</id>
<updated>2025-10-12T03:17:06Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Efficient Deep Learning Systems for Visual Perception on&#13;
the Edge
Yang, Shang
Deep learning for visual perception on edge devices has become increasingly critical, driven by emerging applications in autonomous driving and AR/VR. Typically, sparse convolution on 3D point clouds and Visual Language Models (VLMs) for image processing are two important methods for visual understanding and reasoning. However, the limited compute resources and memory on edge devices pose significant challenges, necessitating specialized system support for deep learning models. Specifically, the efficiency challenges for edge visual perception are twofold: First, the sparsity and inherent irregularity of point cloud data introduce substantial complexity for parallel processing. Second, the colossal model sizes and computational demands of LLMs and VLMs render edge deployment particularly challenging. In this thesis, we aim to address the efficiency issues of on-device deep learning via system-algorithm co-design. We first introduce TorchSparse++, a high-performance inference engine for sparse convolution on GPUs. Unlike existing sparse convolution systems, TorchSparse++ balances efficiency with implementation simplicity, achieving the best performance across different application scenarios. Specifically, we first create a highly efficient Sparse Kernel Generator that generates performant sparse convolution kernels at less than one-tenth of the engineering cost of the current state-of-the-art system. On top of this, we design the Sparse Autotuner, which extends the design space of existing sparse convolution libraries and searches for the best dataflow configurations for training and inference workloads. Consequently, TorchSparse++ achieves 2.9×, 3.3×, 2.2× and 1.7× measured end-to-end speedup on an NVIDIA A100 GPU over state-of-the-art MinkowskiEngine, SpConv 1.2, TorchSparse and SpConv v2 in inference; and is 1.2-1.3× faster than SpConv v2 in mixed precision training across seven representative autonomous driving benchmarks. It also seamlessly supports graph convolutions, achieving 2.6-7.6× faster inference speed compared with state-of-the-art graph deep learning libraries. Furthermore, to democratize the power of large foundation models in edge AI, we propose AWQ and TinyChat, a hardware-friendly full-stack solution for efficient on-device LLM and VLM deployment. AWQ is a novel quantization method based on the insight that not all weights in an LLM are equally important: protecting only the 1% most salient weights can greatly reduce quantization error. Specifically, AWQ employs an equivalent transformation and scales up the salient weight channels to reduce the weight quantization error, with the scale determined by collecting activation statistics offline. Alongside AWQ, we further introduce TinyChat, an efficient and flexible inference framework tailored for 4-bit on-device LLMs/VLMs. With on-the-fly dequantization, extensive kernel fusion, and platform-aware weight packing, TinyChat offers 2.7-3.7× speedup over the Huggingface FP16 implementation on both desktop and mobile GPUs. It also enables the deployment of the 70B Llama-2 model on mobile GPUs. Together, these techniques significantly reduce the computational and memory costs for deploying deep learning models on edge devices, increasing the accessibility of deep learning for practical applications. We hope that this thesis can inspire future research on efficient edge AI across diverse modalities.
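A minimal sketch of the AWQ scaling idea described above, with numpy stand-ins for the real kernels (group sizes, clipping search, and the fused TinyChat kernels are omitted):

    # Scale up activation-salient input channels before 4-bit rounding and
    # fold the inverse scale into the activations, an equivalent transform.
    import numpy as np

    def quantize_4bit(w):
        scale = np.abs(w).max() / 7.0            # symmetric fake-quantization
        return np.clip(np.round(w / scale), -8, 7) * scale

    def awq_layer(W, x_calib, alpha=0.5):
        act_stat = np.abs(x_calib).mean(axis=0)  # offline activation statistics
        s = act_stat ** alpha                    # per-input-channel scales
        s = np.maximum(s / s.mean(), 1e-4)
        W_q = quantize_4bit(W * s[None, :])      # salient channels protected
        return lambda x: (x / s) @ W_q.T         # equivalent forward pass

    rng = np.random.default_rng(0)
    W, X = rng.standard_normal((8, 16)), rng.standard_normal((64, 16))
    err = np.abs(awq_layer(W, X)(X) - X @ W.T).mean()   # quantization error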
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Diagnosing Supply Chain Threats to Defense Innovation</title>
<link href="https://hdl.handle.net/1721.1/159098" rel="alternate"/>
<author>
<name>Schneider, Donald E.</name>
</author>
<id>https://hdl.handle.net/1721.1/159098</id>
<updated>2025-10-12T03:16:53Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Diagnosing Supply Chain Threats to Defense Innovation
Schneider, Donald E.
As the U.S. Department of Defense (DoD) shifts focus to an era of global power competition, the demand for rapid innovation and disruptive technologies has grown significantly. Prototyping remains a vital tool for advancing technological innovation, enabling early learning and risk reduction in developing complex systems. However, persistent supply chain challenges threaten the success of defense prototyping projects, causing schedule delays and diminished effectiveness. &#13;
This research identifies the underlying causes of supply chain disruptions specific to Federal Acquisition Regulation (FAR)-governed prototyping efforts, offering a socio-technical systems analysis that accounts for stakeholder relationships, market dynamics, and regulatory frameworks. Through extensive data collection, including stakeholder interviews across agencies, organizations, and supply chain roles, 181 issues were identified and analyzed, revealing over 500 contributing factors. The disciplined analysis of these factors identified three systemic root causes: (1) the misapplication of production management strategies that focus on efficiencies at scale and low tolerance for risk; (2) pooled supply chain management functions, which marginalize prototyping’s unique demands and create inefficiencies; and (3) regulatory and organizational barriers to entry that deter non-traditional suppliers, hindering innovation.&#13;
To address these systemic challenges, the thesis recommends restructuring organizations to better align with the unique demands and risks of prototyping while simultaneously creating pathways to reduce barriers for new suppliers. Resolving these issues will require a coordinated effort across the prototyping ecosystem. By addressing these root causes, the DoD can improve the efficiency and effectiveness of prototyping programs, ultimately sustaining U.S. technological superiority in an increasingly competitive global environment.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A framework for determining remote sensing capabilities for ecosystem services valuation</title>
<link href="https://hdl.handle.net/1721.1/159097" rel="alternate"/>
<author>
<name>Sampath, Aparajithan</name>
</author>
<id>https://hdl.handle.net/1721.1/159097</id>
<updated>2025-10-12T03:16:46Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">A framework for determining remote sensing capabilities for ecosystem services valuation
Sampath, Aparajithan
Nature provides vital services—clean water, air purification, and climate regulation—to human societies thanks to the "natural capital" like forests and lakes on our planet. Accurately measuring and valuing these ecosystem services is crucial for informed economic and development decisions. Remote sensing (RS) technology offers a powerful way to monitor natural capital (e.g., mapping forest cover, assessing water quality). However, current data lack the accuracy and precision needed for robustly monitoring the value of these services. This deficiency has impeded the use of natural capital assessment data in economic decision-making. This research partly addresses this challenge by developing a new framework to investigate the necessary sensor characteristics (spectral, radiometric, temporal, spatial) for effectively monitoring natural capital and quantifying ecosystem services. The framework first identifies the different types of services provided by an ecosystem, then uses a physics-based approach to identify crucial physical parameters and determines the necessary measurements that need to be made from a sensor for their quantification. The sources of uncertainty impacting quantification and value estimation are also analyzed in detail. The approach is integrated to formulate a system utility function that is used to compare performance of existing and proposed RS systems, and the overall results are subsequently used in proposing required capabilities for future remote sensing systems for natural capital monitoring. The framework is demonstrated on a case study focused on the flood attenuation function (service) provided by wetlands. Water budget models are utilized to identify essential parameters for monitoring water storage by wetlands. Using a study area encompassing the Fall Creek Lake reservoir (Oregon, USA), water storage capacity is measured and monitored by integrating USGS digital elevation models with Sentinel-1 synthetic aperture radar, Sentinel-2 optical data, and PlanetScope optical data. Results are validated against USGS published ground truth measurements. A strong correlation (r² of 0.95) was observed with all three datasets. An uncertainty analysis was conducted, using the random fields method, in which synthetic spatially autocorrelated errors were added to the RS datasets. Radiometric uncertainties were studied through the addition of Gaussian noise as a percentage of reflectance values, and results showed effects of &lt; 2.5% on estimated water volume. Elevation data uncertainties (which were approximated to simulate uncertainties in globally available DEMs) showed higher effects, and errors in estimated storage volumes increased proportionally. A study of inundation (for a case study over Miami, FL) revealed that as the root mean square error of the DEMs increased from 2 m to 7 m, the risk of flooding (defined as water depth accumulation of greater than 90 cm) increased more than threefold. A utility function was developed to evaluate sensors based on their ability to estimate wetland water volumes. This function considers sensor characteristics like spatial, radiometric, and temporal resolution. Notably, the function estimates that a future optical system with 2x improved spatial and 4x improved temporal resolution (compared to Sentinel-2) can increase utility 7-fold.
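A minimal sketch of the storage computation at the core of the case study: integrate water depth over DEM cells lying below the remotely sensed water surface (grid size and levels below are illustrative):

    import numpy as np

    def storage_volume(dem, water_level, cell_area):
        depth = np.maximum(water_level - dem, 0.0)   # meters of water per cell
        return depth.sum() * cell_area               # cubic meters

    rng = np.random.default_rng(1)
    dem = 100.0 + 5.0 * rng.random((500, 500))       # toy 10 m grid (m a.s.l.)
    vol = storage_volume(dem, water_level=103.0, cell_area=100.0)
    # Repeating this for each satellite-derived water extent/level yields
    # the storage time series validated against USGS gauge data.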
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Uncanny Valley: An Empirical Study on Human Perceptions of AI-Generated Text and Images</title>
<link href="https://hdl.handle.net/1721.1/159096" rel="alternate"/>
<author>
<name>Kishnani, Deepali</name>
</author>
<id>https://hdl.handle.net/1721.1/159096</id>
<updated>2025-10-12T03:16:34Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">The Uncanny Valley: An Empirical Study on Human Perceptions of AI-Generated Text and Images
Kishnani, Deepali
This thesis explores how the uncanny valley phenomenon—historically tied to near-human robots—applies to text-based AI interactions and AI-generated images. While the concept has been predominantly studied in the context of robotics, the advent of generative AI reveals that text and visuals that are 'almost, but not quite' human can also provoke unease. &#13;
&#13;
Two experiments structure the study. The first examines GPT-4 Turbo (GPT-4o) text conversations. Sixty participants engaged with one of three “chatbots”: an “Uncanny-Valley Bot” (prompt-engineered to fall in the uncanny valley), a “Human-Like Bot” (prompt-engineered to converse like humans), or a human control. Godspeed Questionnaire results indicate that the uncanny valley effect surfaces in text-only form: participants consistently rated the “Uncanny-Valley Bot” lowest in anthropomorphism, animacy, likeability, and perceived intelligence. Furthermore, the experiment revealed that the distinction between GPT and humans is becoming increasingly blurred, with 60% of participants mistaking a human for GPT and 40% mistaking GPT for a human. Lastly, results highlighted a strong user preference for naturalness, human imperfections, and vulnerability. While human flaws enhance relatability, deviations that disrupt perceived humanity trigger the uncanny valley.&#13;
&#13;
The second experiment investigates AI-generated images produced by Stable Diffusion XL at varying degrees of realism. Fifty-six participants ranked each image’s “strangeness,” revealing that highly realistic or clearly stylized outputs raise fewer concerns. By contrast, images that inhabit the uncanny valley elicited discomfort. To quantify these findings, recognized metrics like the Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) were used to compare real and AI-generated images. Both metrics strongly correlated with human perceptions, suggesting that such distance metrics can serve as proxies for perceived realism (a sketch of FID appears at the end of this summary). The study also shows that image generation models can detect visual features associated with the uncanny valley. However, performance drops when the prompt calls for subtle, “mid-range” realism, indicating the model’s difficulty in maintaining comfort and believability at intermediate levels.&#13;
&#13;
Collectively, the two experiments confirm that uncanny valley responses are not confined to physical robots but persist in text-based dialogue and AI-synthesized images. Yet challenges remain. Short interaction windows, small participant samples, and reliance on selected AI models call for studies on the generalizability of these findings. Future work should adopt longitudinal designs, larger samples, and multiple AI systems. Addressing the uncanny valley in both textual and visual content is essential for advancing user trust and comfort in AI.
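A minimal sketch of the Fréchet Inception Distance used in the second experiment, computed over feature vectors (in practice, Inception-v3 pool features of the real and generated images):

    import numpy as np
    from scipy.linalg import sqrtm

    def fid(feats_real, feats_fake):
        mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
        c1 = np.cov(feats_real, rowvar=False)
        c2 = np.cov(feats_fake, rowvar=False)
        covmean = sqrtm(c1 @ c2).real            # matrix square root
        diff = mu1 - mu2
        return diff @ diff + np.trace(c1 + c2 - 2.0 * covmean)

Lower FID means the generated set is statistically closer to the real set; the study correlates such distances with participants' rated strangeness.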
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Battery Pack Design and Transient Performance Modeling&#13;
for High-Power Legged Robots</title>
<link href="https://hdl.handle.net/1721.1/159094" rel="alternate"/>
<author>
<name>Evagora, Christopher K.</name>
</author>
<id>https://hdl.handle.net/1721.1/159094</id>
<updated>2025-10-12T03:16:09Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Battery Pack Design and Transient Performance Modeling&#13;
for High-Power Legged Robots
Evagora, Christopher K.
Legged robotics has recently shifted toward advanced optimization-based control methods, such as Model Predictive Control (MPC), to generate agile and energy-efficient locomotion. By casting the control problem as an optimization task, robotic systems can account for complex robot dynamics and operational constraints, including joint limits and actuator capabilities. However, high-performance maneuvers also demand rigorous consideration of onboard battery constraints. This work presents an empirically derived lithium-ion battery model that captures transient voltage sag and time-dependent internal battery state, enabling more accurate prediction of feasible power delivery. Additionally, a custom high-power battery pack was designed to meet the power demands of the MIT Humanoid, emphasizing power density, safety, and maintainability. Although the work presented in this thesis does not integrate the battery model into a trajectory optimization framework, it establishes the foundation for future research that aims to couple battery and robot dynamics in robot control. Ultimately, this approach will facilitate safer and more capable legged robots by ensuring that planned trajectories respect both physical and electrochemical constraints.
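A minimal sketch of a first-order equivalent-circuit model of the kind used for transient voltage sag (the parameters are illustrative, not the pack values identified in the thesis):

    import numpy as np

    def simulate_pack(current, dt, ocv=48.0, r0=0.05, r1=0.03, c1=400.0):
        v_rc, volts = 0.0, []
        for i in current:                             # current draw in amps
            v_rc += dt * (i / c1 - v_rc / (r1 * c1))  # RC branch dynamics
            volts.append(ocv - i * r0 - v_rc)         # ohmic sag + slow sag
        return np.array(volts)

    t = np.arange(0.0, 2.0, 1e-3)
    demand = np.where(t > 0.5, 40.0, 5.0)             # a step in power demand
    v = simulate_pack(demand, dt=1e-3)

A controller with access to such a model can check whether a planned maneuver's current profile keeps the terminal voltage above its safe limit.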
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multimodal Graphical User Interface for 3D Model&#13;
Fabrication Through Generative AI</title>
<link href="https://hdl.handle.net/1721.1/159092" rel="alternate"/>
<author>
<name>Báez Alicea, Isabel</name>
</author>
<id>https://hdl.handle.net/1721.1/159092</id>
<updated>2025-05-20T12:37:47Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Multimodal Graphical User Interface for 3D Model&#13;
Fabrication Through Generative AI
Báez Alicea, Isabel
In recent years, three-dimensional model generation and manipulation through generative AI has seen significant developments. Current projects enable the generation of three-dimensional assets from natural language prompts and input images, as well as functionality-aware model manipulation through mesh segmentation and categorization. However, all these workflows lack a coherent, unified platform that caters to users’ needs and each method’s technologies. Programs that rely on terminal-based commands lack the graphics needed for model interactions, and plugin extensions for 3D modeling applications are unintuitive and hard to extend with new functionalities. Additionally, both approaches require users to have prior computer engineering and/or 3D graphics knowledge. For this thesis, I propose the creation of a web-based, multimodal graphical user interface that consolidates all these different technologies in a single platform. By supporting model stylization and model generation (both from text prompts and input images), users can utilize combined workflows and expand the range of output possibilities for 3D asset creation. Other features in our interface include model uploading, saving, and downloading to enable a continuous stream of work on a single 3D asset. Apart from all this, we expand the current capabilities of existing image-to-3D generation programs by enabling users to combine up to six images together and create a merged 3D object. Each of these images corresponds to a view angle from which the outputted mesh will be built.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Convergence of the Arnoldi Iteration for Estimating Extreme Eigenvalues</title>
<link href="https://hdl.handle.net/1721.1/159091" rel="alternate"/>
<author>
<name>Chen, Cecilia</name>
</author>
<id>https://hdl.handle.net/1721.1/159091</id>
<updated>2025-05-20T12:37:46Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Convergence of the Arnoldi Iteration for Estimating Extreme Eigenvalues
Chen, Cecilia
Krylov subspace methods, like the Arnoldi iteration, are a powerful tool for efficiently solving high-dimensional linear algebra problems. In this work, we analyze the convergence of Krylov methods for estimating the numerical range of a matrix. Prior bounds on approximation error often depend on eigenvalue gaps of the matrix, which lead to weaker bounds than observed in practice, specifically in applications where these gaps are small. Instead, we extend a line of work proving gap-independent bounds for the Lanczos method, which depend only on the matrix dimensions and number of iterations, to the more general Arnoldi case.
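A minimal numpy sketch of the Arnoldi iteration under study: build an orthonormal Krylov basis and take eigenvalues (Ritz values) of the small Hessenberg matrix as estimates of extreme eigenvalues:

    import numpy as np

    def arnoldi(A, b, k):
        n = len(b)
        Q = np.zeros((n, k + 1)); H = np.zeros((k + 1, k))
        Q[:, 0] = b / np.linalg.norm(b)
        for j in range(k):
            v = A @ Q[:, j]
            for i in range(j + 1):             # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ v
                v = v - H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(v)
            if H[j + 1, j] == 0.0:             # invariant subspace reached
                return Q[:, :j + 1], H[:j + 1, :j + 1]
            Q[:, j + 1] = v / H[j + 1, j]
        return Q, H[:k, :k]

    rng = np.random.default_rng(0)
    A = rng.standard_normal((300, 300))
    Q, Hk = arnoldi(A, rng.standard_normal(300), k=40)
    ritz = np.linalg.eigvals(Hk)   # outermost Ritz values approximate the
                                   # extreme points of the numerical range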
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GIM: Guidance as Initialization Method</title>
<link href="https://hdl.handle.net/1721.1/159090" rel="alternate"/>
<author>
<name>Duitama Cortes, Juan Sebastian</name>
</author>
<id>https://hdl.handle.net/1721.1/159090</id>
<updated>2025-10-04T03:17:47Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">GIM: Guidance as Initialization Method
Duitama Cortes, Juan Sebastian
This work makes two contributions: the evaluation of early stop guidance for deep Fully Connected Networks (FCNs) and the introduction of guidance as an initialization method (GIM). Network initialization has been a meaningful and challenging topic in the field of machine learning (ML) for a long time. Many initialization methods exist, ranging from data-independent to data-dependent approaches. Initializations allow for a better understanding of model behavior and improvements in model performance. The novel guidance tool enabled us to propose GIM, a new technique that initializes a model by leveraging representational similarity with respect to models of different architectures. A model with an architecture that performs poorly in a specific task can be initialized with guidance from a model with an architecture that performs well in the respective task. We focus on the case of FCNs in the task of image classification and provide experimental results to validate our approach.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulating Weather For A Mixed Reality Platform</title>
<link href="https://hdl.handle.net/1721.1/159089" rel="alternate"/>
<author>
<name>Ni, Hao</name>
</author>
<id>https://hdl.handle.net/1721.1/159089</id>
<updated>2025-10-04T03:17:07Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Simulating Weather For A Mixed Reality Platform
Ni, Hao
Complex systems are inherently difficult to teach in a traditional classroom setting. The We’re In This Together (WIT) project aims to provide a different teaching strategy by using AR/VR headsets to situate the students directly inside the system. WIT’s first game attempts to tackle common weather concepts including precipitation and fronts; however, the most recent version fails to demonstrate and model the concepts in an accurate and comprehensible way. This project focuses on developing a brand-new simulation layer for the game that better captures the causes behind common weather phenomena. The new simulation uses a particle-based approach to model the movement of air in the atmosphere and creates a more thorough and interactive experience to help students explore the various aspects of weather.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Realistic Tactile Stylization for Digital Fabrication using Enhanced UV Unwrapping Method</title>
<link href="https://hdl.handle.net/1721.1/159088" rel="alternate"/>
<author>
<name>Wong, Zoe</name>
</author>
<id>https://hdl.handle.net/1721.1/159088</id>
<updated>2025-10-04T03:16:53Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Realistic Tactile Stylization for Digital Fabrication using Enhanced UV Unwrapping Method
Wong, Zoe
While recent advances in Generative AI enable visual stylization of 3D models using image prompts, they typically neglect tactile properties. TactStyle addresses this limitation by enabling creators to enhance 3D models with both visual and tactile properties derived from texture images. Using a fine-tuned image-generation model, TactStyle generates highly accurate heightfields that faithfully replicate the tactile properties of input visual textures and applies them to 3D models. However, applying textures to 3D models presents challenges, such as ensuring even texture resolution, avoiding texture warping, and minimizing visible seams. TactStyle’s current implementation often struggles with significant texture stretching and distortion caused by poor UV mapping, compromising the accurate heightfields and diminishing the tactile fidelity of printed models. Our research systematically evaluates various UV unwrapping methods, including alternative UV projections and an optimization-based neural UV mapping, to improve the realism and accuracy of texture application on 3D models in digital fabrication. Building on these findings, we will release a Blender plugin that integrates the optimal UV unwrapping methods with TactStyle, enabling creators to easily customize their 3D models with accurate tactile properties using only reference texture images. This work enhances the practicality and accessibility of tactile 3D model customization, bridging the gap between visual and tactile design elements.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identifying the Role of Transcription Factor RFX3 in 9p Deletion Syndrome</title>
<link href="https://hdl.handle.net/1721.1/159087" rel="alternate"/>
<author>
<name>Edwards, Lilly</name>
</author>
<id>https://hdl.handle.net/1721.1/159087</id>
<updated>2025-10-04T03:16:39Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Identifying the Role of Transcription Factor RFX3 in 9PDeletion Syndrome
Edwards, Lilly
9p deletion (9p-) syndrome is primarily characterized by intellectual disability, developmental delays, and autism. This project investigated how much of the neuronal phenotypes of 9p- syndrome could be attributed to RFX3, a transcription factor and autism risk gene. Bulk RNA-seq data of iPSC-derived neurons from patients with 9p- syndrome and CRISPR-engineered cell lines was analyzed using Principal Component Analysis, Differential Gene Expression analysis, and Functional Enrichment analysis. The findings indicate that RFX3 plays a significant role but is not the sole driver of the neuronal phenotypes. SMARCA2, a gene linked to intellectual disability and part of the SWI/SNF complex, was identified as a direct target of RFX3 in the commonly deleted region of chromosome 9p. Notably, the combined deletion of RFX3 and SMARCA2 led to greater dysregulation of SMARCA2 expression and SWI/SNF complex components than the deletion of either gene alone. These findings highlight the potential synergistic effects of RFX3 and SMARCA2 in 9p- syndrome and suggest their combined disruption may underlie the neuronal phenotypes observed.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Model Editing for Unlearning in Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/159086" rel="alternate"/>
<author>
<name>Hossain, Shariqah</name>
</author>
<id>https://hdl.handle.net/1721.1/159086</id>
<updated>2025-10-04T03:16:30Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Investigating Model Editing for Unlearning in Large Language Models
Hossain, Shariqah
Data regulations on the Right to be Forgotten, such as that in the General Data Protection Regulation (GDPR) of the European Union, protect the right of users to remove private information from organizations. With the increasing usage and influence of large language models (LLMs) that are trained on personal data, the question of how to implement the removal of information within these models arises. In addition, LLMs are trained on a large corpus of data that is usually scraped from the Web. A current challenge with ensuring reliable and safe outputs from LLMs is false, toxic, harmful, or biased information from Web data that is captured in the knowledge of the model. Machine unlearning aims to remove unwanted information from a model, but many methods are inefficient for models with large numbers of parameters or fail to remove the entire scope of information without harming performance on the knowledge that is to be retained. Model editing algorithms solve a similar problem of changing information in LLMs, but they focus on redirecting inputs to a new target rather than removing that information altogether. Despite the parallels between model editing and unlearning, there has yet to be a thorough investigation of the potential of model editing approaches within this setting. In this work, we explore the ROME, IKE, and WISE editing algorithms and design new editing targets for an unlearning setting. For evaluating the potential of the model editing algorithms, we focus on unlearning fictitious information using the Task of Fictitious Unlearning (TOFU) benchmark. Through this investigation, we show that model editing approaches can exceed the performance of current unlearning methods at removing information, depending on the setting. They share traditional unlearning's limitation of being unable to encapsulate the scope of what is to be unlearned without damage to overall model performance. We hope to leverage this information to improve methods for unlearning model knowledge and therefore improve the reliability of LLMs.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward An Explainable Electric Power Grid Operation Assistant Using Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/159085" rel="alternate"/>
<author>
<name>Ravichandran, Anish</name>
</author>
<id>https://hdl.handle.net/1721.1/159085</id>
<updated>2025-10-04T03:16:23Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Toward An Explainable Electric Power Grid Operation Assistant Using Large Language Models
Ravichandran, Anish
This thesis explores potential applications of Large Language Models (LLMs) for assisting the analyses and decision-making of operators of complex electric power grids. The power grid is a critical piece of infrastructure currently challenged by increased electrification, integration of renewable energy sources, and distributed energy resources (DERs). Human operators struggle to process the massive amounts of data produced by modern smart grids and need innovative solutions to handle the increased complexity of operational decisions. This thesis investigates the potential role of LLMs in grid operation tasks, focusing on interpretability and generalizability while exploring how LLMs can assist operators by providing actionable insights and recommendations. Multiple versions of LLM agents were developed, including naive and tool-assisted designs, and were evaluated on the Learn to Run a Power Network (L2RPN) benchmark for steady-state and cascading failure scenarios. While the LLM agents performed better in scenarios requiring exploratory decision-making, they struggled in steady-state operation and were constrained by their integration with tools and the testing environment. This work was limited by compute constraints, which affected the choice of model and the length of evaluation scenarios, and future work is needed toward seamless interaction of LLMs and power systems simulators. However, LLMs have the potential to transform future grid operation, paving the way for a more resilient and sustainable energy sector in the 21st century.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CIRCUIT: A Benchmark for Circuit Interpretation and Reasoning Capabilities of LLMs</title>
<link href="https://hdl.handle.net/1721.1/159084" rel="alternate"/>
<author>
<name>Skelić, Lejla</name>
</author>
<id>https://hdl.handle.net/1721.1/159084</id>
<updated>2025-10-04T03:16:16Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">CIRCUIT: A Benchmark for Circuit Interpretation and Reasoning Capabilities of LLMs
Skelić, Lejla
The role of Large Language Models (LLMs) has not been extensively explored in analog circuit design, which could benefit from a reasoning-based approach that transcends traditional optimization techniques. In particular, despite their growing relevance, there are no benchmarks to assess LLMs’ reasoning capability about circuits. Therefore, we created the CIRCUIT dataset consisting of 510 question-answer pairs spanning various levels of analog-circuit-related subjects. The best-performing model on our dataset, GPT-4o, achieves 48.04% accuracy when evaluated on the final numerical answer. To evaluate the robustness of LLMs on our dataset, we introduced a unique dataset design and evaluation metric that enable unit-test-like evaluation by grouping questions into unit tests. In this case, GPT-4o can only pass 27.45% of the unit tests, highlighting that the most advanced LLMs still struggle with understanding circuits, which requires multi-level reasoning, particularly when involving circuit topologies. This circuit-specific benchmark introduces a scalable and reliable automatic evaluation method, transferable to other reasoning domains, and highlights LLMs' limitations, offering valuable insights for advancing their application in analog integrated circuit design.
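A minimal sketch of the unit-test-style metric described above: a unit test passes only if every question grouped under it is answered correctly:

    from collections import defaultdict

    def unit_test_pass_rate(records):
        groups = defaultdict(list)          # records: (group_id, is_correct)
        for gid, ok in records:
            groups[gid].append(ok)
        passed = sum(1 for oks in groups.values() if all(oks))
        return passed / len(groups)

    rate = unit_test_pass_rate([(0, True), (0, True), (1, True), (1, False)])

This is why per-question accuracy (48.04%) can sit well above the unit-test pass rate (27.45%): one wrong variant fails the whole group.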
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ALFA-Chains: An Artificial Intelligence Approach to Exploit Chain Discovery in Networks</title>
<link href="https://hdl.handle.net/1721.1/159083" rel="alternate"/>
<author>
<name>Tulla Lizardi, Miguel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/159083</id>
<updated>2025-10-04T03:15:56Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">ALFA-Chains: An Artificial Intelligence Approach to Exploit Chain Discovery in Networks
Tulla Lizardi, Miguel A.
Exploit chains play a crucial role in advanced persistent threats (APTs) and other malicious cyber campaigns. Sophisticated attackers can navigate across a network, escalate their privileges, and compromise valuable targets by executing the right exploits in the right order. However, finding these exploit chains is a challenging task requiring a broad knowledge of the vulnerabilities present in computer systems and the exploits that take advantage of them. Networks can be complex, with many hosts and intricate software stacks. Moreover, the range of known exploits and vulnerabilities is constantly growing, complicating the process of determining how they can be linked. This thesis introduces a solution, ALFA-Chains, that automates the discovery of exploit chains by leveraging classical AI planning, Large Language Models (LLMs), and existing exploit/vulnerability databases. ALFA-Chains describes networks and exploits using the Planning Domain Description Language (PDDL), a formal language to represent planning problems. This allows us to use optimized off-the-shelf planners that have been developed by the AI planning community over many years. Our system takes natural language descriptions of exploits and classifies them into categories based on their preconditions and effects. From this intermediary representation, we can programmatically generate PDDL that captures the requirements needed to run the exploit and the access gained by the attacker. Due to this automated approach, ALFA-Chains is able to consider a vast set of exploits when determining if a network is susceptible to exploit chaining. We show how ALFA-Chains can process 1,880 Metasploit exploits and their corresponding 2,002 CVEs to detect exploit chains in a variety of realistic network configurations. We proceed to discuss potential applications of ALFA-Chains, including automated penetration testing and vulnerability prioritization.
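A minimal sketch of the PDDL-generation step, with illustrative predicate names rather than the thesis's actual intermediate representation:

    def exploit_to_pddl(name, precond_access, effect_access):
        # Render one classified exploit as a PDDL action string.
        return (
            "(:action {n}\n"
            "  :parameters (?src ?dst - host)\n"
            "  :precondition (and ({pre} ?src) (reachable ?src ?dst)\n"
            "                     (vulnerable-{n} ?dst))\n"
            "  :effect ({eff} ?dst))"
        ).format(n=name, pre=precond_access, eff=effect_access)

    print(exploit_to_pddl("cve-2017-0144-smb", "user-access", "root-access"))

An off-the-shelf planner then searches for an action sequence, i.e., an exploit chain, from the attacker's initial foothold to the target host.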
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Inductive Biases of Conditional Diffusion Models</title>
<link href="https://hdl.handle.net/1721.1/159081" rel="alternate"/>
<author>
<name>Yu, Christina</name>
</author>
<id>https://hdl.handle.net/1721.1/159081</id>
<updated>2025-09-03T03:35:59Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">On the Inductive Biases of Conditional Diffusion Models
Yu, Christina
Diffusion models have achieved remarkable progress in recent years across various domains and applications, but how diffusion models generalize is still not well understood. While prior work predominantly focuses on unconditional diffusion models, in this thesis we focus on understanding generalization for conditional diffusion models, which is especially relevant for modern text- or observation-conditioned applications. In particular, we are interested in the inductive biases of conditional diffusion models which predispose them to certain forms of interpolation in regions outside the support of the training data. We observe that neural networks are capable of learning qualitatively different forms of interpolation, which may be influenced by the architecture and capacity of the network and other aspects of neural network training. We develop a potential framework to model the interpolation behavior of neural networks via nonparametric estimation, which happens to have the property of being schedule consistent, or truly denoising at every time step. We find that, assuming a neural network with sufficient capacity, conditional diffusion models are biased towards smoothing, which can lead to non-schedule-consistent behavior away from the training data and has a number of interesting consequences.
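A minimal sketch of the kind of nonparametric estimator mentioned above: under Gaussian noising, the exact posterior-mean denoiser for the empirical training distribution is a kernel-weighted average of training points, and it is schedule consistent by construction:

    import numpy as np

    def kernel_denoiser(x_t, sigma, train_x):
        # E[x_0 | x_t] when x_t = x_0 + sigma * noise, x_0 from training set
        d2 = ((train_x - x_t) ** 2).sum(axis=1)     # squared distances
        logw = -d2 / (2.0 * sigma ** 2)
        w = np.exp(logw - logw.max())
        return (w / w.sum()) @ train_x              # softmax-weighted average

    rng = np.random.default_rng(0)
    train = rng.standard_normal((100, 2))
    x0_hat = kernel_denoiser(train[0] + 0.5 * rng.standard_normal(2), 0.5, train)

A trained network that smooths more aggressively than this estimator interpolates between training points, which is the inductive bias examined here.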
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>All Pass Readout With Ring Resonators for Qubit Measurement</title>
<link href="https://hdl.handle.net/1721.1/159080" rel="alternate"/>
<author>
<name>Zang, Alicia</name>
</author>
<id>https://hdl.handle.net/1721.1/159080</id>
<updated>2025-09-03T03:35:57Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">All Pass Readout With Ring Resonators for Qubit Measurement
Zang, Alicia
Quantum computers may advance computing by efficiently solving certain classically hard problems, such as integer factoring and the simulation of quantum systems. Superconducting qubits, configurable artificial atoms composed of circuit elements, are a leading platform for creating quantum computers. Many schemes for superconducting qubit readout include a weakly coupled port as a capacitor in the feedline, which allows for directionality in the readout signal. However, this impedance mismatch creates problems with resonator linewidth variation, standing waves, and voltage nodes in the feedline, leading to challenges in scaling to larger frequency-multiplexed systems. This thesis proposes an all-pass readout scheme that utilizes ring resonators that do not require a weakly coupled port, allowing for more modular qubit readout architectures.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Verification of Go Channels</title>
<link href="https://hdl.handle.net/1721.1/159079" rel="alternate"/>
<author>
<name>Zhang, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/159079</id>
<updated>2025-09-03T03:35:46Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Verification of Go Channels
Zhang, Jessica
Goose is a tool for translating a subset of the Go programming language into Perennial/Iris, which is an extension of Coq. However, Goose did not support channels, which are an important synchronization tool that Go is well known for.&#13;
&#13;
This thesis presents an extension to Goose to support channels, including a model to represent Go channels and operations in GooseLang, the language defined in Perennial/Iris that Goose translates into, an extension to the Goose translator to support channels, and a library of separation logic specifications that define the expected behavior of channel operations on open channels. Finally, this thesis evaluates how effective this model and library are for verifying Go code containing channels, and discusses some limitations and potential future work.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laboratory testing and design of a mill for the treatment of a gold ore from Porcupine, Ontario</title>
<link href="https://hdl.handle.net/1721.1/159006" rel="alternate"/>
<author>
<name>Loo, Pang Chieh.</name>
</author>
<id>https://hdl.handle.net/1721.1/159006</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1917-01-01T00:00:00Z</published>
<summary type="text">Laboratory testing and design of a mill for the treatment of a gold ore from Porcupine, Ontario
Loo, Pang Chieh.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mining Engineering and Metallurgy, 1917
</summary>
<dc:date>1917-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of industry financing of a new jet transport for U.S. domestic airline service</title>
<link href="https://hdl.handle.net/1721.1/159001" rel="alternate"/>
<author>
<name>Evani, Sunder Rayma Murthy.</name>
</author>
<id>https://hdl.handle.net/1721.1/159001</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">A study of industry financing of a new jet transport for U.S. domestic airline service
Evani, Sunder Rayma Murthy.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Review of intervention programs for pre-schoolers in Venezuela.</title>
<link href="https://hdl.handle.net/1721.1/159000" rel="alternate"/>
<author>
<name>Eskenasy, Sandra Patricia.</name>
</author>
<id>https://hdl.handle.net/1721.1/159000</id>
<updated>2025-12-17T03:47:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Review of intervention programs for pre-schoolers in Venezuela.
Eskenasy, Sandra Patricia.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1978; Bibliography: leaf 147.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Multi-query Planning in Graphs of Convex Sets</title>
<link href="https://hdl.handle.net/1721.1/158967" rel="alternate"/>
<author>
<name>Morozov, Savva</name>
</author>
<id>https://hdl.handle.net/1721.1/158967</id>
<updated>2025-04-07T09:05:10Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Fast Multi-query Planning in Graphs of Convex Sets
Morozov, Savva
Planning in Graphs of Convex Sets (GCS) is a recently developed optimization framework that seamlessly integrates discrete and continuous decision making. It naturally models and effectively solves a wide range of challenging planning problems in robotics, including collision-free motion planning, skill chaining, and control of hybrid systems. In this thesis, we study the multi-query extension of planning through GCS, motivated by scenarios where robots must operate swiftly within static environments. Our objective is to precompute optimal plans between predefined sets of source and target conditions, in an effort to enable fast online planning and reduce GCS solve times. Our solution consists of two stages. Offline, we use semidefinite programming to compute a coarse lower bound on the problem’s cost-to-go function. Then, online, this lower bound is used to incrementally generate feasible plans by solving short-horizon convex programs. We demonstrate the effectiveness of our approach through a variety of experimental domains: collision-free motion planning for a warehouse robot arm, item sorting for a top-down suction gripper, and footstep planning for a bipedal walker. In particular, in a warehouse-like scenario involving a seven-joint robot arm, our method generates higher-quality paths up to 100 times faster than existing motion planners.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structuring Representation Geometry in Self-Supervised Learning</title>
<link href="https://hdl.handle.net/1721.1/158966" rel="alternate"/>
<author>
<name>Gupta, Sharut</name>
</author>
<id>https://hdl.handle.net/1721.1/158966</id>
<updated>2025-04-07T09:25:49Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Structuring Representation Geometry in Self-Supervised Learning
Gupta, Sharut
The central promise of deep learning is to learn a map &#119891; : &#119987; → ℝ_&#119889; that transforms objects &#119987;—represented in their raw perceptual forms, such as images or molecular strings—into a representation space ℝ_&#119889; where everything that is hard to do with raw perceptual data becomes easy. For instance, measuring the similarity between two objects &#119909;₁ and &#119909;₂ expressed as tensors of pixel intensities is non-trivial in their raw form, but becomes straightforward if &#119891; maps these objects to a space where simple Euclidean distances, ‖&#119891;(&#119909;₁) − &#119891;(&#119909;₂)‖₂, are meaningful measures of similarity. While this simple recipe has shown standout success in a range of tasks, certain applications require representations that encode richer structural relationships beyond pairwise similarity. For instance, tasks that encode relational information—such as “&#119883; is a parent of &#119884;” or “&#119860; is a treatment for &#119861;”—require embedding spaces that capture richer structural relationships. In this thesis, we explore what &#119891; should encode in order to be useful for a range of unknown downstream tasks, from the point of view of the geometric structure of representation space. We investigate this question in the context of self-supervised learning, a paradigm that extracts meaningful representations by leveraging the structure of the data itself without relying on explicit labels. Specifically, we propose adding additional geometric structure to the embedding space by enforcing transformations of input space to correspond to simple (i.e., linear) transformations in the embedding space. To this end, we introduce an equivariance objective and theoretically prove that its minimum forces transformations on input space to correspond to rotations on the spherical embedding space. Our proposed method significantly improves performance on downstream tasks, and ensures sensitivity in embedding space to important variations in data (e.g., color, rotation) that existing contrastive methods do not achieve.
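A minimal sketch of an equivariance objective of the kind described, assuming the linear map is a learned orthogonal (rotation-like) transform and the toy augmentation is a horizontal flip:

    import torch, torch.nn as nn, torch.nn.functional as F

    class EquivariantHead(nn.Module):
        def __init__(self, encoder, dim):
            super().__init__()
            self.encoder = encoder
            linear = nn.Linear(dim, dim, bias=False)
            self.rot = nn.utils.parametrizations.orthogonal(linear)

        def loss(self, x, x_aug):
            z = F.normalize(self.encoder(x), dim=-1)      # spherical embeddings
            z_aug = F.normalize(self.encoder(x_aug), dim=-1)
            return (z_aug - self.rot(z)).pow(2).sum(-1).mean()

    encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
    head = EquivariantHead(encoder, 128)
    x = torch.randn(8, 3, 32, 32)
    l = head.loss(x, torch.flip(x, dims=[-1]))            # toy augmentation

In practice a term like this is added alongside a standard contrastive loss, so that input transformations correspond to rotations of the spherical embedding space rather than being collapsed to invariance.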
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scalable and Modular Manufacturing of Insect-Scale Aerial Robots Towards Swarm Flight Demonstrations</title>
<link href="https://hdl.handle.net/1721.1/158965" rel="alternate"/>
<author>
<name>Hsiao, Yi-Hsuan</name>
</author>
<id>https://hdl.handle.net/1721.1/158965</id>
<updated>2025-04-08T04:50:00Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Scalable and Modular Manufacturing of Insect-Scale Aerial Robots Towards Swarm Flight Demonstrations
Hsiao, Yi-Hsuan
Insects demonstrate remarkable capabilities in navigating complex environments and executing tasks such as pollination and coordinated object transport. Inspired by these biological feats, insect-scale micro aerial vehicles (MAVs) have been developed with advanced flight functionalities, including collision resilience and aerial acrobatics. Despite these advancements, MAVs weighing less than a gram continue to face critical challenges in design, assembly, and repair. Additionally, limitations in sensing and control have prevented the realization of swarm-like behaviors, thereby constraining research on collective actions and potential applications such as distributed sensing. To overcome these obstacles, this work introduces a scalable and modular fabrication method for sub-gram MAVs. A parametric design algorithm automatically generates laser cutting templates from a minimal set of design parameters, while stereolithographic 3D printing is employed to fabricate static components such as airframes and connectors, significantly streamlining the production process. This modular approach improves assembly efficiency and repairability, reducing fabrication time by more than half. Using this methodology, two sub-gram MAVs successfully demonstrated controlled hovering and coordinated payload transport. These results represent a significant step toward enabling insect-inspired robotic swarms, providing a platform for future studies on collective flight behaviors and swarm robotics.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Goal Inference from Open-Ended Dialog</title>
<link href="https://hdl.handle.net/1721.1/158960" rel="alternate"/>
<author>
<name>Ma, Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/158960</id>
<updated>2025-04-07T09:13:54Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Goal Inference from Open-Ended Dialog
Ma, Rachel
Embodied AI Agents are quickly becoming important and common tools in society. These embodied agents should be able to learn about and accomplish a wide range of user goals and preferences efficiently and robustly. Large Language Models (LLMs) are often used because they allow for rich, open-ended dialog between the human and the agent to accomplish tasks according to human preferences.&#13;
&#13;
In this thesis, we argue that for embodied agents that deal with open-ended dialog during task assistance:&#13;
&#13;
1. AI Agents should extract goals from conversations in the form of Natural Language (NL), which better captures human preferences because it is intuitive for humans to communicate their task preferences to agents through natural language.&#13;
&#13;
2. AI Agents should quantify and maintain uncertainty about these goals to ensure that actions are taken only according to goals the agent is highly certain about.&#13;
&#13;
We present an online method for embodied agents to learn and accomplish diverse user goals. Offline methods like RLHF can represent various goals but require large datasets; our approach achieves similar flexibility with online efficiency. We extract natural language goal representations from conversations with Large Language Models (LLMs). We prompt an LLM to role-play as a human with different goals and use the corresponding likelihoods to run Bayesian inference over potential goals. As a result, our method can represent uncertainty over complex goals based on unrestricted dialog. We evaluate our method in a text-based grocery shopping domain and an AI2Thor robot simulation. We compare our method to ablation baselines that lack either explicit goal representation or probabilistic inference.
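A minimal sketch of the inference loop described above, with a hypothetical llm_loglik helper standing in for the role-played LLM likelihoods (an assumption about the setup, not the exact implementation):

```python
# Maintain a posterior over candidate natural-language goals, reweighting with
# LLM-assigned likelihoods of each new utterance. `llm_loglik` is hypothetical.
import math

def update_posterior(prior, utterance, llm_loglik):
    """prior: dict goal -> prob; llm_loglik(utterance, goal) -> log p(utterance | goal)."""
    log_post = {g: math.log(p) + llm_loglik(utterance, g) for g, p in prior.items()}
    m = max(log_post.values())                     # stabilize before exponentiating
    unnorm = {g: math.exp(lp - m) for g, lp in log_post.items()}
    z = sum(unnorm.values())
    return {g: w / z for g, w in unnorm.items()}
```

Each dialog turn reweights the candidate goals, so the agent's uncertainty is explicit in the posterior rather than implicit in a single extracted goal.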
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Subject Image Generation</title>
<link href="https://hdl.handle.net/1721.1/158959" rel="alternate"/>
<author>
<name>Yin, Tianwei</name>
</author>
<id>https://hdl.handle.net/1721.1/158959</id>
<updated>2025-04-08T04:17:08Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Multi-Subject Image Generation
Yin, Tianwei
Diffusion models excel at text-to-image generation, especially in subject-driven generation for personalized images. However, existing methods are inefficient due to subject-specific fine-tuning, which is computationally intensive and hampers efficient deployment. Moreover, existing methods struggle with multi-subject generation as they often blend identity among subjects. In this thesis, we present FastComposer, which enables efficient, personalized, multi-subject text-to-image generation without fine-tuning. FastComposer uses subject embeddings extracted by an image encoder to augment the generic text conditioning in diffusion models, enabling personalized image generation based on subject images and textual instructions with only forward passes. To address the identity blending problem in multi-subject generation, FastComposer proposes cross-attention localization supervision during training, enforcing the attention of reference subjects to localize to the correct regions in the target images. Naively conditioning on subject embeddings results in subject overfitting; FastComposer proposes delayed subject conditioning in the denoising step to maintain both identity and editability in subject-driven image generation. FastComposer generates images of multiple unseen individuals with different styles, actions, and contexts. It achieves a 300×–2500× speedup compared to fine-tuning-based methods and requires zero extra storage for new subjects. FastComposer paves the way for efficient, personalized, and high-quality multi-subject image creation.
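A hedged sketch of the delayed-conditioning idea (the switch-over fraction and all names are illustrative assumptions):

```python
# Early denoising steps use generic text conditioning to lay out the scene;
# later steps switch to text conditioning augmented with subject embeddings
# to inject identity without destroying editability.
def choose_conditioning(step, total_steps, text_cond, subject_augmented_cond, alpha=0.2):
    # For the first `alpha` fraction of steps, condition on text alone.
    if step >= alpha * total_steps:
        return subject_augmented_cond
    return text_cond
```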
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Genetic algorithm gradient ascent (GAGA) optimization&#13;
of compact symmetry-breaking photonic crystals</title>
<link href="https://hdl.handle.net/1721.1/158958" rel="alternate"/>
<author>
<name>Gold, Hannah T.</name>
</author>
<id>https://hdl.handle.net/1721.1/158958</id>
<updated>2025-04-08T04:37:02Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Genetic algorithm gradient ascent (GAGA) optimization&#13;
of compact symmetry-breaking photonic crystals
Gold, Hannah T.
Fundamental limits of thermal radiation are imposed by Kirchhoff’s law, which assumes the electromagnetic reciprocity of a material or material system. Thus, breaking reciprocity can enable breaking barriers in thermal efficiency engineering¹. This thesis presents 1D photonic crystals composed of Weyl/Dirac semimetal and dielectric layers, whose structures are optimized to maximize the nonreciprocity of infrared radiation absorptance/emittance in planar and compact designs. Two different mechanisms to enable nonreciprocal infrared absorbers/emitters are simulated and compared – the anomalous Hall effect in Weyl semimetals² and electric-current-induced Fizeau drag in either Dirac or Weyl semimetals³. To engineer an ultra-compact absorber structure that does not require gratings or prisms to couple light, a genetic algorithm (GA) was used to maximize nonreciprocity in the design globally, followed by numerical gradient ascent as a local optimization (together, GAGA) to further enhance the design. The first absorber design takes advantage of the intrinsic nonreciprocity of time-reversal symmetry (TRS) breaking Weyl semimetals due to their pseudomagnetic field in momentum space. The GAGA methodology is then applied to design and optimize a flat absorber using inversion-symmetry (IS) breaking Weyl/Dirac semimetals as active layers, in which tunable nonreciprocity is induced through an applied DC current bias. This momentum bias imparts plasmon Fizeau drag, the drag of an electrical current on propagating surface plasmon polaritons (SPPs). A recently developed semi-classical theory is used to model SPP transport along interfaces of 3D semimetals under Fizeau drag³. Lastly, in both cases the optimization algorithm accounts for both s- and p-polarized absorptance spectra to create a final design suitable for thermal applications, which maximizes the nonreciprocal absorptance of p-polarized light and simultaneously minimizes the parasitic, reciprocal absorptance of s-polarized light.
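The two-stage search can be sketched generically as follows (the objective, bounds, and hyperparameters are placeholder assumptions, not the thesis's implementation):

```python
# Illustrative sketch of the GAGA strategy: a genetic algorithm explores designs
# (e.g., layer thicknesses) globally, then numerical gradient ascent refines the
# best candidate locally.
import random

def gaga(objective, n_vars, pop=40, gens=50, steps=200, lr=1e-3, eps=1e-6):
    population = [[random.uniform(0.0, 1.0) for _ in range(n_vars)] for _ in range(pop)]
    for _ in range(gens):                        # global stage: select and mutate
        population.sort(key=objective, reverse=True)
        parents = population[: pop // 2]
        children = [[g + random.gauss(0.0, 0.05) for g in random.choice(parents)]
                    for _ in range(pop - len(parents))]
        population = parents + children
    best = max(population, key=objective)
    for _ in range(steps):                       # local stage: numerical gradient ascent
        grad = []
        for i in range(n_vars):
            bumped = list(best)
            bumped[i] += eps
            grad.append((objective(bumped) - objective(best)) / eps)
        best = [g + lr * d for g, d in zip(best, grad)]
    return best
```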
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tailored Mechanical Response of 3D Microgranular Crystals with Hierarchical Architecture</title>
<link href="https://hdl.handle.net/1721.1/158956" rel="alternate"/>
<author>
<name>Figueroa, Samuel D.</name>
</author>
<id>https://hdl.handle.net/1721.1/158956</id>
<updated>2025-04-08T04:36:27Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Tailored Mechanical Response of 3D Microgranular Crystals with Hierarchical Architecture
Figueroa, Samuel D.
Granular media exhibit extraordinary impact-mitigating properties due to their nonlinear grain-to-grain interactions, enabling efficient energy dissipation and wave perturbation under dynamic loading—behaviors unattainable in conventional monolithic materials. Recent efforts have sought to engineer granular systems with tunable mechanical responses, though few have begun to realize them as functional architected materials. Here, we introduce a two-level architected granular framework that programs spherical microgranular media across both grain-level (ellipsoidal microvoids) and bulk granular packing-level architectures, offering surprising control over static and dynamic properties. Using nanoindentation experiments, we reveal tunable quasi-static stiffness behavior, where hollow architected granular packings can exhibit superior mass-normalized energy dissipation compared to their fully dense counterparts. Finite element simulations uncover a structurally engineered Poisson effect, enabling nonlocal contact mechanisms that enhance load-bearing capacity across different packing structures. Custom direct impact experiments, planned as future work, offer a potential route to demonstrating the effectiveness of our multi-scale design in dynamically programming energy dissipation. Our findings demonstrate that a hierarchical granular crystal exhibits enhanced specific energy absorption at a fraction of the weight of its fully dense counterpart, along with unique nonlocal stress redistribution, surpassing classical granular mechanics through architectural design. This work establishes a path toward lightweight, tunable, and impact-resistant metamaterials, with broad applications in nonlinear waveguiding, energy dissipation, and protective systems.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Variational Lower Bound to Mitigate Batch Effect in&#13;
Molecular Representations</title>
<link href="https://hdl.handle.net/1721.1/158954" rel="alternate"/>
<author>
<name>Wang, Chenyu</name>
</author>
<id>https://hdl.handle.net/1721.1/158954</id>
<updated>2025-04-08T04:13:36Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">A Variational Lower Bound to Mitigate Batch Effect in&#13;
Molecular Representations
Wang, Chenyu
High-throughput drug screening – using cell imaging or gene expression measurements as readouts of drug effect – is a critical tool in biotechnology to assess and understand the relationship between the chemical structure and biological activity of a drug. Since large-scale screens have to be divided into multiple experiments, a key difficulty is dealing with batch effects, which can introduce systematic errors and non-biological associations in the data. We propose InfoCORE, an Information maximization approach for COnfounder REmoval, to effectively deal with batch effects and obtain refined molecular representations. InfoCORE establishes a variational lower bound on the conditional mutual information of the latent representations given a batch identifier. Experiments on drug screening data reveal InfoCORE’s superior performance in a multitude of tasks, including molecular property prediction and molecule-phenotype retrieval. Additionally, we show how InfoCORE offers a versatile framework for resolving general distribution shifts and issues of data fairness by minimizing correlation with spurious features or removing sensitive attributes.
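As a loose, illustrative analogue of the flavor of objective involved (not the paper's exact bound), a contrastive mutual-information estimator can be restricted to pairs sharing a batch identifier, so that batch-specific signal cannot be used to distinguish positives from negatives:

```python
# InfoNCE-style term computed only among samples with the same batch identifier.
import torch
import torch.nn.functional as F

def within_batch_infonce(z1, z2, batch_ids, tau=0.1):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.T / tau                                # pairwise similarities
    same = batch_ids.unsqueeze(0) == batch_ids.unsqueeze(1)
    sim = sim.masked_fill(~same, float("-inf"))          # keep same-batch pairs only
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(sim, labels)
```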
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Congestion Control for DNN training clusters</title>
<link href="https://hdl.handle.net/1721.1/158950" rel="alternate"/>
<author>
<name>Narang, Sanjoli</name>
</author>
<id>https://hdl.handle.net/1721.1/158950</id>
<updated>2025-04-07T08:56:14Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Congestion Control for DNN training clusters
Narang, Sanjoli
Modern DNN workloads generate network traffic with striking differences from conventional data-center traffic. DNN training jobs generate periodic traffic patterns in which all subsequent flows depend on the completion of the currently running flow. Although this periodic behavior calls for a new, non-conventional congestion control protocol for DNN training clusters, it also creates an unprecedented opportunity to approximate the optimal schedule for DNN jobs in a distributed manner without requiring priority queues, centralized information, or switch hardware support. Prior work on MLTCP proposed updates to existing congestion control algorithms to make them capable of minimizing network congestion when DNN jobs compete for the network. In this thesis, we propose several techniques to expand the scope of prior work to support DNN jobs with more complex communication patterns or parallelization strategies, and to further improve the performance speedup over TCP. With two straightforward ideas for updating congestion control parameters, we extend the performance benefits of MLTCP to a wider set of periodic DNN jobs. Augmenting existing congestion control algorithms with MLTCP provides an effective guiding mechanism for a random search to find the optimal interleaved schedule for competing DNN jobs. Our contributions boost this guided search to improve performance further. We provide detailed theoretical analysis and extensive flow-level simulations to take a deep dive into the convergence, performance speedup, and fairness of MLTCP with the proposed changes.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Input Adaptive Allocation of Language Model Computation</title>
<link href="https://hdl.handle.net/1721.1/158949" rel="alternate"/>
<author>
<name>Damani, Mehul</name>
</author>
<id>https://hdl.handle.net/1721.1/158949</id>
<updated>2025-04-07T09:13:10Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Input Adaptive Allocation of Language Model Computation
Damani, Mehul
Computationally intensive decoding procedures—including search, reranking, and self-critique—can improve the quality of language model (LM) outputs in problems spanning code generation, numerical reasoning, and dialog. Existing work typically applies the same decoding procedure for every input to an LM. But not all inputs require the same amount of computation to process. Can we allocate decoding computation adaptively, using more resources to answer questions whose answers will be harder to compute? We present an approach that predicts the distribution of rewards given an input and computation budget, then allocates additional computation to inputs for which it is predicted to be most useful. We apply this approach in two decoding procedures: first, an adaptive best-of-k procedure that dynamically selects the number of samples to generate as input to a reranker; second, a routing procedure that dynamically responds to a query using a decoding procedure that is expensive but accurate, or one that is cheaper but less capable. Across a suite of programming, mathematics, and dialog tasks, we show that accurate computation-allocation procedures can be learned, and reduce computation by up to 50% at no cost to response quality, or improve quality by up to 10% at a fixed computational budget.
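A minimal sketch of adaptive best-of-k allocation under an assumed per-query success-probability predictor (illustrative of the idea, not the exact procedure):

```python
# Given a predicted probability p that a single sample is correct for each query,
# allocate a global sample budget greedily by the marginal gain in
# P(at least one correct sample).
def allocate_samples(p_correct, budget):
    """p_correct: list of per-query success probabilities; returns samples per query."""
    k = [1] * len(p_correct)                 # every query gets at least one sample
    for _ in range(budget - len(p_correct)):
        # Marginal gain of one more sample for query i: p_i * (1 - p_i)**k_i.
        gains = [p * (1 - p) ** k[i] for i, p in enumerate(p_correct)]
        i = max(range(len(gains)), key=gains.__getitem__)
        k[i] += 1
    return k
```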
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Robotic Manipulation of Liquid Using a Digitally Fabricated Intelligent Wearable Device</title>
<link href="https://hdl.handle.net/1721.1/158941" rel="alternate"/>
<author>
<name>Lee, Young Joong</name>
</author>
<id>https://hdl.handle.net/1721.1/158941</id>
<updated>2025-04-07T08:57:31Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Enhancing Robotic Manipulation of Liquid Using a Digitally Fabricated Intelligent Wearable Device
Lee, Young Joong
Despite recent exponential advances in computer vision and reinforcement learning, it remains challenging for robots to interact with liquids due to visual obstructions, transparent liquids, and fine-grained splashes. Yet, a substantial opportunity exists for robotics to excel in liquid identification and manipulation, given its potential role in chemical handling in laboratories and various manufacturing sectors such as pharmaceuticals or beverages. Recent advancements in electronic wearables, designed to replicate or surpass the functions and attributes of human skin, and their convergence with machine learning have provided opportunities to enhance the capabilities of robotic systems. Here, we present a novel approach for liquid class identification and position estimation with the robotic wearable device that can ‘see through’ the container, leveraging electrical impedance sensing. We design and mount a digitally embroidered electrode array to a commercial robotic gripper. Coupled with a customized impedance sensing board, we collect data on liquid manipulation with a swept frequency sensing mode and a frequency-specific impedance measuring mode. Our developed learning-based models achieve an accuracy of 93.33% in classifying 9 different types of liquids (8 liquids + air) and 97.65% in estimating the liquid position in the cup without any vision system present. We investigate the effectiveness of our system with a series of ablation studies. These findings highlight our work as a promising solution for enhancing robotic manipulation in liquid-related tasks.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Optimization of Tunneling Nanoelectromechanical Switches</title>
<link href="https://hdl.handle.net/1721.1/158940" rel="alternate"/>
<author>
<name>Dang, Tong</name>
</author>
<id>https://hdl.handle.net/1721.1/158940</id>
<updated>2025-04-08T04:11:30Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design and Optimization of Tunneling Nanoelectromechanical Switches
Dang, Tong
As silicon complementary metal-oxide-semiconductor (CMOS) technology nears its scaling limits, nanoelectromechanical (NEM) switch relays have emerged as promising candidates for complementing CMOS technology due to their superior characteristics, including zero leakage, steep subthreshold swings, high on-off current ratios, and robustness in harsh environments. However, the practical integration of NEM switches still faces challenges such as high actuation voltages, stiction, and slower switching speeds compared to CMOS. One promising strategy to mitigate these issues is the integration of a self-assembled monolayer (SAM) to create tunneling NEM switches. Such switches could achieve nanometer-scale mechanical modulation of gaps between electrodes, showing the potential to overcome the limitations of a conventional NEM switch by exhibiting low actuation voltages, high switching speeds, and minimal stiction. Nevertheless, the tunneling NEM switches reported to date still show limited performance and require intricate fabrication processes. Additionally, the functional tunneling NEM switches demonstrated so far are limited to two-terminal architectures. This thesis explores innovative designs, fabrication techniques, and material choices to address these limitations and to develop tunneling NEM switches with enhanced performance and reliability for next-generation NEM logic applications. To this end, switches with various structures have been fabricated and investigated, and their respective characteristics are analyzed. In a three-terminal lateral structure fabricated using entirely conventional nanofabrication techniques, switching is demonstrated in both contact and tunneling modes. While operation in direct contact mode shows a high on-off ratio, the integration of the SAM leads to a significantly reduced actuation voltage of 2 V and lower hysteresis. Further, two-terminal vertical structured devices are studied in tunneling mode, and they consistently demonstrate operation cycles exceeding 100, with a maximum of over 7000, which attests to the reliability prospects of the SAM. The trends in IV characteristics indicate that the SAM might have experienced physical deformation due to compression, highlighting a potential area for future research in the molecular engineering of the self-assembled monolayer.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Near-Optimal Learning and Planning in Separated Latent MDPs</title>
<link href="https://hdl.handle.net/1721.1/158934" rel="alternate"/>
<author>
<name>Chen, Fan</name>
</author>
<id>https://hdl.handle.net/1721.1/158934</id>
<updated>2025-04-08T04:08:56Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Near-Optimal Learning and Planning in Separated Latent MDPs
Chen, Fan
We study computational and statistical aspects of learning Latent Markov Decision Processes (LMDPs). In this model, the learner interacts with an MDP drawn at the beginning of each epoch from an unknown mixture of MDPs. To sidestep known impossibility results, we consider several notions of δ-separation of the constituent MDPs. The main thrust of this paper is in establishing a nearly-sharp statistical threshold for the horizon length necessary for efficient learning. On the computational side, we show that under a weaker assumption of separability under the optimal policy, there is a quasi-polynomial algorithm with time complexity scaling in terms of the statistical threshold. We further show a near-matching time complexity lower bound under the exponential time hypothesis.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>VoxelPrompt: A Vision-Language Agent for Grounded Medical Image Analysis</title>
<link href="https://hdl.handle.net/1721.1/158933" rel="alternate"/>
<author>
<name>Hoopes, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/158933</id>
<updated>2025-04-07T08:52:58Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">VoxelPrompt: A Vision-Language Agent for Grounded Medical Image Analysis
Hoopes, Andrew
We present VoxelPrompt, an agent-driven vision-language framework that tackles diverse radiological tasks through joint modeling of natural language, image volumes, and analytical metrics. VoxelPrompt is multi-modal and versatile, leveraging the flexibility of language interaction while providing quantitatively-grounded image analysis. Given a variable number of 3D medical volumes, such as MRI and CT scans, VoxelPrompt employs a language agent that iteratively predicts executable instructions to solve a task specified by a natural language input prompt. These instructions communicate with a vision network to encode image features and generate volumetric outputs (e.g., segmentations). VoxelPrompt interprets the results of intermediate instructions and plans further actions to compute discrete measures (e.g., tumor growth across a series of scans) and present relevant outputs to the user. We evaluate this framework on diverse neuroimaging tasks and show that the single VoxelPrompt model can delineate hundreds of anatomical and pathological features, measure many complex morphological properties, and perform open-language analysis of lesion characteristics. VoxelPrompt carries out these objectives with accuracy similar to that of fine-tuned, single-task models for segmentation and question-answering, while facilitating a large range of tasks.
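The agent loop can be caricatured as follows (all helper names are hypothetical; a loose sketch of the described iteration, not the system's actual API):

```python
# The language model proposes executable instructions, a vision network executes
# them on the image volumes, and intermediate results feed back into the context
# until a final answer is produced.
def voxelprompt_loop(prompt, volumes, language_model, execute, max_steps=16):
    context = {"prompt": prompt, "results": []}
    for _ in range(max_steps):
        instruction = language_model(context)      # next executable instruction
        if instruction == "DONE":
            break
        context["results"].append(execute(instruction, volumes))
    return context["results"]
```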
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sub-Bottom Profiling Using an Autonomous Underwater Vehicle Equipped With a Sound Source and Towed Hydrophone Array</title>
<link href="https://hdl.handle.net/1721.1/158932" rel="alternate"/>
<author>
<name>Pfenninger, Paige</name>
</author>
<id>https://hdl.handle.net/1721.1/158932</id>
<updated>2025-04-07T08:41:59Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Sub-Bottom Profiling Using an Autonomous Underwater Vehicle Equipped With a Sound Source and Towed Hydrophone Array
Pfenninger, Paige
Sub-bottom profiling using an autonomous underwater vehicle equipped with a source and a towed array is an excellent method to finely survey large areas of the ocean bottom with minimal interference from the water column. This approach has the benefit of being able to determine the range dependence of the sub-bottom on a meter-by-meter scale rather than assuming constant sub-bottom properties over a large range. This thesis conducts theoretical and experimental studies to investigate the feasibility of using the arrival times of acoustic signals from an autonomous underwater vehicle source to a short, 16-element towed hydrophone array to determine the sound speed and layer thickness of the seabed through Bayesian geoacoustic inversion. This method provides range-dependent geoacoustic parameters with a resolution on the order of 10 meters. Numerical studies indicate that, for timing data with low variance, arrival times can be used to accurately estimate seabed properties. However, the performance of the Bayesian inversion model deteriorates as the variance of the timing data increases. Experimental data were collected during the Seabed Characterization Experiment at the New England Mud Patch and the New England Shelf Break. This thesis attempts to improve the arrival times through the use of sub-array focusing but concludes that this method is not feasible due to the experimental data exhibiting a high level of variance in the sub-bottom timing returns, likely due to the presence of scatterers in the sediment layer. Therefore, the mean and variance of the direct path, bottom, and sub-bottom timing returns were calculated using Gaussian process regression. Furthermore, the results show that layer thickness and sound speeds are highly coupled, making it challenging to uniquely determine seabed properties.
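A brief sketch of the Gaussian process smoothing step, using scikit-learn with an assumed RBF-plus-noise kernel and illustrative variable names:

```python
# Smooth noisy sub-bottom arrival-time picks along track with GP regression,
# recovering a mean and standard deviation per ping.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def smooth_arrivals(along_track_m, arrival_s):
    X = np.asarray(along_track_m).reshape(-1, 1)
    kernel = RBF(length_scale=50.0) + WhiteKernel(noise_level=1e-6)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, arrival_s)
    mean, std = gp.predict(X, return_std=True)
    return mean, std
```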
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design-technology Co-optimization for Sub-2 nm Technology Node Based on 2D Materials</title>
<link href="https://hdl.handle.net/1721.1/158931" rel="alternate"/>
<author>
<name>Yao, Aijia</name>
</author>
<id>https://hdl.handle.net/1721.1/158931</id>
<updated>2025-04-07T09:20:09Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design-technology Co-optimization for Sub-2 nm Technology Node Based on 2D Materials
Yao, Aijia
Emerging disruptive technologies such as Artificial Intelligence (AI) and 6G communications have driven stringent demands for hardware components that enable faster and more energy-efficient computation. With the diminishing returns of traditional silicon-based scaling and the escalating complexity of advanced semiconductor processes, two-dimensional (2D) materials offer promising opportunities when developed through Design-Technology Co-Optimization (DTCO). This thesis presents a comprehensive study of DTCO with a novel framework tailored for 2D material-based electronics that addresses critical challenges in material synthesis, device design, and circuit integration. In this framework, experimental material and device data are integrated into the design and optimization of MoS₂-based multichannel transistors (MCTs). With the help of DTCO, we have achieved record performance for double-gate, single-channel MoS₂ transistors as well as the first demonstration of high-performance, functional double channel MoS₂ transistors. Based on the results of MCTs, a Process Design Kit (PDK) is developed to facilitate circuit-level integration. These advancements constitute a promising foundation for the development of next-generation electronics beyond sub-2 nm technology node.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Encoder-Agnostic Learned Temporal Matching for Video Classification</title>
<link href="https://hdl.handle.net/1721.1/158930" rel="alternate"/>
<author>
<name>Ho, Darryl</name>
</author>
<id>https://hdl.handle.net/1721.1/158930</id>
<updated>2025-04-08T04:30:54Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Encoder-Agnostic Learned Temporal Matching for Video Classification
Ho, Darryl
In recent years, large transformer-based video encoder models have greatly advanced state-of-the-art performance on video classification tasks. However, these large models typically process videos by averaging embedding outputs from multiple clips over time to produce fixed-length representations. This approach fails to account for a variety of time-related features, such as variable video durations, chronological order of events, and temporal variance in feature significance. While methods for temporal modeling do exist, they often require significant architectural changes and expensive retraining, making them impractical for off-the-shelf, fine-tuned large encoders. To overcome these limitations, we propose DejaVid, an encoder-agnostic method that enhances model performance without the need for retraining or altering the architecture. Our framework converts a video into a variable-length temporal sequence of embeddings, which we call a multivariate time series (MTS). An MTS naturally preserves temporal order and accommodates variable video durations. We then learn per-timestep, per-feature weights over the encoded MTS frames, allowing us to account for variations in feature importance over time. We introduce a new neural network architecture inspired by traditional time series alignment algorithms for this learning task. Our evaluation demonstrates that DejaVid substantially improves the performance of a state-of-the-art large encoder, achieving leading Top-1 accuracy of 77.2% on Something-Something V2, 89.1% on Kinetics-400, and 88.6% on HMDB51, while adding fewer than 1.8% additional learnable parameters and requiring less than 3 hours of training time.
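As a rough sketch of the alignment idea (not the authors' exact architecture), a class prototype MTS with per-timestep, per-feature weights can be compared to a video's embedding sequence with a DTW-style dynamic program:

```python
# Weighted DTW distance between a video's embedding time series and a learned
# class prototype; `w` holds assumed nonnegative per-timestep, per-feature weights.
import numpy as np

def weighted_dtw(seq, proto, w):
    """seq: (T, d) embeddings; proto, w: (S, d) prototype and weights."""
    T, S = len(seq), len(proto)
    D = np.full((T + 1, S + 1), np.inf)
    D[0, 0] = 0.0
    for t in range(1, T + 1):
        for s in range(1, S + 1):
            cost = np.sum(w[s - 1] * (seq[t - 1] - proto[s - 1]) ** 2)
            D[t, s] = cost + min(D[t - 1, s], D[t, s - 1], D[t - 1, s - 1])
    return D[T, S]
```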
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reducing Transformer Key-Value Cache Size with Cross-Layer Attention</title>
<link href="https://hdl.handle.net/1721.1/158929" rel="alternate"/>
<author>
<name>Brandon, William</name>
</author>
<id>https://hdl.handle.net/1721.1/158929</id>
<updated>2025-04-07T08:57:54Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Reducing Transformer Key-Value Cache Size with Cross-Layer Attention
Brandon, William
Key-value (KV) caching plays an essential role in accelerating decoding for transformer-based autoregressive large language models (LLMs). However, the amount of memory required to store the KV cache can become prohibitive at long sequence lengths and large batch sizes. Since the invention of the transformer, two of the most effective interventions discovered for reducing the size of the KV cache have been Multi-Query Attention (MQA) and its generalization, Grouped-Query Attention (GQA). MQA and GQA both modify the design of the attention block so that multiple query heads can share a single key/value head, reducing the number of distinct key/value heads by a large factor while only minimally degrading accuracy. In this work, we show that it is possible to take Multi-Query Attention a step further by also sharing key and value heads between adjacent layers, yielding a new attention design we call Cross-Layer Attention (CLA). With CLA, we find that it is possible to reduce the size of the KV cache by another factor of two while maintaining nearly the same accuracy as unmodified MQA. In experiments training 1B- and 3B-parameter models from scratch, we demonstrate that CLA provides a Pareto improvement over the memory/accuracy tradeoffs which are possible with traditional MQA, potentially enabling future models to operate at longer sequence lengths and larger batch sizes than would otherwise be possible.
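A minimal sketch of the sharing pattern (PyTorch-style, with assumed module names; a sharing factor of 2 between adjacent layers is shown, as in the simplest configuration):

```python
# Adjacent transformer layers reuse one KV projection, so only layers that own
# a projection contribute entries to the KV cache.
import torch.nn as nn

class CLABlock(nn.Module):
    def __init__(self, d_model, kv_proj=None):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        # Reuse the previous layer's KV projection when one is passed in.
        self.kv_proj = kv_proj if kv_proj is not None else nn.Linear(d_model, 2 * d_model)

def build_layers(n_layers, d_model):
    layers = []
    for i in range(n_layers):
        shared = layers[i - 1].kv_proj if i % 2 == 1 else None
        layers.append(CLABlock(d_model, kv_proj=shared))
    return layers
```

Only every other layer writes keys and values into the cache, which is where the factor-of-two memory saving comes from.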
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Precision Pointing for the CubeSat Laser Infrared CrosslinK (CLICK) Mission</title>
<link href="https://hdl.handle.net/1721.1/158924" rel="alternate"/>
<author>
<name>Forester, Paige</name>
</author>
<id>https://hdl.handle.net/1721.1/158924</id>
<updated>2025-04-07T08:48:19Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Precision Pointing for the CubeSat Laser Infrared CrosslinK (CLICK) Mission
Forester, Paige
Advances in Free Space Optical Communications have led to numerous missions that have demonstrated optical space-to-ground links; however, fewer missions have demonstrated optical space-to-space links. NASA’s CubeSat Laser Infrared CrosslinK (CLICK) Mission aims to be the first to demonstrate optical space-to-space communication on a CubeSat scale using Commercial Off the Shelf (COTS) components that include a micro electromechanical system (MEMS) fine steering mirror for precision pointing. The first phase of the CLICK mission, CLICK-A, launched in September 2022 to demonstrate optical downlink. The second phase, CLICK-B/C, aims to demonstrate optical crosslink between two spacecraft: CLICK-B and CLICK-C. Optical crosslink communication requires precision pointing from both spacecraft to close the link. The development of the CLICK-B/C Fine Pointing, Acquisition, and Tracking (PAT) system is presented in this thesis, as well as the analysis of disturbance rejection and evaluation of expected spacecraft disturbances. This thesis also assesses the slewing required for differential drag control, which is used to maintain the crosslink range between the two CubeSats. Preliminary results are presented from the CLICK-B/C flight hardware integration and testing phases, as well as findings from simulation of the lasercom payload’s performance.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Solving Larger Games: Designing New Algorithms Adaptable to Deep Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/158922" rel="alternate"/>
<author>
<name>Liu, Mingyang</name>
</author>
<id>https://hdl.handle.net/1721.1/158922</id>
<updated>2025-04-07T09:27:19Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">On Solving Larger Games: Designing New Algorithms Adaptable to Deep Reinforcement Learning
Liu, Mingyang
In this thesis, we explore, from a theoretical perspective, the design of algorithms capable of handling large games whose state space is too large to store strategies in a tabular format. Specifically, we focus on developing algorithms suitable for deep reinforcement learning in two-player zero-sum extensive-form games. There are three critical properties for effective deep multi-agent reinforcement learning: (last/best) iterate convergence, efficient utilization of stochastic trajectory feedback, and theoretically sound avoidance of importance sampling corrections. Chapter 3 introduces Regularized Optimistic Mirror Descent (Reg-OMD), which provably converges to the Nash equilibrium (NE) linearly in last-iterate. Chapter 4 shows that algorithms based on regret decomposition enjoy best-iterate convergence to the NE. Chapter 5 proposes Q-value based Regret Minimization (QFR), which achieves all three properties simultaneously.
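For flavor, a single optimistic mirror descent step over the probability simplex looks like the following (a generic sketch of the method family only, not the thesis's Reg-OMD or QFR algorithms):

```python
# Entropy-regularized OMD with the optimistic gradient 2*grad - last_grad.
import numpy as np

def omd_step(x, grad, last_grad, eta=0.1):
    """x: current strategy on the simplex; grad, last_grad: loss gradients."""
    logits = np.log(x) - eta * (2.0 * grad - last_grad)
    w = np.exp(logits - logits.max())          # stabilized softmax
    return w / w.sum()
```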
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Polymer Deconstructability and Recyclability via Introduction of Cleavable Si−O Bonds</title>
<link href="https://hdl.handle.net/1721.1/158921" rel="alternate"/>
<author>
<name>Johnson, Alayna</name>
</author>
<id>https://hdl.handle.net/1721.1/158921</id>
<updated>2025-04-07T08:46:20Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Polymer Deconstructability and Recyclability via Introduction of Cleavable Si−O Bonds
Johnson, Alayna
The synthesis of a new polysilylether via entropy-driven ring-opening metathesis polymerization (ED-ROMP) of cyclic bifunctional silyl ether-based monomers is reported. High molecular weight polymers (up to 100 k) with narrow dispersities were achieved at modest temperature. These polymers display excellent thermal stability and ultra-low T_g (–88 ºC). The polymers are both rapidly deconstructable via the cleavage of the labile silicon-oxygen linkages with either acid or fluoride triggers and partially depolymerizable by the addition of exogenous metathesis catalyst. Analysis of the deconstructed polymer products provided insight into the polymer microstructure, showing that the ED-ROMP process was regiorandom. Altogether, this work offers a new class of deconstructable polymers with a range of potential applications. Incorporation of these bifunctional silyl ether-based monomers into copolymers could aid in the triggered deconstruction of otherwise nondegradable hydrocarbon backbones.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling and Analysis of Voltage Feasibility Problems for&#13;
Cost-Effective Microgrids</title>
<link href="https://hdl.handle.net/1721.1/158920" rel="alternate"/>
<author>
<name>Jones, Aaron Jerome</name>
</author>
<id>https://hdl.handle.net/1721.1/158920</id>
<updated>2025-04-07T09:12:49Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Modeling and Analysis of Voltage Feasibility Problems for&#13;
Cost-Effective Microgrids
Jones, Aaron Jerome
Global efforts to mitigate climate change have led to a significant increase in the integration of renewable energy resources into the electricity grid. This transition not only necessitates the adoption of renewable energy technologies but also requires rethinking and redesigning existing power grid infrastructures to accommodate the unique characteristics of these resources. This research focuses on modeling techniques that can assist in analyzing the feasibility of microgrid topologies. Microgrids have emerged as a flexible and efficient approach to implementing novel grid topologies that support higher levels of renewable energy penetration. They also support the integration of distributed energy resources (DERs), such as photovoltaic (PV) systems, thereby promoting a more sustainable and efficient energy grid design. This thesis utilized sanitized load and system topology data from a real-world microgrid located in Illinois to test the feasibility of increasing the number of PV units the system can utilize for reactive power support. &#13;
&#13;
In these systems, ensuring feasibility is a crucial concern due to power mismatches caused by the inherent variability of renewable resources. This work focuses on maintaining voltage within the constraints while increasing PV penetration on the system. We simulate the implementation of microgrids with PV generation using Alternating Current Optimal Power Flow (AC-OPF). The results of this thesis show the limits of feasible reactive power support from distributed PV units on a utility-disconnected microgrid, based on our voltage constraints. The study shows that there exists a limit to the reactive power support provided by distributed PV units; beyond this limit, we see voltage collapse, which manifests as infeasibility of the power flow solutions. To avoid this problem, we optimize the reactive power support from PV so that a solution exists within the constraints. The practical lesson of this result is that operators should use AC-OPF to compensate for reactive power using PV. Future research will explore the challenges and opportunities associated with the widespread adoption of microgrids, such as dynamic voltage instabilities that can occur with high levels of PV integration and complexities in inverter control strategies.
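The feasibility sweep described above can be sketched as a simple loop around an AC-OPF solver (solve_ac_opf is a hypothetical helper returning whether a voltage-feasible solution was found):

```python
# Increase the number of PV units providing reactive power support until the
# AC-OPF solver fails to find a voltage-feasible solution.
def max_feasible_pv(network, pv_candidates, solve_ac_opf):
    feasible = 0
    for n in range(1, len(pv_candidates) + 1):
        ok = solve_ac_opf(network, active_pv=pv_candidates[:n])  # True if converged
        if not ok:
            break
        feasible = n
    return feasible
```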
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Engineering of Protected Superconducting&#13;
Qubits</title>
<link href="https://hdl.handle.net/1721.1/158919" rel="alternate"/>
<author>
<name>Kim, Junghyun</name>
</author>
<id>https://hdl.handle.net/1721.1/158919</id>
<updated>2025-04-07T09:16:28Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design and Engineering of Protected Superconducting&#13;
Qubits
Kim, Junghyun
Building extensible quantum information processors becomes increasingly promising as the qubits exhibit longer coherence times. To this end, realizing protected qubits, whose Hamiltonians are inherently resilient to both relaxation and dephasing, has attracted strong interest. In this thesis, we primarily explore the soft 0−π qubit, a leading candidate for implementing superconducting qubit protection with current fabrication techniques. To enhance protection, the soft 0−π qubit requires its two major modes, the charge mode (θ) and the flux mode (ϕ), to satisfy an asymmetric condition: maximizing charge-mode capacitance while minimizing flux-mode capacitance. The main challenge is therefore reducing stray capacitance from the large charge-mode capacitor, which hinders the reduction of flux-mode capacitance. To address this challenge, we depart from the conventional coplanar interdigitated capacitor design and use parallel-plate capacitors (PPC) with small footprints, achieving the desired large charge-mode capacitance while reducing unwanted stray capacitances. By reducing the capacitor area by a factor of approximately 50, the PPC 0−π qubit has achieved an estimated Eᵠ_C/Eᶿ_C ratio of 30–50, placing it among the highest reported. Additionally, we propose enhanced mode-selective control of the soft 0−π qubit using these parallel-plate capacitors. Finally, we discuss the remaining challenges of the soft 0−π qubit and introduce alternative parameter regimes that can potentially improve Raman-based control and qubit readout.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating the Discovery of Novel Metal Organic Chalcogenolates: A Computational and Machine Learning-Driven Approach</title>
<link href="https://hdl.handle.net/1721.1/158916" rel="alternate"/>
<author>
<name>Ladera, Adriana J.</name>
</author>
<id>https://hdl.handle.net/1721.1/158916</id>
<updated>2025-04-07T08:38:53Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Accelerating the Discovery of Novel Metal Organic Chalcogenolates: A Computational and Machine Learning-Driven Approach
Ladera, Adriana J.
Metal Organic Chalcogenolates (MOChas) are a class of robust, self-assembling, and hybrid materials featuring inorganic metalo-chalcogen frameworks that are scaffolded by organic ligands. These low-dimensional structures exhibit tunable optoelectronic properties, making them promising candidates for various applications, including optical sensors and nanotechnology. This tunable relationship between MOCha structural arrangements and targeted properties opens up a vast yet challenging search space for novel MOCha structures. Density Functional Theory (DFT) can predict properties of materials with good accuracy, making it a powerful choice for even hypothetical materials. However, the discovery of novel MOChas structures is constrained by poor scalability of DFT relaxation times for large systems and a lack of high-throughput design methods that can capture the complex geometries of MOChas. In this work, we employ DFT calculations to investigate the energetic and electronic properties of various MOChas, and provide insight into the optical behavior and kinetic favorability of such structures. To address the computational bottlenecks of high-throughput design and DFT workloads, we discuss the use of machine-learned interatomic potentials and various generative models that can enable rapid prototyping of novel MOCha structures.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Dual Extruder Biomaterial 3D Printer</title>
<link href="https://hdl.handle.net/1721.1/158914" rel="alternate"/>
<author>
<name>de Alva, Jesse P.</name>
</author>
<id>https://hdl.handle.net/1721.1/158914</id>
<updated>2025-04-07T08:24:35Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Development of Dual Extruder Biomaterial 3D Printer
de Alva, Jesse P.
This research presents the design and fabrication of a novel dual-extruder biotic 3D printer for the precise deposition of natural biocomposites using organic materials such as pectin, chitosan, and cellulose. Unlike traditional FDM printers that rely on thermoplastic extrusion, this printer employs a syringe-based mechanical extruder capable of depositing viscous biomaterial hydrogels. The integration of a first-of-its-kind dual-extruder system enables the fabrication of multi-material prints and the exploration of biomaterial composites and complex geometric structures, thereby advancing sustainable, bio-inspired manufacturing.&#13;
This thesis emphasizes the machine engineering aspects of the printer's development, including project motivation, systematic design methodology, component design and fabrication, testing, and exploration of future work. Notable features of the system include user-friendly operation for non-experts, open-source accessibility, and compatibility with a wide range of biomaterials. By addressing existing limitations in biomaterial 3D printing technology, this work provides a robust platform to support future research in biomaterials, sustainable additive manufacturing, and bio-inspired design. Furthermore, the open-source nature of the printer fosters innovation and collaboration, accelerating the adoption of sustainable materials and manufacturing methods.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Annealing Techniques for Color Center Formation</title>
<link href="https://hdl.handle.net/1721.1/158913" rel="alternate"/>
<author>
<name>Christen, Ian</name>
</author>
<id>https://hdl.handle.net/1721.1/158913</id>
<updated>2025-04-08T04:31:10Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Annealing Techniques for Color Center Formation
Christen, Ian
Color centers in diamond have emerged as leading atom-like quantum systems for applications spanning from quantum repeaters to sensors. However, the optical and spin properties of engineered diamond color centers are limited by crystal damage produced during ion implantation, crystal irradiation, and annealing. In this thesis, we develop advanced material processing methods and characterization techniques to address critical challenges in the formation of high-performance diamond color centers to advance towards the efficient creation of desired dopant-vacancy centers with minimal formation of deleterious multi-vacancy clusters.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Responsible Computational Text Generation: AI Content Classification and Policy Framework</title>
<link href="https://hdl.handle.net/1721.1/158904" rel="alternate"/>
<author>
<name>Jung, Minseok</name>
</author>
<id>https://hdl.handle.net/1721.1/158904</id>
<updated>2025-04-07T09:23:04Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Responsible Computational Text Generation: AI Content Classification and Policy Framework
Jung, Minseok
Recent advances in generative AI, particularly in producing human-like text, have blurred the lines between human and AI authorship. Since these AI tools rely on stochastic generation rather than traditional scientific reasoning, concerns about misinformation and reliability have emerged, highlighting the need for AI detection tools and policy guidelines. In response, this study proposes a dual approach: (1) the application of adaptive thresholds to improve the use of AI text detectors and (2) an AI policy framework based on user patterns and opinions. To enhance detector performance, we present a threshold optimization algorithm that adapts to diverse subgroups, such as those based on text lengths and stylistic features, thereby reducing discrepancies in error rates. The commonly used method relies on a single universal threshold, which has led to inconsistent results across various text types because of different probability distributions. Our approach addresses these shortcomings by tailoring thresholds to the specific characteristics of each group. In parallel, the study examines the pressing need for comprehensive AI guidelines, given the rise of misinformation and academic integrity issues. While a few institutions have introduced comprehensive policies, many institutions lack approaches grounded in user patterns and opinions. To remedy this problem, we propose a policy framework based on a user study. The findings of this research provide practical solutions for more effective AI text classification and a principled framework for AI writing policies.
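A small sketch of per-subgroup threshold selection on held-out detector scores (the grouping rule and balanced-error criterion are illustrative assumptions, not the study's exact algorithm):

```python
# For each subgroup (e.g., a text-length bucket), pick the detector score
# threshold that balances error rates, instead of one universal cutoff.
import numpy as np

def fit_group_thresholds(scores, labels, groups):
    grid = np.linspace(0.0, 1.0, 101)
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    thresholds = {}
    for g in np.unique(groups):
        s, y = scores[groups == g], labels[groups == g]
        def balanced_error(t):
            pred = s >= t
            fpr = pred[y == 0].mean() if np.any(y == 0) else 0.0
            fnr = (~pred[y == 1]).mean() if np.any(y == 1) else 0.0
            return 0.5 * (fpr + fnr)
        thresholds[g] = min(grid, key=balanced_error)
    return thresholds
```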
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncovering the link between twin-twin interactions and damage nucleation in an (α+β) Ti alloy</title>
<link href="https://hdl.handle.net/1721.1/158903" rel="alternate"/>
<author>
<name>Cooper, Megan F. L.</name>
</author>
<id>https://hdl.handle.net/1721.1/158903</id>
<updated>2025-04-07T08:32:41Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Uncovering the link between twin-twin interactions and damage nucleation in an (α+β) Ti alloy
Cooper, Megan F. L.
Recently, an (α+β) Ti alloy was developed with an outstanding combination of both high strength and high ductility; however, the plasticity micromechanisms that lead to damage nucleation in this alloy had not yet been investigated in detail. In this work, post-mortem analysis and an in-situ SEM-EBSD tensile experiment were conducted to determine where damage was nucleating most frequently in the microstructure, and which deformation modes were associated with damage nucleation. Damage within primary α grains was found to be the most common, with most of these damage incidents occurring along {101̄2} twin-twin boundaries with a ~60° misorientation. The {101̄2} twinning mode is only activated in the localized neck, and twin activation is strongly dependent on initial crystallographic texture. The twinned domains are rotated such that prismatic slip is easier to activate, but prismatic slip transfer is unlikely across ~60° twin-twin boundaries due to geometric incompatibilities. The in-situ test revealed that a crack formed along a ~60° twin-twin boundary where slip was blocked. These findings provide new insights into how twin-twin interactions in Ti alloys can lead to damage nucleation and impact overall ductility.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Concepts for High-Acceleration Linear Actuators&#13;
for Precision Motion</title>
<link href="https://hdl.handle.net/1721.1/158901" rel="alternate"/>
<author>
<name>Kim, Adam K.</name>
</author>
<id>https://hdl.handle.net/1721.1/158901</id>
<updated>2025-04-08T04:28:08Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design Concepts for High-Acceleration Linear Actuators&#13;
for Precision Motion
Kim, Adam K.
Advances in semiconductor photolithography scanners have made it possible to produce smaller, more affordable chips with higher throughput. Some of the key lithographic scanner components supporting these advancements are electromagnetic actuators responsible for positioning the long-stroke (LS) and short-stroke (SS) stages of the reticle stage in its scan direction. Such actuators need to provide the highest thrust at the deceleration and reacceleration phases when the stages turn around at the ends of the scanning trajectory. Thus, enhancing their acceleration capability and force output is essential for boosting chip throughput. However, the improved performance may demand large current densities that are unsustainable in terms of the associated power dissipation generated by ohmic losses in the copper coils. In this thesis, we continued a previous study conducted in our lab that explored the use of mechanical contact forces managed by a piezoelectric stack actuator (PEA). In this configuration, intermittent contact by the PEA can be used to apply forces to decelerate and reaccelerate the SS stage with respect to the LS stage during turnaround events. With such force assist, the non-contact precision actuators responsible for positioning the SS stage with respect to the LS stage no longer need to generate large thrusts for the deceleration and reacceleration. As a result, we can in principle decrease the weight and power loss of the SS-stage precision actuators, which thus lowers the thrust requirements for the LS-stage actuators responsible for accelerating both the LS and SS stages, resulting in lowered power consumption. Using the single degree-of-freedom experimental setup previously built in our lab, we conducted several characterization experiments to develop a PEA position feedback controller augmented by a hysteresis-compensated feedforward trajectory to shape the contact compression and forces.
We find that introducing a viscoelastic contact interface is essential for stabilizing the PEA controller and slowing the contact dynamics to remain within the controller bandwidth. Our feedforward trajectory successfully brings a 0.84 kg mass moving towards the PEA with an initial speed of 60 mm/s to zero velocity in approximately 1.5 ms using 36 µm of PEA stroke length. These results demonstrate the feasibility of using PEAs as mechanical assist devices for high-acceleration turnaround events in lithography tools.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forecasting the lift of a randomly maneuvering airfoil&#13;
under dynamic stall conditions, Re ∼ 10⁵</title>
<link href="https://hdl.handle.net/1721.1/158900" rel="alternate"/>
<author>
<name>Kim, Donghyun</name>
</author>
<id>https://hdl.handle.net/1721.1/158900</id>
<updated>2025-04-07T08:28:27Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Forecasting the lift of a randomly maneuvering airfoil&#13;
under dynamic stall conditions, Re ∼ 10⁵
Kim, Donghyun
Dynamic stall is the abrupt flow separation from airfoils rapidly changing their orientation. This phenomenon, characterized by a delayed stall followed by a sharp drop in lift, has prompted efforts to prevent or delay it. This study aims to predict the lift of an airfoil randomly maneuvering under dynamic stall conditions by utilizing sparse surface pressure measurements, which we believe can maximize the effectiveness of various dynamic stall suppression techniques. Using data from large eddy simulations, we demonstrate that a long short-term memory network, fed with raw surface pressures, delivers accurate predictions. Also, a new method introduced here, IdDM, conclusively links the characteristic frequency range of pressure fluctuations that emerges during the dynamic stall to the chord-lengthscale vortex dynamics. However, further analysis suggests that the forecast predominantly relies on the lower frequency components tied to the airfoil motion, possibly because the vortex dynamics are dependent on and sensitive to the airfoil motion. Meanwhile, specific sensor locations are proven to be more informative than others in this random, unsteady flow, and we show that optimal sensor placement can be quickly determined using mutual information alone. It reveals that two pressure sensors positioned near the leading edge, one on each side of the airfoil, capture most of the information needed to predict lift. The lift can be predicted with sparse sensors because surface pressures are strongly correlated across the airfoil, with large-scale flow structures dominating the forces.
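The sensor-selection step can be approximated in a few lines using mutual information estimates (illustrative variable names; a standard scikit-learn estimator stands in for the thesis's analysis):

```python
# Rank candidate pressure sensors by estimated mutual information with lift.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def rank_sensors(pressures, lift, top_k=2):
    """pressures: (N, n_sensors) samples; lift: (N,) target signal."""
    mi = mutual_info_regression(pressures, lift)
    order = np.argsort(mi)[::-1]               # most informative sensors first
    return order[:top_k], mi[order[:top_k]]
```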
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Visual Intelligence from Photons to Action</title>
<link href="https://hdl.handle.net/1721.1/158899" rel="alternate"/>
<author>
<name>Young, Aaron</name>
</author>
<id>https://hdl.handle.net/1721.1/158899</id>
<updated>2025-04-08T04:20:08Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Designing Visual Intelligence from Photons to Action
Young, Aaron
For embodied agents to perceive and effectively act within their environment, they must sense the world around them and translate this information into meaningful and safe actions; a process fundamental to both biological and human-engineered systems. Nature has evolved highly attuned visual systems, resulting in diverse and efficient eyes capable of facilitating complex behaviors. Conversely, roboticists have engineered sophisticated cameras and sensors, enabling robots to perform tasks beyond the capabilities of natural systems. This thesis explores the design of visual intelligence by integrating insights from both biology and engineering in two complementary parts. In Part I, we computationally recreate the evolution of vision within simulated embodied agents. By evolving the physical and neural aspects of vision in simulation - and training these visually-capable agents with deep reinforcement learning - we demonstrate that task-specific environmental pressures lead to distinct eye morphologies and behaviors, mirroring observations in biological evolution. This in silico approach enables us to investigate the fundamental principles underlying the emergence of animal eyes and provides a framework for exploring novel sensor designs subject to both biological (e.g., survival) and engineering constraints (e.g., manufacturability). In Part II, we leverage visual cues not typically used in nature (i.e., active illumination and multi-bounce light) to demonstrate enhanced robotic navigation via non-line-of-sight imaging. Using single-photon LiDARs, we capture the temporal propagation of individual photons, enabling the detection of objects around corners. This sensing capability allows us to develop robots that effectively anticipate and avoid hidden obstacles, reducing navigation time by 50% and overall trajectory length by 33%. Together, these works demonstrate how the synthesis of biologically-inspired design principles with advanced sensing modalities can enhance embodied agents' capabilities, while providing insights into both natural vision evolution and robotic perception.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Archean origin of assimilatory sulfate metabolisms provides novel insight into redox conditions of early Earth environments</title>
<link href="https://hdl.handle.net/1721.1/158898" rel="alternate"/>
<author>
<name>Payette, Jack G.</name>
</author>
<id>https://hdl.handle.net/1721.1/158898</id>
<updated>2025-04-07T09:26:51Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">The Archean origin of assimilatory sulfate metabolisms provides novel insight into redox conditions of early Earth environments
Payette, Jack G.
Dissimilatory sulfur metabolisms, which record differing biological isotopic fractionations, are well-studied, important components of sulfur cycling (Mateos et al., 2023). Assimilatory sulfur metabolisms and genes across life provide a complementary window into sulfur biogeochemistry, with individual pathways having specific isotopic fractionations acting on distinct redox states (e.g. sulfate, sulfide, sulfite) for anabolism (Liu et al., 2012). One assimilation pathway starts with sulfate adenylyltransferase (sat/ATP sulfurylase) catalyzing a reaction of adenosine triphosphate (ATP) and sulfate (SO₄²⁻) to produce adenosine 5’-phosphosulfate (APS), followed by the incorporation of more reduced sulfur into biomolecules. This sat/ATP sulfurylase enzyme represents the first step required by life to incorporate sulfate and informs our understanding of biological processes performing this fundamental chemical reaction. A phylogenetic and molecular clock analysis of the sat/ATP sulfurylase protein family (E.C. 2.7.7.4) was performed to determine the age of sulfate assimilation proteins. The extant diversity of sat proteins was estimated to have a last common ancestor ~3.24 Ga (95% CI 3.52–3.06 Ga) using relaxed molecular clocks calibrated with eukaryotic and cyanobacterial age ranges from previously published fossil-calibrated investigations. These results suggest sulfate cycling in Paleoarchean environments, despite extensive evidence of low marine sulfate concentrations (Crowe &amp; Canfield et al., 2014). Archean sulfate biogeochemical cycling could result from microbial sulfur oxidation, and sources could include abiotic oxidation of volcanic sulfur, hydrothermal processes, or pyrite (Canfield, 2001; Lyons et al., 2024). This phylogenomic evidence of sulfate during Archean times provides an independent complement to geochemical records and indicates that sulfur redox chemistry during the Archean was likely more complex than previously described.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Falling isn't the End: Reimagining Demolition as a Creative Practice</title>
<link href="https://hdl.handle.net/1721.1/158896" rel="alternate"/>
<author>
<name>Lee, So Jung</name>
</author>
<id>https://hdl.handle.net/1721.1/158896</id>
<updated>2025-04-08T04:12:35Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Falling isn't the End: Reimagining Demolition as a Creative Practice
Lee, So Jung
This thesis investigates resilience not as an endpoint but as a condition of continuous transformation. It critiques the shortcomings of current architectural discourse in addressing climate disasters, waste, and carbon footprints. While these crises are widely acknowledged, architecture often operates within restrictive economic, legal, and cultural systems, relegating resilient design to the periphery or diminishing its potential impact.&#13;
Collapse, traditionally perceived as failure, is reimagined here as a generative moment—an opportunity to rethink materials, systems, and the narratives that shape them. Central to this exploration is the concept of assembly, where materials are designed with deliberate life spans—some transient, others enduring. By anticipating the gaps and shifts that arise when permanence is no longer assumed, this thesis proposes new possibilities for adaptive design and architectural resilience within the evolving rhythms of life.&#13;
To articulate these ideas, the thesis employs speculative scenarios and temporal media. These tools position architecture as a system in flux, evolving in tandem with societal and environmental changes. Through narrative-driven methodologies, this work seeks to expand architectural discourse, prompting reflection on the discipline’s foundational assumptions while connecting it to broader cultural and systemic challenges.&#13;
Ultimately, this thesis redefines resilience—not as resistance or mere survival but as a dynamic and imaginative practice. It advocates for architecture’s leadership within the broader zeitgeist of sustainability, transforming pressing global challenges into opportunities for creative agency and systemic reinvention.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>American (Ise): On the Lifecycle of Stadiums in the United States</title>
<link href="https://hdl.handle.net/1721.1/158895" rel="alternate"/>
<author>
<name>Wang-Xu, Mackinley</name>
</author>
<id>https://hdl.handle.net/1721.1/158895</id>
<updated>2025-04-08T04:40:25Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">American (Ise): On the Lifecycle of Stadiums in the United States
Wang-Xu, Mackinley
When the Kingdome in Seattle was completed in 1976, it was celebrated as a marvel of modern engineering, expected to last for centuries. Yet, in an ironic twist, it was demolished by implosion in 2000, surviving only twenty-four years. The Kingdome epitomizes the issue of short lifespans that has plagued American stadiums since the post-war era. A broad survey of these structures reveals an average lifespan of just three decades—a startlingly brief tenure for buildings of their scale and significance. These stadiums also follow a distinctive model of renewal. Similar to the Shikinen Sengu ritual at the Ise Shrine, a new stadium is often constructed adjacent to its predecessor. However, unlike Ise, where materials from the old shrine are reused and disseminated throughout Japan’s network of shrines, old stadiums are almost always demolished and discarded. This thesis seeks to superimpose Ise as a model onto American stadiums, envisioning an architecture that embraces both impermanence and longevity through circularity. Investigations into the barriers to circularity specific to stadiums serve as the foundation for design proposals, spanning scales from the detail to the site. The project ultimately imagines a stadium in a constant process of disassembly and renewal, where its spatial and programmatic potential challenge paradigms of completeness. In the context of a climate crisis demanding waste reduction, and for a typology notorious for its excess, stadiums can learn to do more with less.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building Insurance</title>
<link href="https://hdl.handle.net/1721.1/158891" rel="alternate"/>
<author>
<name>Janson, Charles Perot</name>
</author>
<id>https://hdl.handle.net/1721.1/158891</id>
<updated>2025-04-07T08:25:53Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Building Insurance
Janson, Charles Perot
Over the past 350 years, the building insurance industry has been shaped by a series of major urban fires, each incrementally standardizing risk assessment and property valuation as financial products of risk management. In recent years, however, climate change has introduced unprecedented weather events that challenge the fine-tuned models of insurance; in particular, the rise of wildfires in California and the Pacific Northwest has led to local withdrawal of insurance altogether. Within these contexts, the spatial conditions inherited from a highly insured past continually sustain separation, individual prosperity, and standard assemblies as inheritances of expansionist agendas. At this juncture of system failure, this thesis asks: how can architecture rethink more cooperative forms of building and living together that localize risk sharing, responsibility, and stewardship? While wildfire defense strategies put forth by insurance companies and building codes armor the stick-frame American single-family home and its aesthetic traditions, this thesis proposes a new building typology entirely: a neighborly cooperative of adjoined homes. Under a single roof, property lines are transformed into sites of mutual stewardship, manifesting insurance no longer as an abstract response to risk, but as a series of social and spatial relationships between neighbors.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Chongqing Tiandi Project: An Asset Management Perspective</title>
<link href="https://hdl.handle.net/1721.1/158890" rel="alternate"/>
<author>
<name>Yang, Junsi</name>
</author>
<id>https://hdl.handle.net/1721.1/158890</id>
<updated>2025-04-08T04:31:31Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Evaluating Chongqing Tiandi Project: An Asset Management Perspective
Yang, Junsi
This thesis uses the Chongqing Tiandi project as a case study to analyze the entire process of development and asset management for large-scale urban renewal projects in China's second-tier cities. It focuses on the motivations and outcomes of Shui On Land's transition from an asset-heavy to an asset-light model. Based on theoretical analysis (Chapter 2), corporate-level financial analysis (Chapter 3), and project-level in-depth studies and interviews (Chapter 4), the thesis explores the logic and impact of this strategic transformation from multiple perspectives. The theoretical analysis summarizes real estate lifecycle management theory, portfolio theory, and corporate strategic transformation theory, providing a framework to examine Shui On Land's strategic decisions. The financial analysis reveals that, from 2015 to 2017, Shui On Land faced significant financial pressure with high debt ratios and cash flow constraints, necessitating systematic asset disposals. While the company disposed of multiple assets during this period, Chongqing Tiandi's 79.2% equity disposal was particularly strategic due to its position as a high-risk, low-return asset within the company's portfolio. The project-level analysis and interviews demonstrate that replicating successful development models from first-tier cities in second-tier markets faces unique challenges. In Chongqing Tiandi's case, these challenges manifested in multiple ways: limited residential price premiums due to local land supply policies, substantial investment requirements for super high-rise developments exceeding $1 billion, and persistently low office rental rates in the local market. These factors compromised the project's financial self-sustainability and made it particularly vulnerable in Shui On's portfolio, especially when compared to projects in other second-tier cities like Wuhan. The development and subsequent equity sale of Chongqing Tiandi not only provided essential financial support for Shui On Land but also reflected a strategic decision to divest from a project where market conditions created both immediate challenges and future uncertainties. This research provides valuable references for the development of large-scale projects in China's second-tier cities, emphasizing the need for developers to utilize funds efficiently, adapt flexibly to market changes, and focus on achieving long-term value. These insights hold significant implications for sustainable development in complex market environments.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cooling Innovation and Circularity: Addressing Water Stress in the Age of AI-Driven Data Centers</title>
<link href="https://hdl.handle.net/1721.1/158889" rel="alternate"/>
<author>
<name>Kseibati, Reem</name>
</author>
<id>https://hdl.handle.net/1721.1/158889</id>
<updated>2025-04-07T09:18:33Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Cooling Innovation and Circularity: Addressing Water Stress in the Age of AI-Driven Data Centers
Kseibati, Reem
This thesis examines the growing demand for data centers and the critical challenges posed by their water and energy consumption. As artificial intelligence (AI) technologies expand, the infrastructure supporting these systems has become essential. The study highlights the projected increase in data center capacity driven by AI workloads and focuses on the impact in water-stressed regions across the United States. Given the resource-intensive nature of data centers, the research explores cooling technologies aimed at reducing environmental impact. Traditional air cooling is compared with innovative liquid and evaporative cooling techniques. Additionally, the thesis promotes circular economy principles, emphasizing resource efficiency, reuse, and regeneration as a pathway to sustainable operations.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Co-Living in Seoul: Addressing Housing Needs and Redefining Rental Market Trends</title>
<link href="https://hdl.handle.net/1721.1/158888" rel="alternate"/>
<author>
<name>Park, Suhyeon</name>
</author>
<id>https://hdl.handle.net/1721.1/158888</id>
<updated>2025-04-07T08:26:11Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Co-Living in Seoul: Addressing Housing Needs and Redefining Rental Market Trends
Park, Suhyeon
Co-living emerged as a novel asset class in the mid-2010s, addressing the housing needs of urban residents affected by rising housing costs, increasing urban migration, and the growing prevalence of single-person households. In South Korea, co-living has gained attention as a viable alternative to traditional housing, driven by unique local dynamics, including the decline of the dominant Jeonse system and a significant shortage of housing tailored to single-person households. With a growing preference for monthly rental systems over the Jeonse systems, both local conglomerates and start-ups have capitalized on the opportunity to offer company-operated co-living spaces. As the market grows, major international investors and global co-living providers have also entered, reflecting a unique market environment where institutionalized housing options are expanding alongside a notable shift in rental transaction systems. In this new era of urban housing, co-living is rapidly expanding and gaining popularity. This thesis seeks to answer the following question: What factors have driven the emergence and growth of the co-living market in Seoul, and what is its growth potential? To address this, it starts with an analysis of market drivers, provider strategies, and regulatory developments, followed by projections of market potential and an assessment of potential threats and mitigation strategies for long-term viability of co-living in Seoul. The goal is to offer insights for co-living providers to optimize their spaces and services. The findings suggest that while co-living addresses unmet housing demand, its long-term success depends on balancing operational efficiency with tenant satisfaction. While these strategies are applicable in other cities, they are particularly critical in Seoul, where the Jeonse system remains a strong and historically preferred alternative. In Seoul, co-living serves a dual mission: introducing an innovative housing model and reshaping the paradigm of the Wolse rental housing system. To succeed, co-living operators must clearly articulate their unique value proposition, addressing both the housing needs of urban residents and the broader evolution of the rental market.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ending Well, Making the Harvest-Paths of Our Values</title>
<link href="https://hdl.handle.net/1721.1/158886" rel="alternate"/>
<author>
<name>Kpodo, Courage Dzidula Kwaku</name>
</author>
<id>https://hdl.handle.net/1721.1/158886</id>
<updated>2025-04-07T09:22:58Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Ending Well, Making the Harvest-Paths of Our Values
Kpodo, Courage Dzidula Kwaku
Any single story shrinks all others. In a place historically cultivated for the cocoa cash crop, this thesis proposes reorienting architectural practice towards a plural valuing of land and its constituent spirits. The journey begins in 2022 with my acquisition of a 99-year lease for a 5-acre land in Ghana. Prior to the conception of an academic proposal, this was to preserve and grow ecological and financial value through time.&#13;
Located on a hill-cluster in the Eastern Region, this place is crucial as the birthplace of Ghana’s cocoa industry, which became the world’s largest exporter by 1911. Spurred by economic and colonial incentives, farmer-settlers acquired and cultivated forest land including the one I presently steward. They forged communities that live on despite a subsequent decline of cocoa production in the region. Five centuries of colonial influence in West Africa reduced a plural landscape into singular extractive narratives, creating place-names like the Gold Coast, renamed Ghana after independence. The capitalist framework of monocultural extraction, one reliant on a colonial government and its land survey department, continues under contemporary African states. Architecture and planning—a practice historically tied to power and capital—remains instrumental in this system, often overlooking other ways of valuing land.&#13;
This thesis confronts the dispositions of an inherited profession by foregrounding the practices and materials of a socio-cultural paradigm. It is epitomized by the tree called Newbouldia laevis (African boundary tree) and its plural meanings in West Africa. It follows a cocoa harvest-path from a community named after a farmer-settler, Yaa-Aso, and ascends the hills, crossing the land limits of 7 farmers. It ends on the land I hold, with a lease ending in CE 2122.&#13;
In July 2024, I led a convocation of the farmers along the path in the defunct cocoa distribution building, toward framing futures based on other values apart from capital. 3 languages were spoken in that gathering - Twi, Anlo-Eʋe and English. It resulted in a 7-foot expansion of the path, and the pacification of a seasonal spirit-stream that crosses it. They set the context for imagining a series of 5 moments, herein recorded, that explore a value system of things spiritual and communal, offered by the transgressions of a widened path and the land I hold at its end.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Sustainable Recommender Systems</title>
<link href="https://hdl.handle.net/1721.1/158881" rel="alternate"/>
<author>
<name>Huang, Lei</name>
</author>
<id>https://hdl.handle.net/1721.1/158881</id>
<updated>2025-04-07T09:14:07Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Designing Sustainable Recommender Systems
Huang, Lei
Recommender systems are widely deployed to serve users with content they like. However, content must be created, and insufficient demand dampens a creator’s production incentive. We argue that the canonical recommender system may not be sustainable if, by promoting the content each user likes the most, it suppresses the creation incentive of less popular but still valuable content. We propose a “sustainable recommender system” solution – subsidize creators with demand according to their “sensitivity,” which measures how easily a creator can be incentivized by demand, and their “contribution,” which measures how important a creator is to users overall. Theoretically, we prove that this algorithm maximizes long-term user utility by internalizing the externality of each user’s choice on other users. Computationally, our main innovation is to estimate creator contribution using computer vision, where we train a deep-learning model to compute how creator distribution affects system-wide user utility. Analyzing data from a large content platform, we show that our algorithm incentivizes valuable creators and sustains long-term user experience.
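One simple reading of the subsidy rule described above, as a hedged Python sketch (the budget, creator values, and proportional-allocation form are assumptions for illustration, not the thesis algorithm):

    import numpy as np

    # Hypothetical per-creator quantities named in the abstract:
    # sensitivity  - how easily demand incentivizes the creator
    # contribution - how important the creator is to users overall
    sensitivity = np.array([0.9, 0.4, 0.7])
    contribution = np.array([0.2, 0.8, 0.5])

    # Allocate a fixed budget of subsidized impressions in proportion
    # to sensitivity * contribution (assumed functional form).
    budget = 10_000
    weights = sensitivity * contribution
    print((budget * weights / weights.sum()).round())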
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empowering Place: Unlocking Value for Investors by Integrating Indigenous Values in Luxury Hospitality</title>
<link href="https://hdl.handle.net/1721.1/158878" rel="alternate"/>
<author>
<name>Peragallo, Nadra Alia</name>
</author>
<id>https://hdl.handle.net/1721.1/158878</id>
<updated>2025-04-07T09:02:36Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Empowering Place: Unlocking Value for Investors by Integrating&#13;
Indigenous Values in Luxury Hospitality
Peragallo, Nadra Alia
The luxury hospitality industry has long been attuned to shifting consumer preferences, particularly as travelers increasingly seek unique, meaningful experiences. In today’s global market, trends centered on personalization, wellness, authenticity, and regeneration—further accelerated in the post-pandemic travel era—present both challenges and opportunities for real estate investors. This shift raises a critical question: How and where can value be unlocked in this evolving landscape?&#13;
&#13;
This thesis explores how real estate investors can maximize value creation in the luxury hospitality sector by leveraging traditional performance metrics alongside a complementary framework designed to uncover underexplored opportunities and enhance collaboration among stakeholder groups. Through the analysis of two case studies—Salterra Resort &amp; Spa in South Caicos, Turks &amp; Caicos Islands, British West Indies, and Puntacana Resort and Club in the Dominican Republic—the study demonstrates the practical application of this framework in tropical, coastal, and island regions, where the interaction between tourism, local communities, and fragile ecosystems is particularly pronounced. By showcasing its success, this research provides adaptable stakeholder rubrics and qualitative system dynamics causal loop diagrams as templates, while broadening the scope for innovation and inspiring further exploration of sustainable, value-driven approaches in luxury hospitality.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CO₂ Capture with Lithium Oxide in Molten Salt Media : A Case Study of CO₂ Capture via Electrochemically Produced Metal Oxide</title>
<link href="https://hdl.handle.net/1721.1/158875" rel="alternate"/>
<author>
<name>Byun, Gi Hyun</name>
</author>
<id>https://hdl.handle.net/1721.1/158875</id>
<updated>2025-04-07T09:19:18Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">CO₂ Capture with Lithium Oxide in Molten Salt Media : A Case Study of CO₂ Capture via Electrochemically Produced Metal Oxide
Byun, Gi Hyun
As the unprecedented temperature rise originating from anthropogenic carbon dioxide (CO₂) emissions intensifies, the development of post-combustion carbon capture technologies has become urgent. Despite its maturity, the conventional thermal-swing process using aqueous amines suffers from significant limitations, including high energy requirements and sorbent degradation. Electrochemical CO₂ capture technologies, which use electrical energy instead of thermal energy, have emerged as an energy-efficient way to capture CO₂. This shift not only improves energy efficiency but also reduces reliance on fossil fuels, further contributing to reductions in CO₂ emissions. This work explored the potential of electrochemical metal oxide formation for CO₂ capture, a promising alternative to amine-based systems due to its exceptional sorbent (i.e., metal oxide) stability. Li₂O in a eutectic mixture of potassium nitrate (KNO₃) and lithium nitrate (LiNO₃) was chosen as a case study due to the relatively well-understood chemistry of the system and the potential synergistic effects between the metal oxide and the molten salt. First, we investigated the synergistic effect of Li₂O in nitrate molten salt via thermal gravimetric analysis. Next, Li₂O produced electrochemically by reduction of oxygen gas was tested as a CO₂ sorbent while investigating the parameters affecting its conversion to lithium carbonate (Li₂CO₃). Through this study, we identified a dissolution model as a crucial pathway for conversion. Lastly, we explored the effect of adding nitrite ion (NO₂⁻) to the molten salt. An irreversible side reaction between NO₂⁻ and CO₂ was confirmed with X-ray diffraction and NOₓ measurements. This thesis demonstrates the feasibility of electrochemical metal oxide-based CO₂ capture, highlighting key considerations in the capture step.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cost Optimized Logistics for Commercial Operations in Low Earth Orbit and Cislunar Space</title>
<link href="https://hdl.handle.net/1721.1/158866" rel="alternate"/>
<author>
<name>Brown, Ireland</name>
</author>
<id>https://hdl.handle.net/1721.1/158866</id>
<updated>2025-04-08T04:38:57Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Cost Optimized Logistics for Commercial Operations in Low Earth Orbit and Cislunar Space
Brown, Ireland
Designing profitable mission and logistics architectures is necessary to establish a profitable commercial market and support a robust space economy. It is the goal of the National Aeronautics and Space Administration (NASA) to establish such an economy in low Earth orbit (LEO) through the implementation of commercial LEO destinations and to commission self-sustaining lunar infrastructure through the Artemis missions. The ISS and the Apollo lunar landers demonstrated the ability to provide safe and reliable habitation, but the cost to support these missions has been on the order of billions of United States Dollars (USD). Minimizing the operational costs of commercial space systems will be required if commercial companies expect to generate a profit from their services. To address this, this thesis derives and demonstrates a manual cost optimization method for space system mission architectures with respect to logistics and system design. In tandem, a computational tool called the Cost model for Space system Operations (COST-O) was developed. The demonstration iterated a logistics and system design vector for two cases: a commercial LEO space station, and a commercial lunar in-situ resource utilization (ISRU) liquid oxygen generation system. These mission architectures were modelled and simulated in SpaceNet, first analyzed for feasibility, and then processed by COST-O. These data were used to make financial forecasts and to analyze cost sensitivity. The results suggest that for a commercial LEO space station, a closed-loop environmental control and life support system (ECLSS), a large stockpile of resources, a reduced resupply cadence, and a combination of tourists and visiting crew would be a profitable architecture at a capacity of at least three paying customers present on the station per day, with an annual operational cost of 1,129,731,710 USD. Profits would be achieved by the end of ten years of steady-state operations at the current market price of 3.12 million USD per crew member per day. Attempts to minimize this cost should first address the cadence of funded astronaut technician flights, as crew launches contribute most to the overall operational cost. Future work should address ways to minimize this, such as reducing the required number of astronaut technicians that must be present at any given time. For a commercial lunar ISRU liquid oxygen generation system, an architecture supporting a closed-loop system, using Starship as the launch and landing vehicle, a prepositioned stockpile of resources at the lunar surface, and a hydrogen reduction agent is most cost-optimal, with an annual operating cost of 19,275,486,559 USD and profitability achieved at the design rate of twenty metric tons of liquid oxygen produced and sold per year. At the current market price of 1.2 million USD per kilogram, the system would be profitable by the end of the first year of steady-state operations. Attempts to minimize this operational cost further should improve the recyclability of the system. Future work should evaluate added robustness to the architecture by delivering multiple systems and should model deliberate cargo packing decisions.
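The quoted revenue and cost figures can be sanity-checked with simple operating-margin arithmetic (a sketch only; capital costs, which drive the multi-year breakeven horizons, are not given in the abstract):

    # Annual operating margins implied by the figures quoted above.
    leo_revenue = 3 * 3.12e6 * 365            # 3 paying customers/day at 3.12M USD each
    leo_cost = 1_129_731_710                  # quoted annual operational cost (USD)
    print(f"LEO margin:   {leo_revenue - leo_cost:,.0f} USD/yr")

    lunar_revenue = 20_000 * 1.2e6            # 20 metric tons of LOX/yr at 1.2M USD per kg
    lunar_cost = 19_275_486_559               # quoted annual operating cost (USD)
    print(f"lunar margin: {lunar_revenue - lunar_cost:,.0f} USD/yr")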
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>For and Beyond the Plaques: Sustainable Certification Adoption and Its Impact on Real Estate Decision-Making in the Boston-Cambridge Market</title>
<link href="https://hdl.handle.net/1721.1/158865" rel="alternate"/>
<author>
<name>Huang, Shenglin</name>
</author>
<id>https://hdl.handle.net/1721.1/158865</id>
<updated>2025-04-08T04:09:19Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">For and Beyond the Plaques: Sustainable Certification Adoption&#13;
 and Its Impact on Real Estate Decision-Making in the Boston-Cambridge Market
Huang, Shenglin
As demand for green and healthy buildings grows, real estate developers face complex decisions regarding building certification adoptions, which have become influential in real estate market dynamics. This thesis investigates how developers in the competitive Boston-Cambridge area navigate the sophisticated certification landscape—focusing on LEED, ENERGY STAR, WELL, Fitwel, and WiredScore/SmartScore—to gain competitive advantages, attract and retain tenants, maximize financial performance, and align with regulatory requirements and ESG goals.&#13;
Using a mixed-methods approach, including quantitative analysis of certification overlaps and trends, along with qualitative insights from industry interviews, the study provides a comprehensive understanding of how real estate developers strategically use certifications to influence asset value while meeting tenant and investor expectations. Findings offer potentially actionable insights into how certifications shape market positioning and inform the decision-making process in real estate development.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using AI to Improve Price Transparency in Real Estate Valuation</title>
<link href="https://hdl.handle.net/1721.1/158862" rel="alternate"/>
<author>
<name>Xu, Cunjia</name>
</author>
<id>https://hdl.handle.net/1721.1/158862</id>
<updated>2025-04-08T04:14:36Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Using AI to Improve Price Transparency in Real Estate Valuation
Xu, Cunjia
This thesis explores the integration of artificial intelligence (AI) into real estate valuation, focusing on visual property attributes to enhance traditional hedonic models. By incorporating Vision Language Models (VLMs) and generative AI, the research evaluates the potential of these technologies to assess non-standard variables such as the aesthetic appeal, condition, and cohesiveness of interior and exterior property photos. The study contrasts traditional hedonic regression models, which rely on quantifiable factors such as square footage and location, with a new approach that includes AI-generated scores derived from property photos. The study employs three distinct models: the No_Rubric Model, the Composite Model, and the Verbose Model, with the hedonic model serving as the baseline for evaluating their performance. The results demonstrate that incorporating visual data significantly improves model accuracy, aligning valuations more closely with buyer preferences and sold prices. This shift addresses the industry's need for price transparency and highlights how developers can design properties that better meet market demands.
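A minimal sketch of the comparison described above, assuming synthetic data in place of the study's listings and a single VLM-derived photo score (all names and values here are hypothetical):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(1)
    n = 500
    sqft = rng.uniform(800, 3500, n)               # standard hedonic variable
    photo_score = rng.uniform(1, 10, n)            # AI-generated visual score (assumed)
    price = 150 * sqft + 20_000 * photo_score + rng.normal(0, 40_000, n)

    X_hedonic = sqft[:, None]
    X_visual = np.column_stack([sqft, photo_score])
    for name, X in [("hedonic baseline", X_hedonic), ("with photo score", X_visual)]:
        model = LinearRegression().fit(X, price)
        print(name, "R^2 =", round(r2_score(price, model.predict(X)), 3))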
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microfluidic Platform for Vascularized Tissue Models</title>
<link href="https://hdl.handle.net/1721.1/158859" rel="alternate"/>
<author>
<name>Johnson, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/158859</id>
<updated>2025-04-07T09:05:53Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Microfluidic Platform for Vascularized Tissue Models
Johnson, Matthew
This thesis presents a microfluidic platform designed to support 3D vascularized tissue models for microphysiological systems. The platform delivers pneumatic pressure and vacuum signals to drive fluid flow and pressure on tissue culture devices with integrated pumps and back-pressure regulators. The mechanical performance of the pumps and back-pressure regulators is characterized. Tissue compartments in each device contain endothelial and stromal cells suspended in a hydrogel during culture. An oxygenating reservoir stores and replenishes oxygen in circulating cell culture media. During assembly, screws are used to compress an elastomeric membrane, forming a seal and transmitting pneumatic pressure signals from the connection manifold to actuate the fluidic control elements. After a biological experiment the tissue culture devices can be disassembled, cleaned, and re-used, thus enabling cost-effective experimentation and prototyping. Each of the 4 layers of the tissue culture devices is made of thermoplastic polymers, and their design is translatable to injection molding for future production at scale. The design and manufacturing methods for the platform and individual device features are discussed. Two major biological experiments are presented to demonstrate the platform's ability to support emergent vascularization in the tissue culture device over 7 days. Microscope images show development of perfusable microvessel networks.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Engineered Skeletal Muscle Rings as Actuators Using Strain Sensing Methods</title>
<link href="https://hdl.handle.net/1721.1/158858" rel="alternate"/>
<author>
<name>Rosado, Laura M.</name>
</author>
<id>https://hdl.handle.net/1721.1/158858</id>
<updated>2025-04-07T08:26:30Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Characterizing Engineered Skeletal Muscle Rings as Actuators Using Strain Sensing Methods
Rosado, Laura M.
A novel instrument was designed to characterize a force exertion model of engineered skeletal muscle rings. The instrument uses strain gauges to transduce a muscle ring contraction and has a verified resolution of 5 μN and 1.4 μm over ranges of 5 μN and 1400 μm, respectively. Experiments were carried out with four muscle ring specimens at six different structural stiffnesses. Each ring was excited at 1 Hz for 30 seconds while force and displacement were monitored. It was determined that muscle contractile distance and force are related by a negative power function.
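As an illustration of fitting the negative power relationship reported above (a sketch with hypothetical force-displacement data; the functional form F = a·d^(-b) is assumed from the abstract's wording):

    import numpy as np
    from scipy.optimize import curve_fit

    def negative_power(d, a, b):
        return a * d ** -b                     # F = a * d^(-b), form assumed

    # Hypothetical contraction distances (um) and measured forces (uN)
    d = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
    f = np.array([4.1, 2.2, 1.5, 1.2, 1.0])

    (a, b), _ = curve_fit(negative_power, d, f, p0=(100.0, 0.5))
    print(f"F ~ {a:.1f} * d^-{b:.2f}")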
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Use of System Theoretic Process Analysis (STPA) on Novel Tiltrotor Aircraft to Prevent Mode Confusion</title>
<link href="https://hdl.handle.net/1721.1/158856" rel="alternate"/>
<author>
<name>Basnight, Natalie Ann</name>
</author>
<id>https://hdl.handle.net/1721.1/158856</id>
<updated>2025-04-08T04:26:56Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">The Use of System Theoretic Process Analysis (STPA) onNovel Tiltrotor Aircraft to Prevent Mode Confusion
Basnight, Natalie Ann
Initiatives are underway to develop tiltrotor and vertical take-off and landing (VTOL) aircraft that enhance commercial and military aviation’s autonomy, capability, and survivability. These designs integrate rotary and fixed-wing elements, introducing distinct safety considerations. These safety concerns are largely due to the differing mental models of operators trained in either rotary or fixed-wing aviation, alongside the rising reliance on autonomy. Traditional hazard analysis techniques (e.g., Fault Tree Analysis and Failure Modes, Effects, and Criticality Analysis) do not adequately account for system component interactions or human factors in complex new aircraft designs. System Theoretic Process Analysis (STPA) is a powerful hazard analysis technique for novel tiltrotor aircraft that captures their unique safety requirements. It is a top-down system hazard analysis technique that identifies loss scenarios (N. G. Leveson and J. Thomas, March 2018). It satisfies the tasks described in MIL-STD-882E (Department of Defense 2023). This research demonstrates the use of STPA to identify and mitigate potential instances of mode confusion between the operator’s mental model and the autonomy’s decision logic in the uniquely dynamic tiltrotor environment. Two previous tiltrotor aircraft accidents are analyzed using Causal Analysis based on System Theory (CAST) to help frame the importance of human and machine collaboration in systems. These accidents show a trend in the dangers of aircraft system mismanagement between various controllers. The CAST results for these accidents provide information about how to prevent these types of incidents in the future, setting the stage for the use of STPA on novel tiltrotor aircraft, as demonstrated in this thesis. STPA can be used before design, implementation, and fielding, allowing for better early design of systems and reducing the cost of later redesign or modification.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Testing of a Hovercraft with Electroaerodynamic Propulsion</title>
<link href="https://hdl.handle.net/1721.1/158851" rel="alternate"/>
<author>
<name>Quiram, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/158851</id>
<updated>2025-04-08T04:31:42Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design and Testing of a Hovercraft with Electroaerodynamic Propulsion
Quiram, Matthew
Electroaerodynamic (EAD) multistaged ducted (MSD) thrusters are a novel solid-state thruster architecture that has been shown to provide order-of-magnitude improvements in thrust density compared to single-stage EAD thrusters. This makes MSD thrusters well-suited for use in EAD hovercraft, where generating sufficient pressure is crucial for hovering. This study explored the feasibility of a hovercraft powered by wire-to-airfoil corona-discharge MSD thrusters through a scaled-down prototype and a final design. To limit the scope of the project, the hovercraft was tethered to a ground-based power supply and carried a payload mass to simulate on-board power electronics. The design of an EAD hovercraft involved applying the principles of hovercraft lift to a design optimization that implements the recently developed EAD MSD thruster model. A hovercraft prototype was designed and constructed to validate the models applied during the design phase and to test hovering capabilities without a payload. Using the manufacturing lessons and insights gathered in prototype testing, a full-scale model was designed and built to hover while carrying an additional payload capacity representative of a set of power electronics.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Business and Redevelopment Outline for the Re-Use of a Prime Site in South Boston</title>
<link href="https://hdl.handle.net/1721.1/158849" rel="alternate"/>
<author>
<name>Proman, Zachary D.</name>
</author>
<id>https://hdl.handle.net/1721.1/158849</id>
<updated>2025-06-09T15:22:12Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">A Business and Redevelopment Outline for the Re-Use of a Prime Site in South Boston
Proman, Zachary D.

</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Exploration of a Miniaturized Stirling Engine</title>
<link href="https://hdl.handle.net/1721.1/158848" rel="alternate"/>
<author>
<name>Hee, Ryann</name>
</author>
<id>https://hdl.handle.net/1721.1/158848</id>
<updated>2025-04-07T09:04:59Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Design Exploration of a Miniaturized Stirling Engine
Hee, Ryann
Increased interest in long-term space exploration has increased demand for small yet powerful energy sources, especially for remote and harsh environments where traditional power sources may be impractical. In such scenarios, space probes and high-reliability systems necessitate innovative solutions to meet their growing power and thermal management requirements while maintaining small form factors. Presently, micro power systems fall short of the desired efficiencies for these applications, typically hovering around 2% [1]. Stirling engines, with their proven capability to attain high thermodynamic efficiency (30-40%), offer a promising solution if this efficiency can be maintained in a miniaturized form [2]. This study delves into the design space of a miniaturized Stirling engine with a target input of 2 W of thermal power, which could be tailored for small-scale (mesoscale, ~cm³) high-efficiency power generation or micro-cooling. Previous research has laid the groundwork for understanding the thermodynamics of miniaturized Stirling engines, exposing substantial challenges, including overwhelming parasitic losses at this scale. The current study endeavors to mitigate these losses and explore the path to optimal efficiencies through Simulink modeling. Simulations have demonstrated design spaces capable of producing mechanical efficiencies as high as 14% with a 2 W thermal input, marking significant progress in addressing the limitations of current micro power systems. This approach has significant implications for enabling the power generation required for small space probes, particularly those on long-duration missions that need self-sustaining power over extended periods [3], [4]. As the study advances, it holds the promise of developing a physical prototype using the findings from the design space study, helping push the field forward for future power generation and micro-cooling in small-scale space technology. This thesis aims to map the design space of a miniaturized Stirling engine, focusing on mitigating parasitic losses to achieve markedly greater efficiency compared to existing technologies.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Missing Megawatts Problem: Improving Modelling Practices to Prepare for an Uncertain Future</title>
<link href="https://hdl.handle.net/1721.1/158844" rel="alternate"/>
<author>
<name>Bhatt, Nirmal K.</name>
</author>
<id>https://hdl.handle.net/1721.1/158844</id>
<updated>2025-04-07T08:49:38Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">The Missing Megawatts Problem: Improving Modelling Practices to Prepare for an Uncertain Future
Bhatt, Nirmal K.
Long-term energy system planning is one of the most pressing challenges for the power sector, which must maintain reliability while decarbonizing. Currently, no unified regulatory, modelling, or market framework exists in the United States to facilitate planning in pursuit of a clean and reliable grid. Variable renewable energy (VRE) generation can produce cheap power, but it increases the grid's exposure to interannual variability in demand and VRE generation. This raises questions about how grid planners will value VRE and clean firm power (such as nuclear power). This thesis evaluates the importance of considering interannual variability and clean firm power in long-term energy system planning. I use GenX, an open-source capacity expansion model, to model the U.S. New England region in 2050, assuming a high degree of electrification and various technology availability and emissions reduction pathways. I find that clean firm power will reduce the cost of decarbonizing the New England grid, but that grid planners must consider decades of weather and demand data if they are to make appropriate investments. I also present a novel outputs-based timeseries clustering method which allows models like GenX to optimize grids using longer timeseries of weather and demand data. Based on my work, I recommend that policymakers, grid operators, and market designers establish rigorous standards for energy modelling in long-term planning that include multiple scenarios and appropriately value technologies such as firm power.
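A hedged sketch of what an outputs-based clustering step might look like (the feature choice, cluster count, and data here are illustrative assumptions, not the thesis method):

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical per-week output metrics from trial capacity-expansion runs,
    # e.g. [system cost, unserved energy, VRE curtailment] for 52 weeks x 20 years.
    rng = np.random.default_rng(2)
    outputs = rng.standard_normal((1040, 3))

    # Cluster weeks by model *outputs* rather than input weather, then keep one
    # representative week per cluster to compress the timeseries for GenX.
    km = KMeans(n_clusters=12, n_init=10, random_state=0).fit(outputs)
    reps = [int(np.argmin(np.linalg.norm(outputs - c, axis=1))) for c in km.cluster_centers_]
    print("representative weeks:", sorted(reps))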
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>If These Hills Could Speak</title>
<link href="https://hdl.handle.net/1721.1/158841" rel="alternate"/>
<author>
<name>Bayowa, Tejumola</name>
</author>
<id>https://hdl.handle.net/1721.1/158841</id>
<updated>2025-04-08T04:18:01Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">If These Hills Could Speak
Bayowa, Tejumola
If these hills could speak, what would they reveal, and how would they express it? This central question guides this thesis, which examines three hills in the heart of Ibadan, Southwest Nigeria— each occupied by the ruins of colonial monuments. Before the construction of these structures, the hills served as sanctuaries, providing water, food, and safety. However, under British colonial rule, architecture was utilized to disrupt this harmonious relationship. Over the course of 50 years, three monuments were erected that mark Britain’s colonial imprint on the city: a neoclassical courthouse (1925), built to assert control over the central market; a 60-foot tower (1936), which displaced the surrounding forests; and a theater (1977), built during a time of national struggle for unity and identity. Today, at the foot of these hills, a community has forged a way of life within a broken system. By repurposing and subverting structures in ways their creators never intended, this community embodies a praxis and poiesis of adaptive creativity within the built environment. This process represents a transformative act of pidginization—a collective tactic for repair, resistance, and reappropriation in response to an ongoing, imposed socio-political order. For these hills to speak again, the ruins must be transformed. This thesis begins that process by applying acts of pidginization learned from below to the three ruins. It proposes their conversion through deconstruction and de-monumentalization, with the aim of fostering economic development, ecological restoration, and cultural production in the city.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Hing Travel Agency: Fictional Archive of Disappearing Hong Kong</title>
<link href="https://hdl.handle.net/1721.1/158840" rel="alternate"/>
<author>
<name>Wu, Ina</name>
</author>
<id>https://hdl.handle.net/1721.1/158840</id>
<updated>2025-04-07T09:27:37Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">On Hing Travel Agency Fictional Archive of Disappearing Hong Kong
Wu, Ina
Hong Kong, shaped by rapid transformation and precarious land ownership, is a city where erasure defines its urban landscape. Amid this flux, a place I once called home was demolished, prompting the question: “How can one return to a place that no longer exists?” This thesis explores the transformative potential of disappearance, reframing it as a generative force that creates space for imagination, resistance, and continuity. Through On Hing Travel Agency (OHTA), demolished buildings "travel" into fictional worlds, becoming vessels of memory and imagination. Rooted in Hong Kong’s literary tradition—where fiction resists erasure and archives aspirations—the project employs fiction as both a tool of preservation and a site for belonging. Fictional destinations, inspired by Hong Kong novels such as The Permanent City (1959), The Floating City (1986), and The Vanished Cities (2010), reflect pivotal historical moments while offering pathways to reconcile personal loss and master alternative spatial logics. The project culminates in the Lost Traveler’s Guide to Hong Kong, a publication curating maps, brochures, and layered narratives to immerse travelers in speculative thinking. By bridging the past and future, the real and the imagined, OHTA demonstrates how fiction can reclaim agency within the politics of disappearance, transforming loss into a catalyst for new narratives and creative engagement. Even in absence, Hong Kong’s disappearing spaces retain their resonance, generating new narratives and underscoring the creative potential of loss.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sweating Details: Labor of “Los Constructores del Valle”</title>
<link href="https://hdl.handle.net/1721.1/158839" rel="alternate"/>
<author>
<name>Andrade, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/158839</id>
<updated>2025-04-07T08:43:24Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Sweating Details: Labor of “Los Constructores del Valle”
Andrade, Gabriel
“You should always be grateful for the work you can find, so make sure you prove you deserve it.”- Commonly heard growing up amongst the Builders of the Valley in Orange, NJ. The necessary attitude that fuels the built environment.&#13;
&#13;
This thesis proposes a dialogical method of tectonics through exploring the embodied experiences of those who physically build the city and its architecture, positioning architectural design as fundamentally tied to the labor that makes buildings possible. It centers on two primary questions: “Who builds this architecture?” and “How does this design impact a builder’s occupational livelihood?”&#13;
&#13;
To challenge professional standards that perpetuate a disconnection between designers and builders, this thesis reconnects me, as a designer, with my educators from Orange, NJ. These individuals—professional construction workers—shaped my earliest understanding of the built environment and how to navigate it socially and professionally. Through this process, I learn more about who they are, how they entered construction, and how the work has affected them over the years.&#13;
&#13;
This education, sustained through ongoing dialogue, points toward future opportunities to work together, focusing on designing better for the act of building by prioritizing the physical, mental, and financial longevity of my Educators. The culmination of this research and communication is materialized through four architectural details within a workspace, designed to showcase my Educators’ expertise and affinities as professionals. These details reimagine occupational choreography, opening up future workflows that think through both lessening and healing the musculoskeletal disorders that many builders face after years of laboring across the tristate area.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Instrument for the Measurement of Soft Material Nonlinear Mechanical Response</title>
<link href="https://hdl.handle.net/1721.1/158836" rel="alternate"/>
<author>
<name>Unikewicz, Brendan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/158836</id>
<updated>2025-04-08T04:52:50Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">An Instrument for the Measurement of Soft Material Nonlinear Mechanical Response
Unikewicz, Brendan M.
Soft material research has seen significant growth in recent years, with emerging applications in robotics, electronics, and healthcare diagnostics where understanding material mechanical response is crucial for precision design. Traditional methods for measuring nonlinear mechanical properties of soft materials require specially sized samples that are extracted from their natural environment to be mounted on the testing instrument. This has been shown to compromise data accuracy and precision in various soft and biological materials. To overcome this, the Volume Controlled Cavity Expansion (VCCE) method was developed. This technique tests soft materials by controlling the formation rate of a liquid cavity inside the materials at the tip of an injection needle, and simultaneously measuring the resisting pressure which describes the material response. Despite VCCE’s early successes, expansion of its application beyond academia has been hindered by cost, size, and expertise. In response to this, the first portable, bench-top instrument utilizing VCCE is presented here. This device, built with affordable, readily available components and open-source software, streamlines VCCE experimentation without sacrificing performance or precision. It is especially suitable for space-limited settings and designed for use by non-experts, promoting widespread adoption. The instrument’s efficacy was demonstrated through testing Polydimethylsiloxane (PDMS) samples of varying stiffness. This study not only validates instrument performance, but also sets the stage for further advancements and broader applications in soft material testing. All data, along with acquisition, control, and post-processing scripts, are made available on GitHub.
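One common way to interpret VCCE pressure-volume data, sketched below under the assumption of an incompressible neo-Hookean material (a standard cavitation relation from the literature, not necessarily the exact model used in this thesis; the data are hypothetical):

    import numpy as np
    from scipy.optimize import curve_fit

    def nh_pressure(lam, E):
        # Incompressible neo-Hookean cavity-expansion pressure (assumed model):
        # p(lam) = (E/6) * (5 - 4/lam - 1/lam^4), with lam the cavity stretch.
        return (E / 6.0) * (5.0 - 4.0 / lam - lam ** -4)

    # Hypothetical VCCE data: injected volume -> cavity stretch, pressure in kPa
    V0, V = 1.0, np.linspace(1.5, 8.0, 10)
    lam = (V / V0) ** (1.0 / 3.0)
    p = nh_pressure(lam, 30.0) + np.random.default_rng(3).normal(0.0, 0.5, lam.size)

    (E_fit,), _ = curve_fit(nh_pressure, lam, p, p0=(10.0,))
    print(f"estimated stiffness E ~ {E_fit:.1f} kPa")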
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Dynamics of Diversity, Equity, and Inclusion Practice Adoption</title>
<link href="https://hdl.handle.net/1721.1/158834" rel="alternate"/>
<author>
<name>Yadama, Aishwarya Pandey</name>
</author>
<id>https://hdl.handle.net/1721.1/158834</id>
<updated>2025-04-07T09:12:35Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">The Dynamics of Diversity, Equity, and Inclusion Practice Adoption
Yadama, Aishwarya Pandey
Despite the widespread adoption of Diversity, Equity, and Inclusion (DEI) initiatives in corporate America, significant disparities persist in the representation, compensation, and treatment of women and racial minorities. This paper investigates why well-intentioned DEI efforts often fail to achieve their intended outcomes and identifies managerial barriers to progress. This research employs a qualitative dynamic modeling approach to analyze the complexities of DEI practice implementation within organizations. I conducted a scoping review, focusing on longitudinal and experimental designs to identify key mechanisms influencing the outcomes of DEI practices. The interplay between organizational processes and individual cognitive and behavioral responses can be illustrated via reinforcing and balancing feedback loops that I map onto a causal loop diagram, which reveals how DEI initiatives interact with existing organizational processes and cultural dynamics. This paper introduces a dynamic perspective on DEI practice implementation, highlighting the feedback mechanisms that can either hinder or facilitate progress toward diversity goals. The model reveals that certain DEI practices may inadvertently trigger reinforcing loops that perpetuate inequality. By mapping DEI practices and their effects, this study provides a framework for understanding how DEI outcomes can diverge significantly depending on different implementation strategies. It underscores the importance of considering the endogenous feedback effects of DEI initiatives and offers insights into strategic interventions that can disrupt undesirable reinforcing cycles and promote progress toward organizational diversity, equity, and inclusion.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Precisely Loose: Unraveling the Potential of Particles</title>
<link href="https://hdl.handle.net/1721.1/158833" rel="alternate"/>
<author>
<name>Yoon, Jeonghyun</name>
</author>
<id>https://hdl.handle.net/1721.1/158833</id>
<updated>2025-04-07T08:25:28Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Precisely Loose: Unraveling the Potential of Particles
Yoon, Jeonghyun
Random, irregular, erratic, arbitrary, unspecifiable, and unpredictable—particles. In a post-extractive future, our reliance on standardized materials, continuously sourced through the exploitation of raw resources, will no longer be sustainable. Instead, architecture will increasingly contend with materials that defy standardization. This thesis focuses on these non-normative materials—particles, encompassing construction demolition debris, manufacturing defects, naturally occurring gravels, and locally sourced mineral waste. Ubiquitous yet underutilized, these materials hold potential not only for use, but also for reuse. However, they are often dismissed as rigid and unpredictable ingredients that require precise manipulation and cumbersome processing in order to achieve predictable results. What kind of architecture could emerge if we embraced the inherent nature of these particles, not as rigid materials to be controlled, but as dynamic, fluid entities? By embracing their uncertainty as a generative design agent, how would design approaches and construction processes transform? This thesis presents a catalogue of precisely loose methods for engaging with particles. These methods offer an alternative design approach that moves beyond the obsession with refinement and control over material behavior. By pouring, pushing, reconfiguring, and containing—in lieu of identifying, cutting, placing, and stacking—this series of interactions explores the potential of plurality, investigating how loosely controlled particles can adapt to collaborative construction processes. In doing so, this thesis redefines architectural material culture rooted in rubble, offering a framework to reimagine our relationship with the irregular, the unpredictable, and the overlooked.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Agent Hybrid Prediction in Autonomous Driving</title>
<link href="https://hdl.handle.net/1721.1/158832" rel="alternate"/>
<author>
<name>Yau, Tiffany Yee Kay</name>
</author>
<id>https://hdl.handle.net/1721.1/158832</id>
<updated>2025-04-07T09:20:29Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Multi-Agent Hybrid Prediction in Autonomous Driving
Yau, Tiffany Yee Kay
In autonomous driving, the hybrid task of predicting both high-level actions and low-level trajectories of human behaviour is fundamental to safe downstream decision-making. Much of the existing work in behaviour prediction tackles this problem without sufficiently modelling agent-agent interactions, limiting its ability to capture the full range of possible joint outcomes. Another key challenge in multi-agent prediction is the prediction space, which grows exponentially with the number of agents and the duration of the prediction horizon, making scalability difficult. This thesis presents two approaches to address these challenges in multi-agent hybrid prediction. In our first approach, we model interactions and address scalability by learning to factor the joint prediction distribution. We observe that agents do not interact with all other agents in the scene, but rather, there are groups that strongly interact. Therefore, we group agents and represent the high-level interaction outcomes of groups with discrete variables. We additionally assume that inter-group interactions are sparse and can be sufficiently represented with a directed acyclic graph. These assumptions enable us to factor the distribution into a product of factors, effectively reducing the prediction space and providing an order in which to easily sample discrete values. We evaluate the performance of this method on a large-scale autonomous driving dataset and show that it exceeds prior methods in coverage of possible interaction outcomes by 24% to 48% on various multi-agent validation data splits, while maintaining state-of-the-art prediction error. Our second approach represents agents in a traffic scene as a set of concurrent hybrid models and assumes a collision avoidance model of interactions, rather than learning the model from data like the first approach. Our method begins enumeration based on a simpler collision-agnostic prior distribution. Based on our factored representation, we determine the next best assignment to the prior. We extract bounding conflicts to correct the prior and increasingly reduce the error between the distribution used by enumeration and our collision-aware posterior distribution. Our experiments show that enumeration using A* with bounding conflicts (A*BC) is faster than A* and is therefore better at addressing scalability. In terms of prediction metrics, we find that our collision-aware posterior performs worse than the collision-agnostic prior and suggest future directions for improvement.
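As a toy illustration of the factoring idea (the groups, outcomes, and probabilities below are invented, not the thesis model): group-level discrete outcomes are sampled in topological order of the DAG, and the joint probability is the product of the factors.

    import random
    # groups A and B are roots; group C's outcome depends on A and B (edges A-to-C, B-to-C)
    p_A = {"yield": 0.7, "go": 0.3}
    p_B = {"yield": 0.6, "go": 0.4}
    def p_C(a, b):
        # conditional factor with toy values
        return {"merge": 0.9, "wait": 0.1} if (a, b) == ("go", "yield") else {"merge": 0.2, "wait": 0.8}
    def sample(dist):
        return random.choices(list(dist), weights=list(dist.values()))[0]
    a, b = sample(p_A), sample(p_B)      # sample roots first
    c = sample(p_C(a, b))                # then the dependent group
    joint_probability = p_A[a] * p_B[b] * p_C(a, b)[c]   # product of factors
    print(a, b, c, joint_probability)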
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Environmental Regulation on Data Center Valuation</title>
<link href="https://hdl.handle.net/1721.1/158830" rel="alternate"/>
<author>
<name>Lee, Donghyun</name>
</author>
<id>https://hdl.handle.net/1721.1/158830</id>
<updated>2025-04-07T08:26:29Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Impact of Environmental Regulation on Data Center Valuation
Lee, Donghyun
Artificial intelligence has become one of the defining trends of modern society, with applications spanning virtually every industry. This societal shift has also influenced the real estate landscape. While data centers have existed for decades, it is only in recent years that they have garnered significant attention, demonstrated by their strong rent growth and compressed cap rates. Along with the attention over data centers, there has also been extensive research on how data centers impact the environment, such as "Quantifying the Sustainability Impact of Data Center Availability" by Manish Marwah et al., which presents how data center power architecture may impact the environment, and "The Environmental Footprint of Data Centers in the United States" by Md Abu Bakar Siddik, Arman Shehabi, and Landon Marston. These studies quantify the environmental impacts of data centers, specifically focusing on carbon and water footprints. However, what remains unexplored is how environmental regulations influence the valuation of data centers as a distinct real estate property type. This thesis examines how data center valuations could be impacted if existing environmental regulations were applied to regions where data centers are concentrated. The findings reveal a complex dynamic: while penalties under these regulations would reduce net operating income (NOI), potentially devaluing these assets, the same regulations would discourage new development, exacerbate the already constrained supply, and ultimately drive up market rents for these properties. As a result, these opposing forces create ambiguity regarding the net impact of such regulations on data center valuations, with the outcome depending on which force prevails. What is clear, however, is that tenants would bear the brunt of these regulations, as landlords are likely to pass on increased costs through higher rents. On the other hand, while managing the environmental impacts of data centers and AI applications is critical to achieving sustainability goals, the societal benefits of AI solutions—ranging from advancements in healthcare to increased operational efficiencies—must also be considered. Balancing these competing priorities presents a unique challenge for policymakers and investors, with significant implications for the future of real estate and the digital economy.
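The two opposing forces can be made concrete with a direct-capitalization sketch; every number below is invented for illustration, not from the thesis:

    noi = 10_000_000          # annual net operating income, USD (assumed)
    cap_rate = 0.055          # assumed capitalization rate
    base_value = noi / cap_rate
    penalty = 800_000         # assumed annual regulatory penalty
    rent_uplift = 0.06        # assumed rent growth from constrained supply
    new_value = (noi * (1 + rent_uplift) - penalty) / cap_rate
    print(round(base_value), round(new_value))   # whichever force dominates decides the sign

With these particular numbers the penalty outweighs the rent uplift; flip the assumptions and the valuation rises instead, which is exactly the ambiguity the thesis describes.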
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Examining the Economic Impact of Anti-Warehouse Development Policies in California: A Case Study of the San Diego Market</title>
<link href="https://hdl.handle.net/1721.1/158829" rel="alternate"/>
<author>
<name>Ghasemlou, Peggy</name>
</author>
<id>https://hdl.handle.net/1721.1/158829</id>
<updated>2025-04-07T08:58:10Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Examining the Economic Impact of Anti-Warehouse Development Policies in California: A Case Study of the San Diego Market
Ghasemlou, Peggy
This thesis conducts a detailed examination of the implications of anti-warehouse development policies in San Diego, focusing on their impacts on key economic indicators from 2024 to 2034. The research provides an overview of the U.S. industrial market, addressing crucial topics such as logistics market size, job creation, and the growth of e-commerce. It also explores the NIMBY phenomenon and its influence on community opposition to developments, including a discussion of Bill 98 and its legislative implications. A specific focus on the industrial market in Southern California reveals important insights into job growth, rental rates, and market dynamics in San Diego. Through a comprehensive analytical approach, the study addresses the effects of development policies by presenting ten distinct scenarios that project delivery volumes, uncovering potential reductions ranging from 10% to 90% compared to a baseline scenario without restrictions. The analysis anticipates vacancy rates and job losses across various years, utilizing the LINEST function to forecast key market indicators, including asking rents and asset valuations. Additionally, the research highlights the critical importance of logistics categories and decarbonization strategies to meet net-zero goals, as well as contemporary warehouse design trends and transportation innovations. The conclusions emphasize the complexities of balancing community interests with economic growth and sustainability in the region, and the broader economic implications of restrictive development policies, which could adversely affect the economic vitality of San Diego's warehouse sector.
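Excel's LINEST fits an ordinary least squares line; a minimal numpy equivalent of that style of forecast, on invented data rather than the thesis dataset, is:

    import numpy as np
    years = np.array([2019.0, 2020.0, 2021.0, 2022.0, 2023.0])
    rents = np.array([1.10, 1.18, 1.31, 1.42, 1.50])   # asking rent, $/sf/month (invented)
    slope, intercept = np.polyfit(years, rents, 1)      # OLS fit, like LINEST
    print(round(slope * 2030 + intercept, 2))           # extrapolated asking rent in 2030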
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Markers</title>
<link href="https://hdl.handle.net/1721.1/158828" rel="alternate"/>
<author>
<name>Ortiz, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/158828</id>
<updated>2025-04-08T04:44:21Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Dynamic Markers
Ortiz, Evan
When I was a child, I was certain that all clouds came from New Jersey. After passing through the Lincoln Tunnel, I-95 would gradually ascend, lifting our car to eye level with the billowing clouds emerging from beneath us. These clouds rose from the Meadowlands, a great marsh just two miles west of Manhattan, a landscape that has become defined by the infrastructure that occupies it. Nearly equal in land mass and opportunity to Manhattan, this landscape managed to resist holistic transformation due to our inability to control its water. Rather than becoming a prosperous site for agriculture in the 19th century, or the next metropolis in the early 20th, the Meadowlands fell out of focus and became a site to absorb the infrastructural networks needed to uphold rapid development at its edges.&#13;
&#13;
The Meadowlands was sutured shut by the networks interlaced through it in an attempt to erase the failures of the past. Utilizing this landscape as an urban sponge neglected that the marsh hosted a series of ecological infrastructures of its own. The Meadowlands' soft, uncertain ground once managed variations in the water level, but the draining of the ground that came with development reduced its capacity, making pump stations essential for managing water in inhabited areas. Unlike the other forms of infrastructure in the Meadowlands, the presence of the pump station is subdued; its invisibility upholds the illusion that the developments within this landscape are not threatened by their surroundings. However, steady sea level rise and an increase in storm surges have caused these pumps to fail, lifting the veil on their existence and, more importantly, on the essential role they play in our continued occupation of this landscape. The urgent need to increase the capacity of the pump stations provides an opportunity to reconsider their agenda.&#13;
&#13;
This thesis proposes the Dynamic Marker, a new type of infrastructure that redefines the relationship between human systems and ecological flows. Grafted onto existing pump stations in the Meadowlands, it releases water as mist from 800 feet in the air, transforming the hidden mechanics of water management into a moment of wonder. The Dynamic Marker fosters microclimates and ecological connections, transforming infrastructure into a dynamic process that evolves with its surroundings. Over time, it becomes both a memorial to the marsh and a provocation for the future, inviting a rethinking of infrastructure as a participatory and adaptive force that responds to its surrounding ecology.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Green Herrings in a Yellow Room: A Counter Production of The Yellow Wallpaper</title>
<link href="https://hdl.handle.net/1721.1/158824" rel="alternate"/>
<author>
<name>Aulgur, Leanah Sloan</name>
</author>
<id>https://hdl.handle.net/1721.1/158824</id>
<updated>2025-04-07T09:04:12Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Green Herrings in a Yellow Room: A Counter Production of The Yellow Wallpaper
Aulgur, Leanah Sloan
Charlotte Perkins Gilman’s The Yellow Wallpaper is a designer’s work of critical fabulation. Published in 1892, the short story follows an unnamed woman prescribed a “rest cure” by her husband, John. Confined to a room wrapped in gothic yellow wallpaper, the narrator becomes obsessed with its patterns. As her mind deteriorates, she sees a woman trapped behind the paper. This production reimagines Charlotte’s bedroom as not yellow, but green—a rich, vibrant green laced with the medium responsible for its provocative coloration: arsenic. The toxic pigment, invented in the late 18th century, induces bodily ailments, mental instability, and even death when used in textiles. Interiors threatened tenants with toxins as this green spread through 19th-century Europe before reaching New England and our narrator. Though known as an author and suffragette, Charlotte was first a designer. As a student in the inaugural class of the Rhode Island School of Design, she studied the arts just miles from the ports where the green pigment began its early residence. Her writing draws from arsenic publications, her scenes mimic medical case studies, and archives suggest she was aware of these toxic walls. This theatrical table reading positions the authoring of The Yellow Wallpaper within the simultaneous stories of the arsenic wallpaper. Why does the author mimic material traces of the green while redirecting her readers to the yellow? When does the color transition from literal to abstract? This work recontextualizes the foundational feminist text by unfabulating the story through design—questioning Charlotte’s literary misdirections and the public discourse surrounding the toxic color.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Marketplace Multiculturalism</title>
<link href="https://hdl.handle.net/1721.1/158823" rel="alternate"/>
<author>
<name>Chowdhary, Harris</name>
</author>
<id>https://hdl.handle.net/1721.1/158823</id>
<updated>2025-04-07T09:21:27Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Marketplace Multiculturalism
Chowdhary, Harris
Picture Texas. No longer simply cowboys, footballs, and firearms, this land today is sustained by a daily choreography of cross-border commerce, managed by entertainment media turned handheld surveillance, and peppered with enclaves of immigrants from the world over. A contact zone where logistical and legislative apparati warp to serve consumer comfort, Texas today is the world tomorrow: forget the Alamo, it’s highways, tax-incentives, and backyard barbecue on the 21st century frontier. This thesis responds to a call for roadside service stations along a planned international tourist corridor in the Texas-Mexico borderlands with six interventions: a panoramic viewing tower disguised as a billboard, a sunken stadium for athletic agonism, a photovoltaic drive-in charging cinema, an international culinary incubator, a showroom for automated fulfilment, and a customs and border patrol welcome center. These structures are testing grounds for modes of relation and value exchange that edge beyond the outdated positivisms of globalization. They ask how architecture might produce new possibilities and publics by working within and taking advantage of contemporary systems of control. As tourist destinations, the stops suggest the nation’s true mythos lies not in static symbols but in choreographies of transaction and contact. Articulating in built form the dynamic processes that define a territory of sprawl, this proposal suggests that Texas’s most authentic monuments are the stops we make along the way.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improved Complexity Analysis for the Proximal Bundle Algorithm Under a Novel Perspective</title>
<link href="https://hdl.handle.net/1721.1/158820" rel="alternate"/>
<author>
<name>Fersztand, David</name>
</author>
<id>https://hdl.handle.net/1721.1/158820</id>
<updated>2025-04-07T08:35:40Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Improved Complexity Analysis for the Proximal Bundle Algorithm Under a Novel Perspective
Fersztand, David
The proximal bundle algorithm (PBA) is a fundamental and computationally effective algorithm for solving optimization problems with non-smooth components. We investigate its convergence rate in two settings. We first focus on a composite setting where one function is smooth and the other is piecewise linear. We interpret a sequence of null steps of the PBA as a Frank-Wolfe algorithm on the Moreau envelope of the dual problem. In light of this correspondence, we first extend the linear convergence of Kelley's method on convex piecewise linear functions from the positively homogeneous to the general case. Building on this result, we propose a novel complexity analysis of PBA and derive an O(epsilon^(-4/5)) iteration complexity, improving upon the best known O(epsilon^(-2)) guarantee. This approach also unveils new insights on bundle management. We then present the first variant of the PBA for smooth objectives, achieving an accelerated convergence rate of O(epsilon^(-1/2) log(epsilon^(-1))), where epsilon is the desired accuracy. Our approach addresses an open question regarding the convergence guarantee of the PBA, which was previously posed in two recent papers. We interpret the PBA as a proximal point algorithm and base our proposed algorithm on an accelerated inexact proximal point scheme. Our variant introduces a novel null step test and oracle while maintaining the core structure of the original algorithm. The newly proposed oracle substitutes the traditional cutting planes with a smooth lower approximation of the true function. We show that this smooth interpolating lower model can be computed as a convex quadratic program. We finally show that Nesterov acceleration can be effectively applied when the objective is the sum of a smooth function and a piecewise linear one.
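For reference, the Moreau envelope at the center of this correspondence is the standard smoothing, for a positive parameter lambda:

    M_{\lambda} f(x) = \min_{y} \left\{ f(y) + \frac{1}{2\lambda} \lVert y - x \rVert^{2} \right\}

Its minimizer is the proximal point of x, and the analysis above reads a run of null steps as Frank-Wolfe steps on this envelope applied to the dual problem.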
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Introducing Technical Design Elements in Makerspace&#13;
Trainings</title>
<link href="https://hdl.handle.net/1721.1/158817" rel="alternate"/>
<author>
<name>Barakat, Layal A.</name>
</author>
<id>https://hdl.handle.net/1721.1/158817</id>
<updated>2025-04-07T08:40:35Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Impact of Introducing Technical Design Elements in Makerspace&#13;
Trainings
Barakat, Layal A.
Makerspaces are used as a tool in higher education to support curricular, hands-on projects and encourage student extracurricular and personal projects. Because access to making is more self-driven, there is a gap between what makerspace trainings teach students and what students are expected to know by the time they reach capstone courses in engineering. To test the effects of introducing a technical makerspace training to students, several steps were taken. First, known barriers to making were explored and organized into categories. Second, Design Expertise was defined as a means to combat these barriers: it is a combination of (1) knowledge, (2) skill, (3) perspective, and (4) motivation. Third, a rigorous framework, the Design-Fabrication-Performance (DFP) matrix, was created to break down design expertise into manageable chunks. Next, existing makerspace trainings at MIT were characterized using the DFP matrix. Afterwards, the DFP matrix was used to design a new, experimental training which would incorporate engineering design thinking and expertise with the typical makerspace machine training structure. Finally, 23 student participants were recruited, surveyed using a Likert scale (1 = strongly disagree, 5 = strongly agree), and interviewed to understand the impact of the training on participant perspectives, engineering identity, and maker motivation. Initial results suggest that student self-efficacy increases as a result of the training. This outcome is shown by the highest average differential of all survey responses (M = 0.78, SD = 0.85) for question 15: “I am confident in my ability to use GIR level knowledge to design and make things that perform as intended”. The maker training reinforced the motivation to make things for a majority of students, with the average score for the associated question being 4.48 (SD = 0.85). The training also positively impacted some traditionally marginalized groups in STEM. For the statement "I feel comfortable in engineering at MIT", women averaged 3.27 and men 3.90 before the training. The average differentials in the post- and pre-training scores to this question for these groups were 0.4 and 0.91 respectively. The training also appears to level the playing field for students with less advanced backgrounds in engineering and science. For the question “I am confident in my ability to solve GIR level problems on my own”, students with parents with graduate degrees or higher averaged 4.44 before the training, while those with parents with undergraduate degrees or lower averaged 3.57. The average differentials are 0.22 and 0.64 respectively. Although students saw the value in modeling systems before design and fabrication, several questions demonstrated that students found modeling to be tedious and preferred to test and iterate on their designs in the makerspace; further work is needed to eliminate barriers to sustain student interest and participation in the long term. A longitudinal study following these students would also be needed to reveal long term outcomes such as STEM retention and long-term makerspace usage.
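The pre/post differential statistics quoted above are of the following form; the responses here are synthetic, not the study's data:

    import statistics
    pre = [3, 4, 3, 5, 2, 4]     # synthetic pre-training Likert responses
    post = [4, 4, 4, 5, 3, 5]    # synthetic post-training responses
    diffs = [b - a for a, b in zip(pre, post)]
    # mean differential M and its standard deviation SD
    print(round(statistics.mean(diffs), 2), round(statistics.stdev(diffs), 2))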
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fabrication and Characterization of Horizontally Aligned&#13;
Carbon Nanotube Thermoplastic Bulk Nanocomposite&#13;
Laminates</title>
<link href="https://hdl.handle.net/1721.1/158815" rel="alternate"/>
<author>
<name>Lin, Yuying</name>
</author>
<id>https://hdl.handle.net/1721.1/158815</id>
<updated>2025-04-07T09:14:21Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Fabrication and Characterization of Horizontally Aligned&#13;
Carbon Nanotube Thermoplastic Bulk Nanocomposite&#13;
Laminates
Lin, Yuying
Carbon nanotubes (CNTs) have advantageous mass-specific mechanical properties and excellent thermal and electrical conductivity, making them an attractive reinforcement for composite systems. Due to an increasing need for more sustainable materials, incorporation of CNTs into thermoplastic matrices presents a promising solution for recyclable and repairable polymer nanocomposites (PNCs). This thesis presents an approach to fabricating and characterizing thermoplastic PNCs that incorporate ultra-high volume fractions of horizontally-aligned carbon nanotubes (HA-CNTs). An MIT-developed bulk nanocomposite laminating (BNL) process was adapted to fabricate multi-ply, unidirectional composites with poly(methyl methacrylate) (PMMA) and acrylonitrile butadiene styrene (ABS) matrices. For the HA-CNT/PMMA system, the BNL process was tailored to fabricate 4-ply and 8-ply laminates with fiber volume fraction v_f &gt; 45 vol.%, using a 9 wt.% PMMA in anisole solution. Through characterization via X-ray microcomputed tomography (µCT), scanning electron microscopy (SEM), thermogravimetric analysis (TGA), Fourier transform infrared (FTIR) spectroscopy, and polarized Raman spectroscopy, HA-CNT/PMMA laminates were shown to be free of micro-scale voids with weak or non-existent process-structure interactions, i.e., the CNTs had negligible effect on the polymer structure. TGA and FTIR helped demonstrate that the BNL process did not lead to decomposition or chemical changes to neat PMMA, and FTIR also revealed that the fabrication process did not induce covalent bonding between CNTs and PMMA. The crystalline behavior of PMMA was studied via differential scanning calorimetry (DSC) as well as X-ray diffraction (XRD), which demonstrated that BNL processing temporarily lowers neat PMMA glass transition temperature T_g by 4 ◦C with no permanent change after removal of thermal history. However, CNT inclusion leads to higher laminate T_g by 11 ◦C as shown through both DSC and dynamic mechanical analysis (DMA), which can be explained by CNT constraints on polymer chain movement as opposed to any crystallinity changes in the PMMA. Storage modulus of 8-ply HA-CNT/PMMA laminates was shown to be more than 600% of neat PMMA via DMA, while a decrease in tan(δ) of the laminate compared to neat PMMA indicates an increase in elastic behavior due to CNT inclusion. 4-ply laminates were subjected to a minimum radius of curvature test showing a ∼ 50% increase in yield strain compared to neat PMMA. Electrical properties of 4-ply HA-CNT/PMMA laminates were measured via 4-point probe testing, which demonstrated good Ohmic contact between CNTs, with conductivity of ∼ 2 × 10⁴ S m⁻¹ and anisotropy ratio of 1.2. A preliminary investigation was completed to evaluate the feasibility of using the BNL process for the HA-CNT/ABS system. Uniform suspensions of ABS in anisole were developed to use the BNL polymer infiltration method of spin-coating and vacuum-assisted infusion. It was shown that the nature of the ABS suspension led to uneven polymer distribution over the HA-CNTs. This work has demonstrated the successful incorporation of high volume fractions of aligned CNTs into PMMA thermoplastic matrices as well as the electrical conductivity of such composites, opening an avenue to the development of other high v_f thermoplastic PNCs and exploration into additional multifunctional capabilities.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Remembering Energic Connectivities: Appropriate Technology and Domestic Infrastructure in the Energy Crisis</title>
<link href="https://hdl.handle.net/1721.1/158811" rel="alternate"/>
<author>
<name>Adornetto, Turner Day</name>
</author>
<id>https://hdl.handle.net/1721.1/158811</id>
<updated>2025-04-07T09:09:32Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Remembering Energic Connectivities: Appropriate Technology and Domestic Infrastructure in the Energy Crisis
Adornetto, Turner Day
The electric grid is a large, complex machine. And yet, it represents but one, narrow framework for energic relations. Visions for just and sustainable futures – for social and ecological repair – should wander further afield. One place they could go is home. In this essay, the Appropriate Technology Small Grants Program, an oft-forgotten chapter of U.S. energy history, shows us how small-scale, place-based inventors transformed homes and neighborhoods into converters and conductors of nearby flows and potentials. At the height of the energy crisis of the 1970s, these inventors pursued a distributed solution to shortage. Along the way, they re-wired the material and conceptual strictures of the modern dwelling and broke into a vast reserve of low-cost, renewable power. Home, they showed, was a workshop to understand and design energic connectivities. But tracing the effects of home-based appropriate technology leads us somewhere else – to the frontiers of energy extraction, where social justice activists proved that small-scale, place-based energy systems could replace unjust mines and dams. What emerged, then, through renewed attention to the possibilities for home and energy, was a powerful counter to the logics of sacrifice at both ends of the energy continuum. Today, as we chart our own response to crisis, it helps to remember how others tried to create solidarities and resist tradeoffs with small-scale, place-based infrastructures. We can, I think, do more with energy.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mining Multifaceted Customer Opinions from Online Reviews</title>
<link href="https://hdl.handle.net/1721.1/158810" rel="alternate"/>
<author>
<name>Mao, Chengfeng</name>
</author>
<id>https://hdl.handle.net/1721.1/158810</id>
<updated>2025-04-07T08:44:59Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Mining Multifaceted Customer Opinions from Online Reviews
Mao, Chengfeng
Online reviews are a valuable source for studying customer needs and preferences. Previous studies focus on extracting a set of a priori defined constructs such as product attribute perception or explicit customer needs from reviews. Such a priori focus circumvents the limitations of certain natural language processing algorithms but discards valuable information in reviews that is not in the scope of the predefined constructs. This study proposes a new method of extracting customer opinions and opinion targets from reviews with the Aspect Sentiment Triplet Extraction (ASTE) algorithm and then identifying theoretical constructs critical for product development with an a posteriori interpretation method. We demonstrate the value of our proposed method by identifying granular opinion targets and expressions to find infrequent but important phenomena such as user innovations and delights.
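A hedged sketch of the pipeline's output format: ASTE yields (aspect, opinion, sentiment) triplets, which can then be grouped a posteriori into constructs. The triplets and the construct map below are invented examples, not the study's data:

    # invented (aspect, opinion, sentiment) triplets of the kind ASTE emits
    triplets = [
        ("battery", "lasts all week", "positive"),
        ("strap", "swapped it for a nylon band", "neutral"),   # candidate user innovation
        ("packaging", "delightful unboxing", "positive"),       # candidate delight
    ]
    # hypothetical a posteriori mapping from aspects to theoretical constructs
    construct_map = {"strap": "user_innovation", "packaging": "delight"}
    grouped = {}
    for aspect, opinion, sentiment in triplets:
        construct = construct_map.get(aspect, "attribute_perception")
        grouped.setdefault(construct, []).append((aspect, opinion, sentiment))
    print(grouped)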
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geographies of Selective Surveillance: Analyzing the Lived Experiences of Street-Level Trans Sex Workers and Muslims in India through the Matrix of Domination</title>
<link href="https://hdl.handle.net/1721.1/158809" rel="alternate"/>
<author>
<name>Radhakrishnan, Radhika</name>
</author>
<id>https://hdl.handle.net/1721.1/158809</id>
<updated>2025-04-07T09:17:21Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Geographies of Selective Surveillance: Analyzing the Lived Experiences of Street-Level Trans Sex Workers and Muslims in India through the Matrix of Domination
Radhakrishnan, Radhika
In this paper, I present a study of public and private CCTV surveillance of urban public spaces in India, which I term ‘geographies of selective surveillance’ — areas where state power is discretionarily exercised and abused, and the presence of the state is experienced principally through police pickets and everyday violence unleashed on marginal occupants, rather than by access to civic amenities and systems of justice. I analyze these experiences of surveillance from the standpoint (Harding, 1992) of minoritized communities of street-level trans sex workers in Kolkata and Muslims in Mumbai. I then situate these experiences within the Matrix of Domination (Collins, 1990), a theoretical framework that explains how systems of power are configured. Defining empowerment as the power to gain control of and/or benefit from a scenario by weakening the Matrix of Domination, I analyze the structural determinants that make surveillance empowering or disempowering for these communities. I find that, on the one hand, surveillance can be an empowering tool for minoritized communities, supplying evidence of harm and innocence in cases of false accusations or when police officials refuse to believe their experiences due to discriminatory attitudes. On the other hand, surveillance also offers new opportunities for the private exploitation of the instruments of state power through corruption, and allows community-based moral policing to be carried out with greater success and efficiency. I argue that what ultimately determines how surveillance is experienced is not laws and policies, but rather how power is discretionarily exercised on the ground, refracted through the influence of cultural and political beliefs and discourse.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Commodifying and Consuming Endocrine Drugs in Republican China (1920s–1940s)</title>
<link href="https://hdl.handle.net/1721.1/158808" rel="alternate"/>
<author>
<name>Wang, Thelma Yuanzhi</name>
</author>
<id>https://hdl.handle.net/1721.1/158808</id>
<updated>2025-04-07T08:58:30Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Commodifying and Consuming Endocrine Drugs in Republican China (1920s–1940s)
Wang, Thelma Yuanzhi
After hormone pharmaceuticals were introduced into China in the early twentieth century, these substances became objects of fascination for a growing urban elite class. Drawing from newspapers, medical journals, and advertisements, this article examines the unique trajectories of hormone medicine in China. In conversation with previous scholarship on the dynamics of advertising and consuming hormones in China, this article examines specifically the discourses around the production and science of hormones. The circulation of hormones was informed by ideas from traditional Chinese medical cosmologies and enrolled in a nationalist movement encouraging the consumption of hormones produced by emerging Chinese medical entrepreneurs. This article provides a case study in a postcolonial context that problematizes historiographies depicting a linear transition of global hormone science from backwards to scientific, from traditional to modern.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling AI Copilots for Engineering Design With Parametric, Graph, And Component Inputs</title>
<link href="https://hdl.handle.net/1721.1/158805" rel="alternate"/>
<author>
<name>Zhou, Rui</name>
</author>
<id>https://hdl.handle.net/1721.1/158805</id>
<updated>2025-04-07T08:25:08Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Enabling AI Copilots for Engineering Design With Parametric, Graph, And Component Inputs
Zhou, Rui
Engineering design demands the synthesis of multimodal and often incomplete data—ranging from detailed parametric specifications and assembly graphs to visual references and textual descriptions. Despite growing interest in generative models for design ideation and exploration, state-of-the-art approaches struggle with incomplete inputs, lack of support for modalities other than text and image, and limited controllability. This thesis addresses these gaps by unifying two complementary advances:&#13;
&#13;
First, we introduce a graph-guided diffusion approach for parametric data completion. By coupling Graph Attention Networks with a diffusion-based imputation mechanism, our method acts as a highly accurate and creative design auto-completion system for partial designs. On a dataset of 12,500 bicycles, this design imputation framework achieves a root mean square error (RMSE) of approximately 0.92 on numerical features and an error rate of around 0.18 for categorical attributes, outperforming both classical imputation methods such as MissForest, hotDeck, and PPCA, and advanced diffusion-based baselines such as TabCSDI. Moreover, it achieves a Diversity Score of 3.10, surpassing all baselines, illustrating that the imputation process transforms incomplete data into multiple creative designs.&#13;
&#13;
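As a concrete note on the two error metrics above, computed here on synthetic values rather than the bicycle dataset:

    import numpy as np
    true_num = np.array([0.3, 1.2, 0.7])         # synthetic ground-truth numerical features
    pred_num = np.array([0.5, 1.0, 0.6])         # synthetic imputed values
    rmse = np.sqrt(np.mean((pred_num - true_num) ** 2))
    true_cat = ["road", "mtb", "road", "bmx"]    # synthetic categorical attributes
    pred_cat = ["road", "road", "road", "bmx"]
    error_rate = np.mean([t != p for t, p in zip(true_cat, pred_cat)])
    print(round(float(rmse), 3), float(error_rate))
&#13;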
Second, we develop a multimodal control architecture that can extend foundation models to condition their generation processes on all or a subset of parametric inputs, assembly graphs, component images, and textual constraints. This model substantially enhances both the controllability and precision of the generation process of foundational generative models, enabling control over modalities that was not possible before. We first show that our model excels at tasks that state-of-the-art models struggle with. We further validate the performance of our model with surrogate models that investigate individual features. Our model achieves 95% or greater R^2 scores on different continuous parameters. Further, we show that our model is able to generate creative and novel designs while maintaining a high level of precision. This enables engineers to guide generative outputs along precise dimensional, aesthetic, and functional targets. Across numerous trials of different settings, we observe that our pipeline robustly fuses tabular parametric information, assembly graphs, and reference component images to produce results aligned with both specification precision and creativity. &#13;
&#13;
Together, these contributions establish a coherent framework for AI-augmented design exploration. By viewing missing parameters as an opportunity for data-driven design autocompletion and by tightly integrating multimodal control over foundation models, this work elevates generative AI from a niche conceptual tool to a reliable design copilot. The implications of this thesis are profound: we show the possibilities and the pathways to AI copilot systems that can reduce data bottlenecks, broaden design spaces, and offer more thorough, constraint-adherent design candidates. As engineering problems grow in complexity and scale, the synergy of high-fidelity parametric imputation and multimodal control promises to accelerate innovation, cut development cycles, and guide human designers toward more inventive and manufacturable solutions.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning from Past Market Outcomes: Evidence from the Music Industry</title>
<link href="https://hdl.handle.net/1721.1/158804" rel="alternate"/>
<author>
<name>Du, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/158804</id>
<updated>2025-04-08T04:42:56Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Learning from Past Market Outcomes: Evidence from the Music Industry
Du, Jason
We leverage unique features of music albums to investigate how musicians learn from current products when developing new products. We find that songs on a musician’s next album tend to be more similar to the songs that are more successful on that musician’s current album. This effect is stronger when the musician has less experience, and when the song on the current album is more novel (for that musician). Our findings suggest that musicians learn from the success of previous songs when developing new songs, and that learning is stronger if the musician has more need to learn, and when the song contains more new information.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shut Up and Dribble? Exploring the Real Estate Strategies and Trends of NBA Teams</title>
<link href="https://hdl.handle.net/1721.1/158797" rel="alternate"/>
<author>
<name>Nguyen, Viet</name>
</author>
<id>https://hdl.handle.net/1721.1/158797</id>
<updated>2025-04-07T09:06:30Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Shut Up and Dribble? Exploring the Real Estate Strategies and Trends of NBA Teams
Nguyen, Viet
NBA teams have always had to think about real estate through one certain lens: the arena they play their 41 home games in (plus any subsequent playoff games). But now, NBA teams have evolved past thinking only about the arena. Teams have increasingly gotten involved in real estate development. This thesis seeks to explore the impact of real estate as a revenue driver for NBA teams, trends observed, and strategic decisions that teams must consider. It explores the current real estate activities of all 30 NBA teams and examines the choices that teams must make regarding arenas, real estate development, and practice facilities. The findings will help teams and municipalities understand best practices for team-driven real estate, and how strategies can vary team by team based on their situations.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating RAD Conversions: Suggestions for Public Housing Rehabilitation</title>
<link href="https://hdl.handle.net/1721.1/158788" rel="alternate"/>
<author>
<name>Yan, Yu</name>
</author>
<id>https://hdl.handle.net/1721.1/158788</id>
<updated>2025-04-08T04:38:17Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Navigating RAD Conversions: Suggestions for Public Housing Rehabilitation
Yan, Yu
Public housing in the United States, a critical resource for nearly 1.7 million residents, faces significant challenges due to aging infrastructure and chronic operating funding shortfalls. The Rental Assistance Demonstration (RAD) program, authorized by Congress in 2012, aims to address these issues by leveraging private financing to rehabilitate and modernize public housing properties. Although the RAD program has been around for more than a decade and leveraged over $18.5 billion of construction investments, close to 75% of the more than 2500 eligible local PHAs are yet to benefit from it. This thesis examines the evolution of RAD programs, including the two newer tools, RAD/Section 18 Blend and Faircloth-to-RAD, and their adoption by public housing authorities (PHAs).&#13;
The research incorporates a review of HUD programs and policies, RAD implementation data, and interviews with industry practitioners, including PHAs, developers, and consultants, to understand the hurdles preventing the adoption of the program and the characteristics of successfully structured projects. This thesis offers insights into how specific strategies are used to overcome the hurdles and provides practical recommendations for PHAs seeking to leverage RAD for public housing preservation and development. Key findings highlight the importance of utilizing available funding sources to achieve financial feasibility and of enhancing organizational skills and capacity.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Systems Architecture and the EVDT Framework&#13;
for Monitoring Methane Emissions in Rio de Janeiro</title>
<link href="https://hdl.handle.net/1721.1/158786" rel="alternate"/>
<author>
<name>Ajisafe Jr., Frederick Henry Oladimeji</name>
</author>
<id>https://hdl.handle.net/1721.1/158786</id>
<updated>2025-04-08T04:31:52Z</updated>
<published>2025-02-01T00:00:00Z</published>
<summary type="text">Using Systems Architecture and the EVDT Framework&#13;
for Monitoring Methane Emissions in Rio de Janeiro
Ajisafe Jr., Frederick Henry Oladimeji
Methane is a powerful greenhouse gas that has important implications for climate change. Over the past decade, satellites have rapidly improved their ability to detect this gas from above the atmosphere. This thesis uses two Systems Engineering frameworks, Systems Architecture and EVDT, to examine a case study of methane monitoring in Rio de Janeiro, Brazil. Data from one of these novel satellite systems, GHGSat, is collected over the Seropédica landfill near the city and compared to Rio’s own IPCC- and GPC-derived greenhouse gas inventory. This is followed by a participant observation in the summer of 2024 involving interviews, discussions, and site visits. A near-doubling of methane was observed over Seropédica, raising questions about the cause of this increase. The direct engagement with stakeholders provided by this study contributes to a literature gap in satellite monitoring of urban landfills in southeastern Brazil.
</summary>
<dc:date>2025-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Sparse Representations for Efficient Planning&#13;
in Uncertain Environments</title>
<link href="https://hdl.handle.net/1721.1/158518" rel="alternate"/>
<author>
<name>Veys, Yasmin</name>
</author>
<id>https://hdl.handle.net/1721.1/158518</id>
<updated>2025-04-07T09:19:01Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Designing Sparse Representations for Efficient Planning&#13;
in Uncertain Environments
Veys, Yasmin
We would like to enable robots to navigate efficiently in large, outdoor environments, where the traversabilities of many regions are unknown prior to planning. If we reason about the uncertainty in the environment instead of assuming that all unknown space is free to move through, we can generate policies that result in, on average, more efficient navigation. However, designing models that enable intelligent and efficient reasoning about environmental uncertainty is challenging. We would like our model to capture the underlying navigation problem and accurately represent the relevant uncertainty, yet remain as sparse as possible, so that planning remains tractable. Higher model expressiveness improves plan quality but reduces computational efficiency in planning, whereas higher model sparsity improves efficiency at the cost of plan quality. Balancing model expressiveness and model sparsity, thus, is crucial for generating high quality plans efficiently. In this thesis, we describe several useful models for planning under uncertainty and justify our decision to use weighted stochastic graphs with probabilistically traversable edges. We then present a novel method of efficiently generating sparse stochastic graphs given coarse information derived from overhead images of our environments. We test our approach in several simulated environments, demonstrating that our graphs effectively trade off between plan quality and planning efficiency for uncertainty-aware agents navigating in the graph. We then deploy our algorithms in a real-world environment on real-world hardware for single-agent and multi-agent teams. We discuss the challenges associated with using our approach in the field and the implications of our model assumptions not matching the real world. Finally, we present preliminary results for adding cost uncertainty to our graph-based representation.
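A minimal invented example of such a graph, with the expected cost of a simple try-the-shortcut-then-fall-back policy (all costs and probabilities are toy values):

    # weighted stochastic graph: each edge has a cost and a traversability probability
    edges = {
        "shortcut": {"cost": 4.0, "p_traversable": 0.6},
        "detour": {"cost": 9.0, "p_traversable": 1.0},
    }
    peek_cost = 1.0   # assumed cost of approaching the shortcut and discovering it is blocked
    p = edges["shortcut"]["p_traversable"]
    expected_cost = (p * edges["shortcut"]["cost"]
                     + (1 - p) * (peek_cost + edges["detour"]["cost"]))
    print(expected_cost)   # 0.6*4 + 0.4*10 = 6.4, beating the sure 9.0 detour on average

An uncertainty-aware planner prefers the risky shortcut here, whereas a planner that treats unknown space as blocked would always pay for the detour.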
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Power Efficient Analog Front End for Continuous&#13;
Ultrasound Imaging of the Bladder</title>
<link href="https://hdl.handle.net/1721.1/158517" rel="alternate"/>
<author>
<name>Manohara, Mohith</name>
</author>
<id>https://hdl.handle.net/1721.1/158517</id>
<updated>2025-04-07T08:30:15Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">A Power Efficient Analog Front End for Continuous&#13;
Ultrasound Imaging of the Bladder
Manohara, Mohith
Continuous bladder monitoring is important for the care of bedridden patients. One method to continuously monitor the bladder is to capture ultrasound images and use machine learning processing to measure the bladder volume from these images. Circuits for implementing these functions can be integrated onto a wearable device, and each of these functions can be integrated onto a single chip. In this thesis, we analyze ultrasound imaging in the context of the bladder to come up with algorithms and hardware to perform continuous bladder monitoring. We first assemble a discrete setup which can form ultrasound images. Using this setup, we describe a new algorithm for generating an ultrasound image that power-gates the hardware during the imaging process to save additional power when capturing the image. We combine these concepts into a single Analog Front End (AFE) chip that can capture images in a power efficient manner.
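A rough sketch of the power-gating intuition, using textbook pulse-echo timing and an assumed bladder depth window (none of these numbers are the chip's actual parameters): the receive chain only needs to be active between the echoes from the near and far edges of the target depth range.

    speed_of_sound = 1540.0            # m/s, typical soft tissue
    near_depth, far_depth = 0.04, 0.12 # metres; assumed bladder depth window
    t_on = 2 * near_depth / speed_of_sound    # round-trip time to near edge
    t_off = 2 * far_depth / speed_of_sound    # round-trip time to far edge
    active_fraction = (t_off - t_on) / t_off  # fraction of the echo window spent on
    print(t_on, t_off, round(active_fraction, 2))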
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantum free games</title>
<link href="https://hdl.handle.net/1721.1/158516" rel="alternate"/>
<author>
<name>Zhang, Tina</name>
</author>
<id>https://hdl.handle.net/1721.1/158516</id>
<updated>2025-04-07T08:30:02Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Quantum free games
Zhang, Tina
The complexity of free games with two or more classical players was essentially settled by Aaronson, Impagliazzo, and Moshkovitz [AIM14]. In the quantum world, there are two complexity classes that can be considered quantum analogues of classical free games: (1) AM*, the multiprover interactive proof class corresponding to free games with entangled players, and, somewhat less obviously, (2) BellQMA(2), the class of quantum Merlin-Arthur proof systems with two unentangled Merlins, whose proof states are separately measured by Arthur. In this work, we make significant progress towards a tight characterization of both of these classes. &#13;
1. We show a BellQMA(2) protocol for 3SAT on n variables, where the total amount of communication is Õ(√n). This answers an open question of Chen and Drucker [CD10] and also shows, conditional on ETH, that the algorithm of Brandão, Christandl and Yard [BCY10] for optimizing over separable states is tight up to logarithmic factors. &#13;
2. We show that AM*[provers = 2, q = O(1), a = polylog(n)] = RE, i.e. that free entangled games with constant-sized questions are as powerful as general entangled games. (In contrast, [AIM14] shows that classical free games are much weaker than general classical games.) We show this using a question “hyper-compression” theorem that iteratively applies the introspection technique of Ji et al. [JNV+20]. Our result is a significant improvement over the headline result of Ji et al., whose MIP* protocol for the halting problem has poly(n)-sized questions and answers. &#13;
3. By the same techniques, we obtain a zero-gap AM* protocol for a Π₂-complete language with constant-size questions and almost logarithmically (O(log n · log* n)) large answers, improving on the headline result of Mousavi, Nezhadi and Yuen [MNY21]. &#13;
4. Using a connection to the nonuniform complexity of the halting problem we show that any MIP* protocol for RE requires Ω(log n) bits of communication. It follows that our results in item 3 are optimal up to an O(log* n) factor, and that the gapless compression theorems of [MNY21] are asymptotically optimal. We conjecture that these bounds can be saturated in the gapped case as well.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guiding Navigation of Unknown Environments with Distant Visual Cues</title>
<link href="https://hdl.handle.net/1721.1/158514" rel="alternate"/>
<author>
<name>Fahnestock, Ethan Kendall</name>
</author>
<id>https://hdl.handle.net/1721.1/158514</id>
<updated>2025-04-07T09:13:25Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Guiding Navigation of Unknown Environments with Distant Visual Cues
Fahnestock, Ethan Kendall
While navigating unknown environments, robots rely primarily on proximate features for guidance in decision making, such as depth information from lidar or stereo used to build a costmap, or local semantic information from images. The limited range over which these features can be used can result in poor robot behavior when assumptions made by motion planning about the cost of the map beyond the range of proximate features misguide the robot. Integrating “far-field” image features that originate beyond these proximate features into the mapping pipeline has the promise of enabling more intelligent and aware navigation through unknown terrain. To navigate with far-field features, key challenges must be overcome. As far-field features are typically too distant to localize precisely, they are difficult to place in a map. Additionally, the large distance between the robot and these features makes connecting them to their navigation implications more challenging. In this thesis, we propose FITAM, an approach that learns from previous experience, in a self-supervised manner, to use far-field features to predict navigation costs that guide navigation through unknown environments. Unlike previous work, our approach does not rely on flat ground plane assumptions or range sensors to localize observations. We demonstrate the benefits of our approach through simulated trials and real-world deployment on a Clearpath Robotics Warthog navigating through a forest environment.
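A hedged sketch of the self-supervised labeling idea (the feature vectors, regions, and costs below are hypothetical): far-field features observed earlier are paired with the traversal costs the robot later measures when it actually reaches the corresponding region, yielding a supervised regression set without manual labels.

    import numpy as np
    # hypothetical log of far-field observations tagged with the region they depict
    log = [
        {"far_field_feature": [0.1, 0.9], "region": "meadow"},
        {"far_field_feature": [0.8, 0.2], "region": "thicket"},
    ]
    # costs later measured from the robot's own experience traversing each region
    measured_cost = {"meadow": 1.2, "thicket": 6.5}
    X = np.array([entry["far_field_feature"] for entry in log])
    y = np.array([measured_cost[entry["region"]] for entry in log])
    # (X, y) now train a cost-prediction model for regions seen only from afar
    print(X.shape, y)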
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Visible-Light Liquid-Crystal-Based Modulators and&#13;
Grating-Based Antennas</title>
<link href="https://hdl.handle.net/1721.1/158513" rel="alternate"/>
<author>
<name>Garcia Coleto, Andres</name>
</author>
<id>https://hdl.handle.net/1721.1/158513</id>
<updated>2025-04-07T08:27:26Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Integrated Visible-Light Liquid-Crystal-Based Modulators and&#13;
Grating-Based Antennas
Garcia Coleto, Andres
Current developments in integrated visible-light photonics have led to advancements in applications such as augmented-reality displays and quantum systems. However, the development of crucial integrated-photonics devices such as integrated grating-based antennas and integrated optical modulators has predominantly focused on the infrared spectrum, leaving a gap in visible-light technologies. This thesis addresses this gap by designing and experimentally demonstrating integrated visible-light liquid-crystal-based (LC-based) modulators and grating-based antennas. First, we provide a thorough design guide for integrated visible-light grating-based antennas and experimentally demonstrate five antennas with varying advanced capabilities, including the first visible-light unidirectionally-emitting grating-based antennas for integrated optical phased arrays (OPAs), facilitating the use of integrated OPAs for new visible-light applications. Second, we discuss the fabrication processes, considerations, and evaluation techniques for successful packaging of integrated LC modulators, supporting the broader integration of LC into silicon-photonics platforms, enabling more compact and efficient on-chip modulation. Third, we experimentally demonstrate the first integrated visible-light LC-based variable-tap amplitude modulators, enabling a compact and low-power solution to integrated visible-light amplitude modulation for high-density integrated visible-light systems. Fourth, we experimentally demonstrate the first 300-mm wafer-scale platform and fabrication process that results in mechanically-flexible photonic wafers and chips, enabling the field of integrated photonics to advance into new application areas that require flexible photonic chips.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contact Free Monitoring of Cell Density in a Bioreactor with Magnetic Resonance Relaxometry</title>
<link href="https://hdl.handle.net/1721.1/158512" rel="alternate"/>
<author>
<name>Gaensbauer, Hans</name>
</author>
<id>https://hdl.handle.net/1721.1/158512</id>
<updated>2025-04-07T08:24:50Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Contact Free Monitoring of Cell Density in a Bioreactor with Magnetic Resonance Relaxometry
Gaensbauer, Hans
Frequent, low-latency measurements of bioreactor culture growth are critical for achieving maximum culture efficiency and productivity. Typical cell density and viability measurements are made by removing a sample from the culture, but this approach is both slow and unsuitable for small culture volumes that cannot support frequent destructive sampling. In this work, magnetic resonance relaxometry measurements taken through the walls of the bioreactor tubing are used to monitor the cell density in near real-time. Using intracellular iron as the marker, the system detects variations in cell density in minutes, enabling rapid interventions to save the culture; such responsiveness would be impossible with the once-daily measurements taken by a traditional sampling-based culture analysis system. Given the biochemical importance of intracellular iron, these measurements have the potential to provide phenotypic information on cells without disrupting the bioreactor culture.
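As an illustration of the calibration step this kind of measurement enables (synthetic numbers; paramagnetic intracellular iron raises the transverse relaxation rate R2 = 1/T2 roughly with cell concentration):

    import numpy as np
    density = np.array([0.5e6, 1.0e6, 2.0e6, 4.0e6])   # cells/mL, synthetic calibration points
    r2_rate = np.array([1.8, 2.1, 2.8, 4.1])            # 1/s, synthetic relaxation rates
    slope, intercept = np.polyfit(density, r2_rate, 1)  # linear calibration fit
    new_reading = 3.0                                   # a fresh through-tubing R2 reading
    print((new_reading - intercept) / slope)            # inferred cell density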
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational and Statistical Detection of High-Dimensional Latent Space Structure in Random Networks</title>
<link href="https://hdl.handle.net/1721.1/158510" rel="alternate"/>
<author>
<name>Bangachev, Kiril</name>
</author>
<id>https://hdl.handle.net/1721.1/158510</id>
<updated>2025-04-07T08:44:36Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Computational and Statistical Detection of High-Dimensional Latent Space Structure in Random Networks
Bangachev, Kiril
A probabilistic latent space graph PLSG(n, Ω, D, σ) is parametrized by its number of vertices n, a probability distribution D over some latent space Ω, and a connection function σ : Ω × Ω → [0, 1] that is symmetric almost surely with respect to D. To sample from PLSG(n, Ω, D, σ), first, for each node i ∈ {1, . . . , n}, an independent latent (feature) vector x_i is drawn from Ω according to D. Then, for each pair of vertices i and j, an edge is drawn independently with probability σ(x_i, x_j). Interest in settings of high-dimensional latent spaces Ω has surged in recent years due to the rise of high-dimensional data and powerful compute.&#13;
&#13;
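As a concrete sketch of the definition above, specialized to the spherical threshold model studied in Chapter 2 (toy size and dimension), together with the signed triangle statistic:

    import numpy as np
    rng = np.random.default_rng(0)
    n, d, p = 200, 50, 0.5
    x = rng.normal(size=(n, d))
    x = x / np.linalg.norm(x, axis=1, keepdims=True)   # uniform latent vectors on the sphere
    tau = 0.0                 # expected density p = 1/2 corresponds to threshold 0
    A = np.heaviside(x @ x.T - tau, 0.0)               # edge iff inner product exceeds tau
    np.fill_diagonal(A, 0.0)
    S = A - p                                          # centered adjacency entries
    np.fill_diagonal(S, 0.0)
    # signed triangle count: each triangle contributes the product of its three
    # centered entries; trace(S^3) counts every triangle six times
    print(np.trace(S @ S @ S) / 6.0)
&#13;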
The features x₁, x₂, . . . , xₙ are oftentimes hidden due to privacy considerations or absence of measurement. This gives rise to many challenging statistical tasks. A prerequisite for nearly any more sophisticated inference and estimation task is the following simple hypothesis testing question. When can we even test for the presence of high-dimensional latent space structure? When is there a computationally efficient test and what could this computationally efficient test be? We address the following aspects of these questions in the thesis.&#13;
&#13;
Chapter 2: We focus on the canonical geometric setting in which latent vectors are distributed uniformly over the sphere and two vertices are joined whenever ⟨x_i, x_j⟩ ≥ τ_p, where τ_p is such that the expected graph density is p. A conjecture that has witnessed continuous interest and progress in the past 15 years is that the information-theoretically optimal test for detecting the spherical random geometric graph is the signed triangle count. We contribute to the existing literature by confirming that the signed triangle count is computationally optimal among low-degree polynomial tests. Our main technical ingredient is a strategy for bounding Fourier coefficients of random geometric graphs based on a representation of spherical random geometric graphs as Erdős-Rényi with few planted edges. This part of the thesis is based on [BB24b].&#13;
&#13;
Chapter 3: The conjectured optimality of the signed triangle count and the relevance of triangle-based statistics to the axiomatic triangle inequality of metric spaces have led to the conventional wisdom that triangle-based statistics are optimal in monotone random geometric graphs. We break this intuition by showing that in the case of a sup-norm geometry over the torus, the signed 4-cycle count is strictly stronger than the signed triangle count and is, furthermore, optimal among low-degree tests. Our main technical contribution is a novel strategy for bounding Fourier coefficients of random geometric graphs mimicking the cluster-expansion formula from statistical physics. This part of the thesis is based on [BB24a].&#13;
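The signed 4-cycle statistic admits a similarly direct (if slow) definition; the brute-force sketch below is our illustration, not code from the thesis:

    import numpy as np
    from itertools import combinations

    def signed_four_cycle_count(A, p):
        B = A - p
        np.fill_diagonal(B, 0.0)
        total = 0.0
        for i, j, k, l in combinations(range(A.shape[0]), 4):
            # the three distinct 4-cycles on the vertex set {i, j, k, l}
            for a, b, c, d in ((i, j, k, l), (i, k, j, l), (i, j, l, k)):
                total += B[a, b] * B[b, c] * B[c, d] * B[d, a]
        return total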
&#13;
Chapter 4: While random geometric graphs over the sphere with Euclidean geometry and the torus with sup-norm geometry are interesting mathematically, they are perhaps too simplistic to describe real-world networks. Hence, one should ask to what extent the results and techniques used for these models generalize to other probabilistic latent space graphs. We introduce a new family of probabilistic latent space graphs which we call random algebraic graphs. In random algebraic graphs, $\Omega$ is an algebraic group and $\sigma$ is compatible with the group structure. This family captures the aforementioned random geometric graphs as well as instances of the stochastic block model and random subgraphs of Cayley graphs. We have two sets of results. First, we develop a general criterion based solely on the magnitudes of the Fourier coefficients of $\sigma$ for the statistical hardness of detecting a random algebraic graph when the underlying group is the Boolean hypercube. We use this result to provide a uniform approach to many previously known results in the literature, but also highlight that certain structural properties of the connection function such as non-trivial symmetries and non-monotonicity yield novel behavior. Second, we exhibit a universal behavior for the impossibility of detecting a random algebraic graph based solely on the group size but not on the group structure. The result can be equivalently phrased in terms of the local structure of typical Cayley graphs. This part of the thesis is based on [BB23].
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Factorization and Compositional Generalization in Diffusion Models</title>
<link href="https://hdl.handle.net/1721.1/158507" rel="alternate"/>
<author>
<name>Liang, Qiyao</name>
</author>
<id>https://hdl.handle.net/1721.1/158507</id>
<updated>2025-04-08T04:30:30Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Factorization and Compositional Generalization in Diffusion Models
Liang, Qiyao
One of the defining features of human intelligence is compositionality—the ability to generate an infinite array of complex ideas from a limited set of components. This capacity allows for the creation of novel and intricate combinations of arbitrary concepts, enabling potentially infinite expressive power from finite learning experiences. A likely prerequisite for the emergence of compositionality is the development of factorized representations of distinct features of variation in the world. However, the precise mechanisms behind the formation of these factorized representations in the human brain, and their connection to compositionality, remain unclear. Diffusion models are capable of generating photorealistic images that combine elements not co-occurring in the training set, demonstrating their ability to compositionally generalize. Yet, the underlying mechanisms of such compositionality and its acquisition through learning are still not well understood. Additionally, the relationship between forming factorized representations of distinct features and a model’s capacity for compositional generalization is not fully elucidated. In this thesis, we explore a simplified setting to investigate whether diffusion models can learn semantically meaningful and fully factorized representations of composable features. We conduct extensive controlled experiments on conditional diffusion models trained to generate various forms of 2D Gaussian data. Through preliminary investigations, we identify three distinct learning phases in the model, revealing that while overall learning rates depend on dataset density, the rates for independent generative factors do not. Moreover, our findings show that models can represent continuous features of variation with semi-continuous, factorized manifolds, resulting in superior compositionality but limited interpolation over unseen values. Based on our investigations, we propose a more data-efficient training scheme for diffusion models and suggest potential future architectures for more robust and efficient generative models.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fabrication and Testing of A Middle-Ear Implanted Microphone</title>
<link href="https://hdl.handle.net/1721.1/158504" rel="alternate"/>
<author>
<name>Wawrzynek, Emma</name>
</author>
<id>https://hdl.handle.net/1721.1/158504</id>
<updated>2025-04-08T04:19:30Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Fabrication and Testing of A Middle-Ear Implanted Microphone
Wawrzynek, Emma
Cochlear implants are devices that can restore hearing to people with sensorineural deafness. Despite their name, cochlear implants rely on an external unit which contains components such as a microphone. This work presents the design, fabrication, and testing of an implantable middle-ear microphone called the “UmboMic” that measures the displacement of the tympanic membrane at the umbo. Particular consideration is paid to the biocompatibility of the microphone and its long-term durability in the body. The work discusses biocompatible materials, methods of encapsulation, and techniques for testing device robustness. &#13;
&#13;
The UmboMic is a piezoelectric displacement sensor that is implanted in the middle-ear cavity and contacts the umbo. As the umbo moves, it displaces the UmboMic, producing a charge that is amplified with a custom amplifier. The active area of the UmboMic is a triangular cantilever made from two layers of a piezoelectric thin film, polyvinylidene fluoride (PVDF). The bimorph design reduces common-mode noise as compared to our previous microphone designs. &#13;
&#13;
Extensive bench testing and experiments in fresh human cadavers demonstrate excellent microphone performance despite the use of biocompatible materials. The UmboMic sensor is well shielded against electromagnetic interference, tolerant of implantation variations, and can be fabricated repeatably with little difference in performance between sensors. It demonstrates high sensitivity from 100 Hz to above 8 kHz, with a sensitivity of 58 fC/Pa at 1 kHz and 230 fC/Pa at 2 kHz when including the outer ear. The noise floor of the UmboMic normalized over 1/3-octave bins is 10⁻² fC, and the A-weighted equivalent input noise of the UmboMic with the outer ear is 82.4 dB SPL from 100 Hz to 7 kHz. When tested in five different human cadavers, the UmboMic sensors work reliably despite anatomical differences. &#13;
&#13;
Internalizing the entire cochlear implant would greatly improve the quality of life of wearers. In their current form, cochlear implants cannot be used during sleep or vigorous activity, are susceptible to wind noise, and function poorly in loud environments. Implanting the entire device would mitigate these problems and provide users with the discretion of an invisible device. Our prototype demonstrates the feasibility of an implanted microphone and is an important step towards developing a totally implantable cochlear implant.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structured Handwritten Input for Dementia Classification</title>
<link href="https://hdl.handle.net/1721.1/158498" rel="alternate"/>
<author>
<name>Flores, Gerardo</name>
</author>
<id>https://hdl.handle.net/1721.1/158498</id>
<updated>2025-04-07T08:57:51Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Structured Handwritten Input for Dementia Classification
Flores, Gerardo
We explore the use of deep learning to score the Digit Symbol Substitution Test (DSST), a paper-and-pencil behavioral test useful in diagnosing Alzheimer’s. We train a model to classify Alzheimer’s based on the subject’s response to any one of the 108 queries in the test. We then combine predictions across the test to produce a new classifier that is considerably stronger. We also conduct an extensive search over architectures and optimization techniques that have proved useful in other settings. The ultimate result is a very strong classifier, with an AUC of 86% for a single-question response and 97.25% for an overall patient.
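One simple way to pool per-question scores into a per-patient score (our illustration; the abstract does not specify the exact combination rule) is to average predicted log-odds:

    import numpy as np

    def pool_question_probs(probs):
        # probs: the model's P(Alzheimer's) for each of a subject's responses.
        probs = np.clip(np.asarray(probs), 1e-6, 1 - 1e-6)
        logits = np.log(probs / (1 - probs))
        pooled = logits.mean()                # average evidence across questions
        return 1.0 / (1.0 + np.exp(-pooled))  # back to a probability

    print(pool_question_probs([0.7, 0.55, 0.8]))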
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatially-Adaptive LiDAR and Underwater Communications Using Integrated Optical Phased Arrays</title>
<link href="https://hdl.handle.net/1721.1/158495" rel="alternate"/>
<author>
<name>DeSantis, Daniel Markus</name>
</author>
<id>https://hdl.handle.net/1721.1/158495</id>
<updated>2025-04-07T08:27:29Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Spatially-Adaptive LiDAR and Underwater Communications Using Integrated Optical Phased Arrays
DeSantis, Daniel Markus
Silicon-photonics microsystems have enabled advanced optoelectronic capabilities in applications spanning from sensors to communication systems. In particular, integrated optical-phased-array-based (OPA-based) technologies, such as solid-state LiDAR and free-space optical communications (FSOC) systems, show promise to revolutionize the way we sense and communicate. This thesis enables new integrated-OPA-based solid-state beam-steering capabilities for these existing applications, as well as emerging spatially- and spectrally-demanding applications. First, we develop and experimentally demonstrate a novel multi-beam solid-state OPA-based LiDAR system capable of detecting and ranging multiple targets simultaneously, passively, and without rastering. Through this work, we demonstrate a new spatially-adaptive sensing modality for solid-state LiDAR that promises to reduce the data deluge associated with LiDAR sensing for autonomous systems. Second, we show the first, to the best of our knowledge, spiral integrated OPAs, enabling emission of focusing beams with tunable focal heights. This work introduces a first-of-its-kind integrated OPA architecture and, as such, enables new functionality for emerging applications of OPAs that require focusing operation, such as biophotonic optical tweezers and chip-based 3D printers. Third, we show the first visible-light integrated-OPA-based FSOC transmitter and use it to experimentally demonstrate the first integrated-OPA-based underwater-wireless-optical-communication (UWOC) link. This integrated OPA transmitter chip can reduce the size, weight, and mechanical complexity of the apparatus for UWOC systems.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributional Private Information Retrieval</title>
<link href="https://hdl.handle.net/1721.1/158492" rel="alternate"/>
<author>
<name>Lehmkuhl, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/158492</id>
<updated>2025-04-07T09:02:44Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Distributional Private Information Retrieval
Lehmkuhl, Ryan
A private-information-retrieval (PIR) scheme lets a client fetch a record from a remote database without revealing which record it has fetched. Classic PIR schemes treat all database records the same but, in practice, some database records are much more popular (i.e., commonly fetched) than others. We introduce distributional private information retrieval, a new type of PIR that can run faster than classic PIR—both asymptotically and concretely—when the popularity distribution is heavily skewed. Distributional PIR provides exactly the same cryptographic privacy notion as classic PIR. The speedup comes from providing a relaxed form of correctness: distributional PIR guarantees reliable retrieval for PIR queries that follow the popularity distribution, but only “best-effort” retrieval for out-of-distribution queries. We give several constructions of distributional-PIR schemes that make black-box use of existing standard PIR protocols. On a popularity distribution drawn from real-world Twitter data, distributional PIR reduces compute costs by 5.1–77× compared to existing techniques. Finally, we build CrowdSurf, an end-to-end system for privately streaming social-media posts, and show that our PIR schemes reduce the end-to-end server cost by 8×.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for Enhancing Robustness and Generalization in Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/158491" rel="alternate"/>
<author>
<name>Schechter, Amit</name>
</author>
<id>https://hdl.handle.net/1721.1/158491</id>
<updated>2025-04-07T09:24:00Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Methods for Enhancing Robustness and Generalization in Machine Learning
Schechter, Amit
We propose two methods for improving the subgroup robustness and out-of-distribution generalization of machine learning models. First, we introduce a formulation of Group DRO with soft group assignment, as sketched below. This formulation can be applied to data with noisy or uncertain group labels, or when only a small subset of the training data has group labels. We propose a modified loss function, explain how to apply it to data with noisy group labels as well as data with missing or few group labels, and perform experiments to demonstrate its effectiveness. In the second part, we propose an invariant decision-tree objective that aims to improve the robustness of tree-based models and address a common failure mode of existing methods for out-of-domain generalization. We demonstrate the benefits of this method both theoretically and empirically. Both approaches are designed to enhance machine learning models’ performance under distribution shift.
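A minimal sketch of the soft-assignment idea (our own illustration of one plausible formulation, not necessarily the thesis's exact loss):

    import numpy as np

    def soft_group_dro(sample_losses, group_probs, group_weights, eta=0.01):
        # sample_losses: (n,) per-example losses
        # group_probs:   (n, G) soft assignment of each example to G groups
        # group_weights: (G,) adversarial weights over groups
        mass = group_probs.sum(axis=0)                     # soft group sizes
        group_losses = (group_probs * sample_losses[:, None]).sum(0) / mass
        # Exponentiated-gradient ascent on the group weights, as in standard
        # Group DRO, followed by a weighted objective for the model step.
        w = group_weights * np.exp(eta * group_losses)
        w = w / w.sum()
        return (w * group_losses).sum(), w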
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing the Epistemic Uncertainty of Predictive Action Models and Sampling-Based Motion Planners for Robotic Manipulation</title>
<link href="https://hdl.handle.net/1721.1/158490" rel="alternate"/>
<author>
<name>Shaw, Seiji A.</name>
</author>
<id>https://hdl.handle.net/1721.1/158490</id>
<updated>2025-04-07T08:44:25Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Characterizing the Epistemic Uncertainty of Predictive Action Models and Sampling-Based Motion Planners for Robotic Manipulation
Shaw, Seiji A.
We derive methods to represent the epistemic uncertainty of models used in long-horizon robot planning problems in autonomous manipulation. We develop a representation of epistemic uncertainty for two types of models: uncertainty over the physical parameters of a model that predicts the observed outcome of a manipulation action and uncertainty over a geometric graph built by a sampling-based motion planner as a representation of the configuration space to answer a motion planning query. We propose a simple planning system that integrates these uncertainty characterizations to reason about the informational value of executing a manipulation action or allocating a number of samples to a sampling-based motion planner.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Burst Imaging with Learned Continuous Kernels</title>
<link href="https://hdl.handle.net/1721.1/158488" rel="alternate"/>
<author>
<name>Biscarrat, Camille</name>
</author>
<id>https://hdl.handle.net/1721.1/158488</id>
<updated>2025-04-07T08:59:17Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Burst Imaging with Learned Continuous Kernels
Biscarrat, Camille
Burst imaging is a technique that consists of taking multiple images in quick succession and merging them into one output image. By aligning and combining data from multiple frames, we can increase resolution, attenuate noise, reduce motion blur and expand the dynamic range to obtain a higher quality image. In this thesis, we propose a method that learns continuous kernels to process and merge burst frames. We show that the learned kernels adapt to local image information and take advantage of sub-pixel sample location information to demosaic, denoise and merge the burst into a high quality output.
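As a sketch of the underlying operation (ours, with a fixed Gaussian standing in for the learned kernel), merging reduces to kernel-weighting samples by their continuous sub-pixel offsets:

    import numpy as np

    def merge_at_pixel(offsets, values, sigma=0.5):
        # offsets: (k, 2) sub-pixel positions of aligned burst samples
        # values:  (k,) their intensities; a learned model would predict the
        # kernel, here a fixed Gaussian stands in.
        w = np.exp(-0.5 * (np.linalg.norm(offsets, axis=1) / sigma) ** 2)
        return (w @ values) / w.sum()

    offs = np.array([[0.1, -0.2], [0.4, 0.3], [-0.3, 0.1], [0.0, 0.0]])
    vals = np.array([0.81, 0.78, 0.83, 0.80])
    print(merge_at_pixel(offs, vals))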
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Magnetic Weyl Semimetals for Spintronic Applications</title>
<link href="https://hdl.handle.net/1721.1/158487" rel="alternate"/>
<author>
<name>He, Zhiping</name>
</author>
<id>https://hdl.handle.net/1721.1/158487</id>
<updated>2025-04-07T08:41:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Magnetic Weyl Semimetals for Spintronic Applications
He, Zhiping
Magnetic Weyl semimetals are a category of topological materials that hold promise for spintronic applications due to their unconventional transport properties, which arise from both bulk and surface topological states, as well as the rich interplay between band topology and magnetism. Among the family of semimetallic materials, the antiferromagnetic Weyl semimetals Mn₃X (X = Sn, Ge, etc.) and the ferromagnetic Weyl semimetal Co₂MnGa have attracted significant interest. So far, despite extensive theoretical and experimental investigations, the magnetic dynamics of Mn₃X and spin-polarized tunneling in Co₂MnGa-based spintronic devices remain incompletely explored.&#13;
&#13;
In this thesis, I establish a theoretical framework to describe the low-energy dynamics of strained Mn₃X. Using perturbation theory, I identify three distinct dynamic modes and derive a Landau-Lifshitz-Gilbert (LLG)-like equation to describe the uniform-mode dynamics. I also analyze the excitation of dissipative spin waves and the spin-superfluidity state in Mn₃X by extending the model to include spatial inhomogeneity. The analytical results are validated against numerical simulations based on fully coupled LLG equations, with good agreement. In addition, I study fully epitaxial magnetic tunnel junctions (MTJs) composed of Co₂MnGa. By growing Co₂MnGa/MgO/Co₂MnGa stacks under different conditions, I develop a series of MTJs with varying degrees of chemical ordering in the Weyl semimetal electrodes and compare their tunneling magnetoresistance (TMR). I find that the TMR is enhanced as the chemical ordering in Co₂MnGa improves. Our results reveal the relationship between spin tunneling in MTJs and the chemical order of the Co₂MnGa electrodes, offering insights into further enhancing TMR through Weyl semimetal engineering.
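For context, the standard Landau-Lifshitz-Gilbert equation (the classical single-moment form; the thesis derives an LLG-like analogue for the antiferromagnet) reads, in LaTeX notation,

    \frac{\partial \mathbf{m}}{\partial t} = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}} + \alpha\, \mathbf{m} \times \frac{\partial \mathbf{m}}{\partial t},

where $\mathbf{m}$ is the unit magnetization, $\mathbf{H}_{\mathrm{eff}}$ the effective field, $\gamma$ the gyromagnetic ratio, and $\alpha$ the Gilbert damping constant.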
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Superparamagnetic Tunnel Junctions for Reliable True Randomness and Efficient Probabilistic Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/158486" rel="alternate"/>
<author>
<name>Koh, Dooyong</name>
</author>
<id>https://hdl.handle.net/1721.1/158486</id>
<updated>2025-04-07T08:50:45Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Superparamagnetic Tunnel Junctions for Reliable True Randomness and Efficient Probabilistic Machine Learning
Koh, Dooyong
Physical devices exhibiting stochastic functions with low energy consumption and high device density have the potential to enable complex probability-based computing algorithms, accelerate machine learning tasks, and enhance hardware security. Recently, superparamagnetic tunnel junctions (sMTJs) have been widely explored for such purposes, leading to the development of limited-scale sMTJ-based systems. Existing sMTJs face significant scalability and reliability issues, however, because their intrinsically low energy barrier and correspondingly small device area result in high sensitivity to external perturbations, as well as large variations from device to device. Here, we present an experimental demonstration of three-terminal sMTJs as reliable and potentially scalable sources of true randomness in the field-free regime. By leveraging dual-current controllability and incorporating feedback, we stabilize the switching operation of the superparamagnets and achieve cryptographic-quality random bitstreams. The realization of controllable and robust true-random sMTJs underpins a general hardware platform for computing schemes that exploit the stochasticity of the physical world, as demonstrated by the generative-artificial-intelligence example in our experiment. Furthermore, we experimentally demonstrate a novel method of utilizing sMTJs as stochastic analog-to-digital converters (sADCs) in a crossbar array architecture for neural network acceleration, showing performance comparable to software implementations. This work highlights the potential of sMTJs to revolutionize energy-efficient computing and provides a foundation for future advancements in probabilistic computing and hardware security.
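A minimal sketch of the feedback idea (the device interface below is hypothetical; read_bit and adjust_current are invented stand-ins, not the paper's API):

    def stabilized_bits(read_bit, adjust_current, n_bits=100000,
                        target=0.5, gain=1e-6, decay=0.999):
        # Negative feedback on one control current keeps the empirical
        # probability of reading a 1 pinned at the target despite drift.
        p_hat, bits = target, []
        for _ in range(n_bits):
            b = read_bit()                         # sample the sMTJ state
            bits.append(b)
            p_hat = decay * p_hat + (1 - decay) * b
            adjust_current(gain * (target - p_hat))
        return bits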
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large Language Model Tools for Project-based Learning</title>
<link href="https://hdl.handle.net/1721.1/158485" rel="alternate"/>
<author>
<name>Ravi, Prerna</name>
</author>
<id>https://hdl.handle.net/1721.1/158485</id>
<updated>2025-04-07T09:24:42Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Large Language Model Tools for Project-based Learning
Ravi, Prerna
Project-Based Learning (PBL) has emerged as a prominent educational approach that immerses students in meaningful, real-world tasks, fostering deep and lasting learning experiences. Unlike traditional instructional methods, PBL emphasizes a student-centered pedagogy, where learners actively construct knowledge through exploration, collaboration, and reflection. This approach not only nurtures a love of learning but also encourages students to form personal connections to their academic experiences, making education more relevant and impactful. However, while PBL offers significant educational benefits, it also presents challenges for educators, including the complexities of designing and managing projects, assessing student learning, and balancing student autonomy with guided instruction. The advent of artificial intelligence (AI), particularly large language models (LLMs), holds promise for addressing these challenges by enhancing personalized learning, automating administrative tasks, and providing real-time feedback. To ensure that these AI tools are sustainable and conducive to diverse classroom contexts, it is crucial to involve educators in the design process from the outset.&#13;
&#13;
This thesis contributes to the intersection of PBL and generative AI by documenting a co-design process with interdisciplinary K-12 teachers aimed at integrating AI into PBL pedagogy. Through need-finding interviews, collaborative workshops, and iterative tool design, this research explores how AI can support teachers in implementing high-quality PBL while maintaining the integrity of student-centered learning. We also investigate how this technology can augment the current roles of teachers without replacing them, and support their professional growth.&#13;
&#13;
The thesis is structured around three key objectives: exploring the challenges educators face with PBL, co-designing AI tools that address these challenges, and proposing design guidelines for future AI tools in PBL classrooms. By refining the design of AI-powered PBL tools, enhancing teacher professional development resources, and ensuring these tools are accessible and equitable, educators will be better equipped to foster engaging, student-centered learning environments. These contributions not only encourage future research and development of AI educational tools, but also aim to foster a more immersive and constructionist learning approach in classrooms.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interpretable and Automated Bias Detection for AI in Healthcare</title>
<link href="https://hdl.handle.net/1721.1/158474" rel="alternate"/>
<author>
<name>Alexiev, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/158474</id>
<updated>2025-04-07T09:05:01Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Interpretable and Automated Bias Detection for AI in Healthcare
Alexiev, Christopher
Biases in artificial intelligence systems and the data they operate over are a major hurdle to their application in clinical and biomedical settings. Such systems have frequently been shown to fail to generalize from their training data to the real world environment and often display differing levels of accuracy over different population subgroups, which has detrimental effects on patients' quality of care and on healthcare equality. Here, we introduce an automated framework for identifying and understanding nontrivial sources of bias in healthcare datasets and AI models. Our framework is data and model agnostic and does not rely on human-developed heuristics or assumptions to uncover bias. We demonstrate its effectiveness by uncovering serious and nontrivial sources of bias in three widely used clinical datasets and one biomedical dataset, over the diverse tasks of diabetes risk prediction, lung cancer risk prediction, and biomolecular toxicity prediction. Our framework is used to uncover biases caused by patient BMI and computed tomography (CT) scanner type in the data used by a cutting-edge lung cancer risk prediction AI model, causing AUC drops on the order of ten percent.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optical Characterisation of Strain and Defects in 2D Photonic Materials</title>
<link href="https://hdl.handle.net/1721.1/158473" rel="alternate"/>
<author>
<name>Mukherjee, Abhishek</name>
</author>
<id>https://hdl.handle.net/1721.1/158473</id>
<updated>2025-04-07T08:37:16Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Optical Characterisation of Strain and Defects in 2D Photonic Materials
Mukherjee, Abhishek
Strain and defect engineering have proved to be powerful tools for modifying the optoelectronic properties of semiconductors. This thesis aims to advance the fundamental understanding of electronic and optical properties in material systems with broken inversion symmetries and to use this understanding to engineer in-situ, localized strain fields for tailoring photonic responses at the nanoscale. We address the fundamental question: how can we characterize the effect of strain and defects in two-dimensional photonic materials? To this end, we open with a review of current strategies in strain engineering, its fundamental consequences for electronic, optical, and magnetic properties, and the state-of-the-art applications of this technology in achieving band-gap-engineered straintronic devices. Touching on the advent of strain engineering for flexoelectricity (a spontaneous material polarization produced by a strain gradient that lifts inversion symmetry and can enable a bulk photogalvanic effect), we posit that meta-valent bonding plays a key role, showing that the majority of prime material candidates known to exhibit a large photogalvanic response share this bonding characteristic. The rest of the thesis focuses on characterizing layered metal thio(seleno)phosphates, a family of materials known for their magnetic, electronic, and nonlinear optical properties. We show how the optical properties of these materials can be modulated via different kinds of defects and strain. These photoactive materials can be pivotal to a future of strain-engineered flexoelectric devices that exploit the bulk photogalvanic effect to provide a new family of practical, deployable, self-powered, and low-cost photodetectors and integrated arrays with limit-breaking performance in the UV-to-LWIR spectral bands.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-accuracy, speed-optimized positioning system for electron beam lithography</title>
<link href="https://hdl.handle.net/1721.1/158470" rel="alternate"/>
<author>
<name>Dadok, Luděk.</name>
</author>
<id>https://hdl.handle.net/1721.1/158470</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">High-accuracy, speed-optimized positioning system for electron beam lithography
Dadok, Luděk.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1982; Vita.; Includes bibliographical references.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transfer function of heavy duty gas turbine combustor components.</title>
<link href="https://hdl.handle.net/1721.1/158468" rel="alternate"/>
<author>
<name>Farrell, Thomas Dominic.</name>
</author>
<id>https://hdl.handle.net/1721.1/158468</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Transfer function of heavy duty gas turbine combustor components.
Farrell, Thomas Dominic.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of iron tricarbonyl complexes.</title>
<link href="https://hdl.handle.net/1721.1/158467" rel="alternate"/>
<author>
<name>Fanelli, Joseph John.</name>
</author>
<id>https://hdl.handle.net/1721.1/158467</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">A study of iron tricarbonyl complexes.
Fanelli, Joseph John.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemistry, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of braking on automobile vehicle dynamics.</title>
<link href="https://hdl.handle.net/1721.1/158466" rel="alternate"/>
<author>
<name>Evans, David Gordon.</name>
</author>
<id>https://hdl.handle.net/1721.1/158466</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">The effect of braking on automobile vehicle dynamics.
Evans, David Gordon.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fractographic investigation of crack-closure.</title>
<link href="https://hdl.handle.net/1721.1/158465" rel="alternate"/>
<author>
<name>Faral, Michel.</name>
</author>
<id>https://hdl.handle.net/1721.1/158465</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Fractographic investigation of crack-closure.
Faral, Michel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An accident and seismic containment reliability study including statistical uncertainty</title>
<link href="https://hdl.handle.net/1721.1/158464" rel="alternate"/>
<author>
<name>Fardis, M. N.
            (Michael N.)</name>
</author>
<id>https://hdl.handle.net/1721.1/158464</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">An accident and seismic containment reliability study including statistical uncertainty
Fardis, M. N.
            (Michael N.)
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1978; Bibliography: leaves 180-183.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A piezoelectric force measuring system for human mobility analysis.</title>
<link href="https://hdl.handle.net/1721.1/158463" rel="alternate"/>
<author>
<name>Estey, Paul Norman.</name>
</author>
<id>https://hdl.handle.net/1721.1/158463</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">A piezoelectric force measuring system for human mobility analysis.
Estey, Paul Norman.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Bibliography: leaves 178-182.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of methods of determining flood damages and of evaluating flood control benefits</title>
<link href="https://hdl.handle.net/1721.1/158456" rel="alternate"/>
<author>
<name>Lampert, James B.
            (James Benjamin),
            1914-1978.</name>
</author>
<id>https://hdl.handle.net/1721.1/158456</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1939-01-01T00:00:00Z</published>
<summary type="text">A study of methods of determining flood damages and of evaluating flood control benefits
Lampert, James B.
            (James Benjamin),
            1914-1978.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1939; Includes bibliographical references (leaf 101).
</summary>
<dc:date>1939-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thermal shock resistance of ceramics.</title>
<link href="https://hdl.handle.net/1721.1/158452" rel="alternate"/>
<author>
<name>Goodof, Robert Steven.</name>
</author>
<id>https://hdl.handle.net/1721.1/158452</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Thermal shock resistance of ceramics.
Goodof, Robert Steven.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovative Structural and Mechanical Satellite Systems</title>
<link href="https://hdl.handle.net/1721.1/158321" rel="alternate"/>
<author>
<name>Thomas, Annika</name>
</author>
<id>https://hdl.handle.net/1721.1/158321</id>
<updated>2025-04-08T04:46:55Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Innovative Structural and Mechanical Satellite Systems
Thomas, Annika
This thesis covers two topics within the field of satellite mechanical engineering. The first is the structural and thermal design and validation of the BeaverCube 2 Earth-imaging CubeSat. The second is the electromagnetic modeling and simulation of an inductive spin drive for a novel magnetically levitated spherical control moment gyroscope for satellite attitude control.&#13;
&#13;
For the first topic on BeaverCube 2, the key tasks were to design and assemble the structure of the CubeSat, ensure that subsystems maintain their operating temperatures on orbit, and validate the structural integrity of the CubeSat structure during launch. We design and manufacture 24 components that integrate all subsystems of BeaverCube 2 and meet the size requirements of a 3U (10 cm × 10 cm × 30 cm) CubeSat, including a chassis, panels, a payload structure, and connectors for the stack of boards. Next, we ensure that no subsystem of the satellite exceeds its temperature limits through analytical and simulated thermal analysis, showing that during worst-case hot (70° beta angle) and worst-case cold (70° beta angle) orbits, no subsystem comes within 5 °C of its operating temperature limits. Finally, we analyze the structure of BeaverCube 2 to validate that the components can structurally withstand the 4-7 G linear accelerations, 13.5 rad/s radial accelerations, 1200 N side-rail loads, and random vibration environment that may be experienced during launch [1]. The design is shown to be robust in these conditions, with margins of safety in stress ranging from 19.97 to 37.56 and deformation of the stack of circuit boards not exceeding 0.05 mm. The minimum frequency of the vibration modes throughout the structure occurs at 623 Hz, well above the allowed minimum of 100 Hz.&#13;
&#13;
For the second topic of modeling the spherical control moment gyroscope, the key tasks were to design an actuation method using inductive drive and to experimentally validate a closed-loop controller for suspension of a prototype. For the actuation method, we present the electromagnetic modeling of an inductive spin drive, including analytical derivations of a bulk conductivity model and a skin-current model. The analytical skin model shows that an inductive drive with a rotating dipole magnetic field can generate a peak torque of 130 &#120583;N·m. We simulate both models with a rotating dipole and a rotating quadrupole stator drive configuration. Next, we successfully magnetically levitate a permanent-magnet rotor prototype. We develop an analytical plant model for the system and a controller for closed-loop suspension with a 40 Hz crossover and 20° phase margin, then present preliminary experimental results.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Screen Time</title>
<link href="https://hdl.handle.net/1721.1/158318" rel="alternate"/>
<author>
<name>Landman, Jeffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/158318</id>
<updated>2025-04-08T04:41:06Z</updated>
<published>2021-02-01T00:00:00Z</published>
<summary type="text">Screen Time
Landman, Jeffrey
In Times Square, architecture is inextricable from mediated representations. The place is dislocated by the screens that envelop its buildings and the other screens, around the world, upon which its image is ceaselessly presented. The neighborhood itself is named after the Times Tower, which was opened in 1905 as the office and printing press of The New York Times, and remains at the center of the square today, entirely empty, voided by the advertising value of its screens. But this condition is not a contemporary anomaly. If the screens, flowed through by consumer desire, currently vaporise the building’s edge, in 1904, before it was even occupied, the building summoned the city with the results of the general election, broadcast to the metropolis via searchlight. The building has always extended its edge, projecting public messages while concealing private concerns.&#13;
&#13;
This thesis understands the building as one actor in a media apparatus: a network of interconnections between broadcasting devices and media, infrastructure, public and political events, development policy and financial systems. The Tower indexes 20th century architecture’s participation in this media apparatus, telling a story in which communication and the distribution of power predate and outlast inhabitation, a story in which occupation is not part of the program. The thesis tracks the tower through six innovative broadcasting devices which the building sponsored, including the world’s first moving electric sign, the New Year’s Eve Ball, the world’s first changeable architectural screen, and the world’s largest open architectural competition. &#13;
&#13;
The form of the thesis is a short movie that uses found footage and computer generated animations to apprehend the Tower amid its myriad images. In designing for animated representation the thesis is positioned in a lineage of paper architectures, proposing a form of architectural production which embraces and redirects the forces of the media apparatus. The movie reconfigures, misaligns and misuses its historical sources to reproduce and subvert the Screen Time from which architecture can now never be distinct.
</summary>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Product Purity Prediction and Anomaly Detection for an Automated Peptide Manufacturing Platform</title>
<link href="https://hdl.handle.net/1721.1/158317" rel="alternate"/>
<author>
<name>Yang, Liudi</name>
</author>
<id>https://hdl.handle.net/1721.1/158317</id>
<updated>2025-04-08T04:46:19Z</updated>
<published>2020-09-01T00:00:00Z</published>
<summary type="text">Product Purity Prediction and Anomaly Detection for an Automated Peptide Manufacturing Platform
Yang, Liudi
This thesis aims to develop and deploy a method of predicting product purity and automating anomaly detection for Mytide Therapeutics’ peptide manufacturing platform. A baseline study revealed how early purity prediction and anomaly reporting could decrease the production cycle time, manual data review, and chemical waste produced by the synthesis process. The most important tool for making purity predictions is UV absorbance measured on the byproducts and excess reagents that leave the reactor where the peptides are made. A large part of this thesis involved improving the quality of the UV data so that purity predictions could be made from the improved traces. Sensor data from historical runs, including pressure, temperature, and flow rates, were analyzed to characterize several common anomalies. The reporting system takes in live data and alerts the relevant parties when limits are reached, so that corrective action can be implemented quickly. The anomaly-tracking code also generates a report to either be viewed on the user interface or stored in the backend database with the run’s historical data. Implementation of the described system improvements had several positive impacts on the workflow. The live anomaly alerts allowed issues to be reported to the relevant parties upon occurrence, which increased the uptime of the system. The anomaly report, which is tagged to each peptide synthesis run, allows for historical data evaluation and easy decision-making for advancing the peptide to the next step of the process. The purity prediction allowed certain poor-purity peptides to be identified earlier, by 27% of the production time. Together, these system improvements helped advance the company’s peptide manufacturing platform toward fully automated decision-making.
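A minimal sketch of such a limit-based live check (our illustration; the names and limits are invented, not the platform's code):

    def check_live_sample(sample, limits, alert):
        # sample: latest sensor readings, e.g. {"pressure_bar": 6.2, ...}
        # limits: sensor name -&gt; (low, high) allowed range
        events = []
        for name, value in sample.items():
            low, high = limits[name]
            if not low &lt;= value &lt;= high:
                events.append((name, value))
                alert(name, value)  # notify the relevant parties immediately
        return events  # also logged into the run's anomaly report

    check_live_sample({"pressure_bar": 9.5},
                      {"pressure_bar": (2.0, 8.0)},
                      lambda n, v: print("anomaly:", n, v))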
</summary>
<dc:date>2020-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>“Biopolitics from below?” — Lessons of Emergent Urban Governance Trend Under Covid-19 in China</title>
<link href="https://hdl.handle.net/1721.1/158308" rel="alternate"/>
<author>
<name>Shao, Yu</name>
</author>
<id>https://hdl.handle.net/1721.1/158308</id>
<updated>2025-04-08T04:49:38Z</updated>
<published>2021-02-01T00:00:00Z</published>
<summary type="text">“Biopolitics from below?” — Lessons of Emergent Urban Governance Trend Under Covid-19 in China
Shao, Yu
This thesis interrogates emergent urban governance trends in China in response to the COVID-19 crisis, with a particular focus on the use of narratives of epidemic and state emergency, as well as governance strategies during the pandemic and in the so-called post-COVID era. More importantly, this thesis investigates people’s responses to emergency policies: the compliances and the creative strategies that people have adopted to demonstrate their resistance. Using a combination of ethnographic data and archival research, this thesis covers five major themes: a) the impacts of the different outbreak narratives perpetuated on the Internet; b) left-wing scholars’ view (or hope) for the rise of socialism and how the Chinese state has used the socialist narrative to build up its international image; c) the strong comeback of capitalist practices as the pandemic exacerbated the precariousness of work; d) how the pandemic has been used as a justification to impose panoptic surveillance and control on Chinese citizens and to demand absolute obedience to government policies, as well as how formulaic practices dominated the post-COVID landscape; and finally, e) people’s responses and sentiments toward government policies such as lockdowns and social distancing as displayed on social media platforms. It concludes by arguing that even in an autocratic state with increasingly tightened control justified by the epidemic, people are not passive recipients of such policies. They have come up with creative strategies to express their resistance and to negotiate with the policies. It further argues that in China, COVID-19 has aroused a new wave of active civil participation, with citizens discussing politics openly, starting from pandemic-related topics and extending to freedom of speech at large. Complicating what Panagiotis Sotiris terms biopolitics from below, it suggests that the creative posts on social media platforms are a savvy means of claiming back our bodies.
</summary>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying Statistical Analysis and Machine Learning to Improve the Ice Sensing Algorithm</title>
<link href="https://hdl.handle.net/1721.1/158268" rel="alternate"/>
<author>
<name>Herron, Lucas A.</name>
</author>
<id>https://hdl.handle.net/1721.1/158268</id>
<updated>2025-04-08T04:16:12Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Applying Statistical Analysis and Machine Learning to Improve the Ice Sensing Algorithm
Herron, Lucas A.
The detection of sea ice is a major problem faced by Argo floats operating in polar regions. In these areas, the presence of sea ice threatens to damage or destroy floats in the event of an impact at the surface. While methods have been proposed and implemented to combat this danger, the most successful of which is the Ice Sensing Algorithm (ISA), further work is necessary to fully mitigate the risks, particularly in the Arctic. In this analysis, past CTD profiles from the Arctic are compiled and matched with sea ice data to examine the performance of the ISA and recommend potential changes and new methods to further improve its accuracy. This is accomplished by fitting the data to statistical and machine learning models to predict the presence of ice and analyzing the results. Results show that both modifications to current methods and the inclusion of new variables may increase the predictive power of the ISA. Specifically, the analysis shows that the use of point measurements (as opposed to a metric over a pressure range) at the shallowest allowable depth provides the best performance. The additional inclusion of practical salinity and time of year as predictive variables also increases the performance of the algorithm. Results and statistics on the performance of the algorithm are provided and analyzed in various regions.
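As a schematic of the model-fitting step (entirely illustrative: the data below are synthetic, and the feature set merely mirrors the variables named above):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500
    temp = rng.normal(-1.2, 0.5, n)   # point temperature at shallowest depth, deg C
    psal = rng.normal(32.0, 1.0, n)   # practical salinity
    doy = rng.uniform(0, 365, n)      # day of year
    ice = (temp &lt; -1.4).astype(int)   # toy labels: colder water, more ice

    X = np.column_stack([temp, psal,
                         np.sin(2 * np.pi * doy / 365.25),
                         np.cos(2 * np.pi * doy / 365.25)])
    model = LogisticRegression().fit(X, ice)
    p_ice = model.predict_proba(X)[:, 1]   # surface only when p_ice is small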
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Robotic Manipulation in Remote Environments with Shared Autonomy</title>
<link href="https://hdl.handle.net/1721.1/158267" rel="alternate"/>
<author>
<name>Phung, Amy</name>
</author>
<id>https://hdl.handle.net/1721.1/158267</id>
<updated>2025-04-07T09:18:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enabling Robotic Manipulation in Remote Environments with Shared Autonomy
Phung, Amy
The evolution of robotics technology continues to facilitate exploration and scientific study in remote environments, enabling research in areas that were previously impossible to reach. Robots operating in space and marine environments encounter similar operational challenges, as both face high operational costs, bandwidth-limited conditions, and natural, unstructured environments where dynamic obstacles might be present. Within the oceanographic domain, conventional deep-sea sampling operations involve remotely operated vehicles (ROVs) equipped with robotic manipulator arms to complete dexterous tasks at depth. While effective, deep-sea ROV operations require specialized instrumentation, highly trained shipboard personnel, and large oceanographic vessels, which make deep-sea samples inaccessible to most.&#13;
This thesis presents the SHared Autonomy for Remote Collaboration (SHARC) framework, and evaluates its utility within an oceanographic context. By leveraging shared autonomy, SHARC enables shore-side operators to collaboratively carry out underwater sampling and manipulation tasks, regardless of their prior manipulator operations experience. With SHARC, operators can conduct manipulation tasks using natural language and hand gestures through a virtual reality (VR) interface. The interface provides remote operators with a contextual 3D scene understanding that is updated according to bandwidth availability.&#13;
Evaluation of the SHARC framework through controlled lab experiments indicates that SHARC’s VR interface enables novice operators to complete manipulation tasks in framerate-limited conditions (i.e., &lt;0.5 frames per second) faster than expert pilots using the conventional topside controller. For both novice and expert users, the VR interface also increased the task completion rate and improved sampling precision. During sea trials, SHARC enabled collection of an underwater in-situ X-ray fluorescence (XRF) measurement at more than 1000 meters water depth in the Eastern Pacific with centimeter-level precision by remote scientists with no prior piloting experience. This demonstration provides compelling evidence of SHARC’s utility for conducting delicate operations in unstructured environments across bandwidth-limited communications, which holds relevance for improving operations in other sensitive domains where dexterity is required. SHARC’s ability to relax infrastructure requirements and engage novice shore-side users provides a promising avenue for democratizing access to deep-sea research.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of compensators for double integrator plants</title>
<link href="https://hdl.handle.net/1721.1/158231" rel="alternate"/>
<author>
<name>Schwartz, Adam L.</name>
</author>
<id>https://hdl.handle.net/1721.1/158231</id>
<updated>2025-10-30T17:03:41Z</updated>
<published>1989-01-01T00:00:00Z</published>
<summary type="text">Comparison of compensators for double integrator plants
Schwartz, Adam L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1989; Includes bibliographical references (leaves 186-189).
</summary>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Issues in new product development--the introduction of tape automated bonding technology</title>
<link href="https://hdl.handle.net/1721.1/158228" rel="alternate"/>
<author>
<name>Maggs, Virginia Loop.</name>
</author>
<id>https://hdl.handle.net/1721.1/158228</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1990-01-01T00:00:00Z</published>
<summary type="text">Issues in new product development--the introduction of tape automated bonding technology
Maggs, Virginia Loop.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1990; Includes bibliographical references (leaves 142-144).
</summary>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characteristics of electric strain gages at low temperatures</title>
<link href="https://hdl.handle.net/1721.1/158225" rel="alternate"/>
<author>
<name>Sevand, Ali H.</name>
</author>
<author>
<name>Day, Emmett E.</name>
</author>
<id>https://hdl.handle.net/1721.1/158225</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1946-01-01T00:00:00Z</published>
<summary type="text">Characteristics of electric strain gages at low temperatures
Sevand, Ali H.; Day, Emmett E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1946; Bibliography: leaf 21.
</summary>
<dc:date>1946-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of the reaction of sulfur vapor with a metallic oxide</title>
<link href="https://hdl.handle.net/1721.1/158223" rel="alternate"/>
<author>
<name>Hard, Robert A.</name>
</author>
<id>https://hdl.handle.net/1721.1/158223</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1949-01-01T00:00:00Z</published>
<summary type="text">An investigation of the reaction of sulfur vapor with a metallic oxide
Hard, Robert A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1949; Bibliography: leaf 59.
</summary>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A compressible, high frequency numerical model of helicopter noise due to blade/vortex interaction</title>
<link href="https://hdl.handle.net/1721.1/158221" rel="alternate"/>
<author>
<name>Lima, Luiz Hamilton de Resende.</name>
</author>
<id>https://hdl.handle.net/1721.1/158221</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1980-01-01T00:00:00Z</published>
<summary type="text">A compressible, high frequency numerical model of helicopter noise due to blade/vortex interaction
Lima, Luiz Hamilton de Resende.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1980; Includes bibliographical references.
</summary>
<dc:date>1980-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of production smoothing in a job shop environment</title>
<link href="https://hdl.handle.net/1721.1/158219" rel="alternate"/>
<author>
<name>Cruickshanks, Allan Benjamin.</name>
</author>
<author>
<name>Drescher, Robert D.</name>
</author>
<id>https://hdl.handle.net/1721.1/158219</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">A study of production smoothing in a job shop environment
Cruickshanks, Allan Benjamin.; Drescher, Robert D.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1982; Vitae.; Includes bibliographical references.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of multi position letter sorting machine operation in the United States Postal Service</title>
<link href="https://hdl.handle.net/1721.1/158218" rel="alternate"/>
<author>
<name>Cruce, A. C.,
            1858-1919.</name>
</author>
<author>
<name>Lee, Jerry Kenneth.</name>
</author>
<id>https://hdl.handle.net/1721.1/158218</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">A study of multi position letter sorting machine operation in the United States Postal Service
Cruce, A. C.,
            1858-1919.; Lee, Jerry Kenneth.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1982; Includes bibliographical references.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of the air bleeds and several typical idling systems of the carburetors</title>
<link href="https://hdl.handle.net/1721.1/158211" rel="alternate"/>
<author>
<name>Ding, Qinghua.</name>
</author>
<id>https://hdl.handle.net/1721.1/158211</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1946-01-01T00:00:00Z</published>
<summary type="text">Analysis of the air bleeds and several typical idling systems of the carburetors
Ding, Qinghua.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1946; Bibliography: leaf 59.
</summary>
<dc:date>1946-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Double trouble: Predicting new variant counts across two heterogeneous populations</title>
<link href="https://hdl.handle.net/1721.1/158206" rel="alternate"/>
<author>
<name>Shen, Yunyi</name>
</author>
<id>https://hdl.handle.net/1721.1/158206</id>
<updated>2025-04-07T08:53:12Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Double trouble: Predicting new variant counts across two heterogeneous populations
Shen, Yunyi
Collecting genomics data across multiple heterogeneous populations (e.g., across different cancer types) has the potential to improve our understanding of disease. Despite sequencing advances, though, resources often remain a constraint when gathering data. So it would be useful for experimental design if experimenters with access to a pilot study could predict the number of new variants they might expect to find in a follow-up study: both the number of new variants shared between the populations and the total across the populations. While many authors have developed prediction methods for the single-population case, we show that these predictions can fare poorly across multiple populations that are heterogeneous. We prove that, surprisingly, a natural extension of a state-of-the-art single-population predictor to multiple populations fails for fundamental reasons. We provide the first predictor for the number of new shared variants and new total variants that can handle heterogeneity in multiple populations. We show that our proposed method works well empirically using real cancer and population genetics data.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Influence of Root Geometry on Soil Cohesion and Anchoring Ability through Geologic Time</title>
<link href="https://hdl.handle.net/1721.1/158205" rel="alternate"/>
<author>
<name>Colicci, Vittorio</name>
</author>
<id>https://hdl.handle.net/1721.1/158205</id>
<updated>2025-04-07T08:33:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Influence of Root Geometry on Soil Cohesion and Anchoring Ability through Geologic Time
Colicci, Vittorio
Vegetation has become ubiquitous among most modern landscapes. However, for much of the Earth’s history, land plants were absent. Their rapid diversification throughout the Devonian and Carboniferous brought about a massive shift in geomorphology and landscape evolution. Complex rooting structures were the principal agents of change, mechanically reinforcing their substrates and generating cohesive sediments through weathering. This work examines the root systems of three major tree genera from these periods: Calamophyton, Lepidodendron, and Calamites. Simplified reconstructions were designed, 3D printed, and uprooted from a sand testbed to explore the effects of root geometry on anchoring ability. Force and displacement data were gathered for each model and used to calculate anchoring strength and uprooting work. Force laws were then derived to approximate the anchoring contributions of root weight, sediment weight, static friction, and shear strength. This analysis revealed a strong dependence on the span, surface area, and volume of the root system, which were used to normalize values across different geometries. The Calamophyton model required the greatest uprooting force per unit length, whereas the Lepidodendron model required the greatest uprooting force per unit area and volume. These results were interpreted within the environmental context of each genus alongside particular features of root geometry. Calamophyton contributed less to soil cohesion due to its simple unbranched architecture; however, it likely increased wetland habitability for subsequent species. Meanwhile, Lepidodendron would have bolstered cohesion on account of its densely-packed dichotomous rootlets. Calamites is unique in its clonal reproductive habit and nodal branching architecture, which could have helped it colonize particularly unstable environments. We maintain that the earliest trees played a key role in surface stabilization within their ecosystems and likely paved the way for species that followed.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive Least-Squares</title>
<link href="https://hdl.handle.net/1721.1/158204" rel="alternate"/>
<author>
<name>Min, Youngjae</name>
</author>
<id>https://hdl.handle.net/1721.1/158204</id>
<updated>2025-04-07T09:26:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive Least-Squares
Min, Youngjae
While deep neural networks are capable of achieving state-of-the-art performance in various domains, their training typically requires iterating for many passes over the dataset. However, due to computational and memory constraints and potential privacy concerns, storing and accessing all the data is impractical in many real-world scenarios where the data arrives in a stream. In this thesis, we investigate the problem of one-pass learning, in which a model is trained on sequentially arriving data without retraining on previous datapoints. Motivated by the increasing use of overparameterized models, we develop Orthogonal Recursive Fitting (ORFit), an algorithm for one-pass learning which seeks to perfectly fit every new datapoint while changing the parameters in a direction that causes the least change to the predictions on previous datapoints. By doing so, we bridge two seemingly distinct algorithms in adaptive filtering and machine learning, namely the recursive least-squares (RLS) algorithm and orthogonal gradient descent (OGD). Our algorithm uses the memory efficiently by exploiting the structure of the streaming data via an incremental principal component analysis (IPCA). Further, we show that, for overparameterized linear models, the parameter vector obtained by our algorithm is what stochastic gradient descent (SGD) would converge to in the standard multi-pass setting. Finally, we generalize the results to the nonlinear setting for highly overparameterized models, relevant for deep learning. Our experiments show the effectiveness of the proposed method compared to the baselines.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Precision Needle for Injection of Fluid into the Suprachoroidal Space of the Eye for the Treatment of Retinal Detachment</title>
<link href="https://hdl.handle.net/1721.1/158203" rel="alternate"/>
<author>
<name>Rutherford, Emma</name>
</author>
<id>https://hdl.handle.net/1721.1/158203</id>
<updated>2025-04-08T04:40:34Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Design of a Precision Needle for Injection of Fluid into the Suprachoroidal Space of the Eye for the Treatment of Retinal Detachment
Rutherford, Emma
Rhegmatogenous retinal detachment (RRD) is a vision-threatening condition that affects 10 to 18 per 100,000 people in the United States annually [1]. The current standard for treatment is pars plana vitrectomy (PPV), which is an invasive and expensive surgical procedure that leaves patients unable to perform usual activities for four to six weeks. In addition, current methods tend to produce distorted vision upon recovery. In-office Suprachoroidal Viscopexy™ (SCVEXY™) is a minimally invasive technique recently developed by Dr. Rajeev Muni for treating rhegmatogenous retinal detachment (RRD), which has been performed on a handful of people [2]. This procedure has the potential to greatly reduce the cost and recovery time of RRD while also improving the quality of the repair. It can be performed with no incision, no tamponade agent, and no patient post-op positioning requirements [2]. SCVEXY works by injecting viscous fluid into the suprachoroidal space, a “potential space” between the sclera and choroid, creating a “bleb” of fluid underneath the tear that pushes the choroid towards the retina and allows it to reattach. However, difficulty in safely injecting into this space at the location of the retinal tear currently limits the widespread utilization of the technique. If this procedure were made reliably safe, it could greatly change how retinal detachments are treated and improve patient outcomes. The primary difficulty arises in precisely locating the suprachoroidal space in order to inject the viscous fluid. The thickness of the sclera varies from patient to patient and between locations on the eye. Additionally, the scleral and choroidal tissues are very thin, leaving little room for positional error. Hemorrhage may occur if the needle punctures through the choroid and into the subretinal space, which could lead to poor outcomes. This work presents a device developed to minimally invasively reach posterior segments of the eye, deploy an injection needle in-situ with high resolution, sense when the needle tip has passed into the suprachoroidal space (SCS), and inject a viscous fluid. Not only will this device be used to treat retinal detachment in a minimally invasive manner, but it could also be used for drug injection or fluid aspiration via the suprachoroidal and subretinal spaces for treatment of a variety of posterior ocular diseases.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nonrigid single-axis space integrator dynamics</title>
<link href="https://hdl.handle.net/1721.1/158115" rel="alternate"/>
<author>
<name>Shaw, Edward Eugene.</name>
</author>
<id>https://hdl.handle.net/1721.1/158115</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Nonrigid single-axis space integrator dynamics
Shaw, Edward Eugene.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1964; Includes bibliographical references (leaves 63-64).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stratospheric radiance</title>
<link href="https://hdl.handle.net/1721.1/158113" rel="alternate"/>
<author>
<name>Schweickart, Rusty.</name>
</author>
<id>https://hdl.handle.net/1721.1/158113</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">Stratospheric radiance
Schweickart, Rusty.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1963; Includes bibliographical references (leaves 68-70).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A comparison of the existing methods of studying the stability of earth slopes</title>
<link href="https://hdl.handle.net/1721.1/158111" rel="alternate"/>
<author>
<name>La Casta-Sanchez, Salvador.</name>
</author>
<id>https://hdl.handle.net/1721.1/158111</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1959-01-01T00:00:00Z</published>
<summary type="text">A comparison of the existing methods of studying the stability of earth slopes
La Casta-Sanchez, Salvador.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1959; Includes bibliographical references (leaf 15).
</summary>
<dc:date>1959-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning models</title>
<link href="https://hdl.handle.net/1721.1/158103" rel="alternate"/>
<author>
<name>Crooks, Lawrence.</name>
</author>
<id>https://hdl.handle.net/1721.1/158103</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">Planning models
Crooks, Lawrence.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1982; Bibliography: leaves 121-127.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An econometric/engineering model of United States demand for semi-fabricated copper products disaggregated by shape and end-use sector : and an econometric/engineering model of world demand for semi-fabricated copper products disaggregated by major consuming area</title>
<link href="https://hdl.handle.net/1721.1/158101" rel="alternate"/>
<author>
<name>Cummings, Mary Rowena.</name>
</author>
<id>https://hdl.handle.net/1721.1/158101</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">An econometric/engineering model of United States demand for semi-fabricated copper products disaggregated by shape and end-use sector : and an econometric/engineering model of world demand for semi-fabricated copper products disaggregated by major consuming area
Cummings, Mary Rowena.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1982; Bibliography: leaves 174-177.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Levels, layers, and planes : the framework of a system of knowledge representation semantics</title>
<link href="https://hdl.handle.net/1721.1/157977" rel="alternate"/>
<author>
<name>Smith, Brian Cantwell.</name>
</author>
<id>https://hdl.handle.net/1721.1/157977</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Levels, layers, and planes : the framework of a system of knowledge representation semantics
Smith, Brian Cantwell.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Bibliography: leaves 199-203.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decision making for energy conservation in existing commercial buildings.</title>
<link href="https://hdl.handle.net/1721.1/157973" rel="alternate"/>
<author>
<name>Chertow, Richard Philip.</name>
</author>
<id>https://hdl.handle.net/1721.1/157973</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Decision making for energy conservation in existing commercial buildings.
Chertow, Richard Philip.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Conical flow modeling for polygonal cross section bodies at off design conditions.</title>
<link href="https://hdl.handle.net/1721.1/157972" rel="alternate"/>
<author>
<name>Kamkar, Hamid.</name>
</author>
<id>https://hdl.handle.net/1721.1/157972</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Conical flow modeling for polygonal cross section bodies at off design conditions.
Kamkar, Hamid.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1977; Includes bibliographical references.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clinical Cost-Effectiveness as a Novel Metric for Steering Emerging Medical Technology</title>
<link href="https://hdl.handle.net/1721.1/157969" rel="alternate"/>
<author>
<name>Richards, Daniel Herndon</name>
</author>
<id>https://hdl.handle.net/1721.1/157969</id>
<updated>2025-04-08T04:08:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Clinical Cost-Effectiveness as a Novel Metric for Steering Emerging Medical Technology
Richards, Daniel Herndon
Background: Steering an emerging medical technology involves making decisions under uncertainty. Localized drug delivery (LDD) is an emerging medical technology that may be useful in treating epilepsy, which is burdensome and difficult to clinically manage. Cost-effectiveness analysis (CEA) is a model-based, problem-oriented framework for determining whether a treatment should be prescribed and reimbursed, though it is typically used to compare treatment alternatives that are already clinically available. Two research questions were posed: How can a clinical CEA be constructed for an emerging medical technology to enhance its steering? And, under what conditions would an emerging technology, LDD, be prescribed in place of resective surgery for drug-resistant epilepsy? Methods: A CEA was constructed with the clinical decision point defined as pediatric patients with drug-resistant epilepsy of focal origin. A new treatment alternative, LDD, was proposed as a solution-neutral, generalized concept, and technological factors were posited that influence parameters in the CEA. A one-way sensitivity analysis was conducted to verify the model and observe its most sensitive parameters. A probabilistic sensitivity analysis was conducted to observe P10 and P90 values for clinical effectiveness. Results: The most sensitive driver of incremental effectiveness of LDD over surgery was, per the model, the potential of LDD to reduce systemic side effects. The potential clinical benefit of LDD over surgery was estimated, probabilistically, as between P10 and P90 values of 0.081 and 0.339 QALYs, respectively. Limitations of the model were discussed. A ‘utopia point’ was calculated. The relationship of the CEA to a total addressable market (TAM) calculation was discussed. The CEA modeling process enhanced learning about the problem and solution spaces. Conclusions: Despite its limitations, CEA modeling can enhance steering activities for emerging medical technologies. Insights from CEA may also help to assess trade-offs in capabilities and cost, as well as observe trends in clinical performance.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expanding the structural diversity of discrete polymers accessible through iterative exponential growth</title>
<link href="https://hdl.handle.net/1721.1/157967" rel="alternate"/>
<author>
<name>Khokhlov, Khrystofor</name>
</author>
<id>https://hdl.handle.net/1721.1/157967</id>
<updated>2025-04-08T04:07:46Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Expanding the structural diversity of discrete polymers accessible through iterative exponential growth
Khokhlov, Khrystofor
Iterative exponential growth is a powerful method for the synthesis of atomically defined macromolecules. However, preparation of enantiopure IEG-ready monomers can be challenging, which may limit the attractiveness of IEG as a tool for the study of structure-property relationships in discrete macromolecules, both in materials and in biological systems. Here, we present a new strategy for the synthesis of orthogonally protected monomers, suitable for IEG through cycles of azidation, alkyne deprotection, and CuAAC, in fewer steps and from readily available and affordable building blocks. This monomer synthesis was achieved through the development of a novel allylation methodology. Using alkynylation of epichlorohydrin, LiBr Finkelstein, and TfOH-promoted allylation, we have been able to prepare a monomer for 3A (number of carbons in each polymer repeat unit, excluding alkyne) IEG in just three steps. Furthermore, the same reactions can be integrated in the synthesis of other IEG architectures (2A/4A/5A), thus expanding the structural diversity and readily accessible substrate scope for atomically defined macromolecules. The configurations of stereogenic centers in IEG-mer backbones are defined by the starting material (R or S epichlorohydrin) and can be further controlled by combining different stereoisomers in the desired fashion. This work outlines a conceptual strategy to diversify and expand the chemical space of discrete macromolecules and enable efficient and quick access to a variety of IEG-mer scaffolds.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Abrupt change of load on a synchronous machine</title>
<link href="https://hdl.handle.net/1721.1/157909" rel="alternate"/>
<author>
<name>Edgerton, Harold E.
            (Harold Eugene),
            1903-1990.</name>
</author>
<id>https://hdl.handle.net/1721.1/157909</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1927-01-01T00:00:00Z</published>
<summary type="text">Abrupt change of load on a synchronous machine
Edgerton, Harold E.
            (Harold Eugene),
            1903-1990.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1927; Includes bibliographical references (leaves [102]-[103]).
</summary>
<dc:date>1927-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Void formation in copper and selenium ion irradiated molybdenum.</title>
<link href="https://hdl.handle.net/1721.1/157908" rel="alternate"/>
<author>
<name>Chernock, Richard Steven.</name>
</author>
<id>https://hdl.handle.net/1721.1/157908</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Void formation in copper and selenium ion irradiated molybdenum.
Chernock, Richard Steven.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Consolidation circuit for an MHD channel.</title>
<link href="https://hdl.handle.net/1721.1/157907" rel="alternate"/>
<author>
<name>Cheng, Rowley Lop Wah.</name>
</author>
<id>https://hdl.handle.net/1721.1/157907</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Consolidation circuit for an MHD channel.
Cheng, Rowley Lop Wah.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A fast approximate solution to the electrical power generation rescheduling and load shedding problem</title>
<link href="https://hdl.handle.net/1721.1/157906" rel="alternate"/>
<author>
<name>Chan, Sherman Man.</name>
</author>
<id>https://hdl.handle.net/1721.1/157906</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">A fast approximate solution to the electrical power generation rescheduling and load shedding problem
Chan, Sherman Man.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coal nitrogen conversion to NOₓ during simultaneous oxidation and pyrolysis.</title>
<link href="https://hdl.handle.net/1721.1/157905" rel="alternate"/>
<author>
<name>Cheng, Irene Teresa.</name>
</author>
<id>https://hdl.handle.net/1721.1/157905</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Coal nitrogen conversion to NO [subscript x] during simultaneous oxidation and pyrolysis.
Cheng, Irene Teresa.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1978; Bibliography: leaves 127-129.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Figuring the Middle Ground: A Search for Authorship in Perceiving China's COVID-19 Lockdowns</title>
<link href="https://hdl.handle.net/1721.1/157883" rel="alternate"/>
<author>
<name>Zhang, San</name>
</author>
<id>https://hdl.handle.net/1721.1/157883</id>
<updated>2024-12-19T03:33:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Figuring the Middle Ground: A Search for Authorship in Perceiving China's COVID-19 Lockdowns
Zhang, San
Witnessing and attempting to comprehend China’s controversial response to COVID-19 over the past three years from a geographically distant yet culturally and emotionally intimate standpoint, I have grappled with multiple perspectives, sometimes as an insider, sometimes as an outsider, and most of the time as an impostor to both. As I continually query the incoherence of my positionality, I find myself in an obscure middle ground where my voice is filtered as inauthentic and unheeded. I ask myself: What should I do? What can I do?&#13;
&#13;
This project is an effort to give myself a voice in the process of figuring out the “middle ground”—a gradient of unsettled propositions stretching between cultural identities, negotiating with constructed collective memories, and discursively evolving over a three-year-long uncanny journey trying to perceive the COVID-19 lockdowns in China. By accepting the “middle ground” as a valid stance, I was able to devise a set of methods for navigating the complexity of materials gathered at various times and locations. In addition, utilizing architectural representation tools, I curated a collection of works that reproduce the research process and exhibit the processed information.&#13;
&#13;
This endeavor is not intended to rationalize pandemic control. Rather, it cultivates a ground for reflection that deconstructs a dichotomous perception of right or wrong, drawing attention to individual lived experiences that provide a nuanced interpretation of the COVID-19 pandemic as an international health emergency that affected everyone. Although somewhat fuzzy and uneasy, the “middle ground” position indicates the possibility that a personal desire to develop one’s authorship can lead to a means of making sense of a global crisis.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Topics in Marma (မာရမာ)</title>
<link href="https://hdl.handle.net/1721.1/157882" rel="alternate"/>
<author>
<name>Marma, Rani Ukhengching (ဦး ချမ်း စိန် မာရမာ)</name>
</author>
<id>https://hdl.handle.net/1721.1/157882</id>
<updated>2024-12-19T03:32:34Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Topics in Marma (မာရမာ)
Marma, Rani Ukhengching (ဦး ချမ်း စိန် မာရမာ)
Marma,¹ an endangered indigenous language of Bangladesh, is spoken by approximately 200,000 Marma individuals residing in Bangladesh’s southern region, called the Chittagong Hill Tracts (CHT). The Marma language is closely related to Rakhine and Burmese, and many lexical items are almost identical to those in Burmese and Rakhine, “although Marma exhibits a more conservative phonological profile than Burmese in the grammatical particles” (Keisuke 2011). This research study analyzed several morphemes and their roles in shaping discourse structures in Marma information structure (topic-focus articulation). Marma has “agglutinative morphology”, meaning words are formed by stringing together morphemes in specific sequences. We observed prefixation, suffixation, and infixation in Marma. We analyzed the multifunctionality of these selected morphemes [“က=ga/ka, ကို=go/ko, စာ=cha, ရာ=ra, ယည်=yi”] within Marma discourse and explored their implications for a better understanding of information structure in the Marma language. At the end of this paper, through instrumental analysis, we proposed three tones in Marma (i. high and creaky, ii. low, and iii. falling).&#13;
&#13;
Key words: Marma, indigenous language, information structure, topic and focus, morphology and tone.&#13;
&#13;
¹ “According to Bradley (1985:180), the Marma group would have first migrated from Arakan to the Chittagong Hill Tracts by the early sixteenth century and then after the Burmese conquest in 1785. They live mainly in the Chittagong Hill Tracts where they form one of the main Indigenous groups” (Htin, 2015).
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Not Function but Function Conquered: Against a Functionalist Theory of Directives</title>
<link href="https://hdl.handle.net/1721.1/157880" rel="alternate"/>
<author>
<name>Hill, John</name>
</author>
<id>https://hdl.handle.net/1721.1/157880</id>
<updated>2024-12-19T03:05:45Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Not Function but Function Conquered: Against a Functionalist Theory of Directives
Hill, John
Ordering, requesting, and inviting are examples of directive speech acts. Philosophers have offered different accounts of what it is to perform a directive, which they have developed using different theoretical resources. Attitudinal theories of speech acts try to explain what it is to perform a directive in terms of a speaker’s beliefs, desires, and intentions. Nonattitudinal theories of speech acts try to explain directives in terms of something else.&#13;
&#13;
This thesis is concerned with functionalism, a nonattitudinal theory of speech acts. According to functionalism, performing a directive is making an utterance with the etiological function of causing hearers to act in response to one’s utterance. I argue that functionalism is false. I develop counterexamples that show functionalism is too permissive about the kinds of causation suitable for generating directives. I argue further that the most plausible way to address these counterexamples is to become more attitudinal: rather than be permissive, functionalism should hold that directives and hearers’ responses to them are caused by specific internal processes.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Know Thy Cell-Free DNA: Early Detection of Microsatellite Instability Using Ultra-Low-Pass Cell-Free DNA Sequences</title>
<link href="https://hdl.handle.net/1721.1/157879" rel="alternate"/>
<author>
<name>Lu, Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/157879</id>
<updated>2024-12-19T04:35:16Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Know Thy Cell-Free DNA: Early Detection of Microsatellite Instability Using Ultra-Low-Pass Cell-Free DNA Sequences
Lu, Nicole
Microsatellites are short segments of repeated DNA motifs (i.e., base pair patterns) that are widespread in our genomes. Microsatellites are inherently more mutable than other genomic locations, and since cancer cells undergo many more cell divisions, microsatellites are useful for distinguishing tumor DNA from normal (non-cancerous) DNA.&#13;
&#13;
Microsatellite instability (MSI) arises as a result of mismatch repair deficiency (MMRD), wherein a patient loses function of both copies of certain genes related to mismatch repair.&#13;
&#13;
Current MMRD diagnostics rely on deep sequencing of tumor tissue samples, which can be expensive and overly invasive to perform for early or routine screening. Less expensive sequencing methods such as ultra-low pass (ULP) sequencing exist, but thus far have not been utilized for detection of microsatellite instability. In this thesis, we focus on 0.1× ULP sequences, in which about 10% of the genomic locations have one read in expectation. Having so few reads makes it difficult to differentiate experimental noise from true mutations. Similarly, cell-free DNA (cfDNA) are DNA fragments from cells all over the body, which circulate in the blood. Collecting and sequencing cfDNA is much less invasive than collecting tissue samples, but presents another challenge in that the fraction of DNA fragments from any particular cell (or group of cells) is low. Thus, if cancerous cells exist within the body, their representation in a given cfDNA sample is likely low. Together, these challenges present an obvious trade-off between signal strength and cost/invasiveness for screening and detection of MSI.&#13;
&#13;
This thesis focuses on the implementation, validation, and additional research of a computational tool to detect microsatellite instability in ultra-low pass cell-free DNA samples.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Acoustically Controlled Remotely Operated Undersea Vehicles: A Quantitative Analysis</title>
<link href="https://hdl.handle.net/1721.1/157876" rel="alternate"/>
<author>
<name>Stites, Corwin Wesley</name>
</author>
<id>https://hdl.handle.net/1721.1/157876</id>
<updated>2024-12-19T03:01:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Acoustically Controlled Remotely Operated Undersea Vehicles: A Quantitative Analysis
Stites, Corwin Wesley
This thesis topic stems from a U.S. Navy effort to alter an existing remotely operated vehicle (ROV) system. A vehicle reliant on a tethered connection to an operator must be adapted into an untethered, acoustically controlled vehicle. This project provides a simulation-based tradespace exploration of the factors that limit untethered ROV performance. Factors which promote the use of an untethered system over a tethered system are also explored. A MATLAB simulation has been constructed to analyze a hypothetical ROV grid search mission across multiple parameters relating to the vehicle specifications, the mission layout, the acoustic communication system, and the operating environment. This simulation can then be used to generate a wide range of data regarding ROV performance by use of the Monte Carlo method. The performance metrics output by the simulation, along with an automated analytical tool created to process simulation data, provide quantitative insight into the viability of an ROV utilizing an acoustic communication system across a variety of scenarios.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Revealing SEI Formation and Evolution at the Li Anode/Liquid Electrolyte Interface in Li-ion Batteries by in situ Fourier Transform Infrared Spectroscopy</title>
<link href="https://hdl.handle.net/1721.1/157871" rel="alternate"/>
<author>
<name>Wang, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/157871</id>
<updated>2024-12-19T04:14:53Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Revealing SEI Formation and Evolution at the Li Anode/Liquid Electrolyte Interface in Li-ion Batteries by in situ Fourier Transform Infrared Spectroscopy
Wang, Daniel
A novel in-situ FTIR method is developed to probe the Li anode/liquid electrolyte interface. Three different conventional electrolyte systems were tested: 1.2 M LiPF₆ in EC, 1.0 M LiPF₆ in EMC, and LP57 (1.0 M LiPF₆ in EC:EMC (3/7 vol %)). Using the spectroelectrochemical cell, FTIR measurements for the first plating step and for cycled cells (up to 50 cycles) were collected to look for new species formation. In the case of 1.2 M LiPF₆ in EC, LEMC formation was observed when the potential was brought below 1.50 VLi. LEMC growth accelerated when the potential was reduced below 0.0 VLi, upon contact with freshly plated Li metal. When 1.0 M LiPF₆ in EMC was used for the same study, either lithium methyl carbonate or lithium ethyl carbonate was formed. Upon switching to LP57, Li₂CO₃ became the dominant SEI component. When the three electrolytes were cycled in the spectroelectrochemical cell, the SEI peaks continued to grow for the first 10 cycles. After the first 10 cycles, LEMC and Li₂CO₃ growth plateaued, indicating SEI stabilization. On the other hand, the LRC signal diminished, indicating an unstable SEI formed by EMC. Additionally, anion decomposition was observed to be more pronounced under high concentrations of EC. Since anion decomposition can be used as a proxy for LiF formation, high-concentration electrolytes perform better, possibly due to larger amounts of LiF formation.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Causal Machine Learning to Discover Biochemical Determinants of Physical Fitness</title>
<link href="https://hdl.handle.net/1721.1/157864" rel="alternate"/>
<author>
<name>Nawaz, Hesham</name>
</author>
<id>https://hdl.handle.net/1721.1/157864</id>
<updated>2024-12-19T03:34:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Causal Machine Learning to Discover Biochemical Determinants of Physical Fitness
Nawaz, Hesham
Identifying the key pathways relevant to cardiorespiratory fitness is of great importance for both predicting exercise responsiveness and potentially finding which interventions are likely to affect it. While contemporary deep learning models have demonstrated great success in pattern recognition and generation for various data modalities, their ability to decipher the causal mechanisms underlying these patterns is limited. This work proposes and evaluates a methodology using state-of-the-art causal discovery and causal inference methods to uncover the relationships between different proteins and their impact on changes in individuals’ maximal oxygen consumption (a proxy for physical fitness).
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tracing the Precursors and Amplifiers of Conflict in the Information Age: An NLP Inquiry of Tensions, Political Communication, and Misinformation</title>
<link href="https://hdl.handle.net/1721.1/157862" rel="alternate"/>
<author>
<name>Zimmer, Philipp</name>
</author>
<id>https://hdl.handle.net/1721.1/157862</id>
<updated>2024-12-19T03:24:05Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Tracing the Precursors and Amplifiers of Conflict in the Information Age: An NLP Inquiry of Tensions, Political Communication, and Misinformation
Zimmer, Philipp
Violent conflicts, in their varied and complex forms, have long been a subject of research and political discourse. Despite increased attention to the field, various nuances and dynamics are yet to be explored. This thesis seeks to study three aspects of the multifaceted nature of conflicts through the lens of natural language processing (NLP), thereby not only offering new insights but also advancing the field's methodological landscape.&#13;
&#13;
First, the study delves into the identification of causal predictors of conflicts. By showcasing the potential of a frame-semantic parser, I am able to quantify the precursors that contribute to conflict and examine the potential for enhancing prediction models with greater qualitative depth. This chapter utilizes a rich but under-examined data source, news articles, which can help close the data gap in conflict studies.&#13;
&#13;
In the second chapter, the communication strategies of political leaders during crises are scrutinized to understand the rationale behind their messaging and the impact thereof. I argue that the frequency and style of leaders' engagement with their citizens depend on the characteristics of the political system, and that this matters for societal conceptions.&#13;
&#13;
The final chapter addresses the spread of misinformation, particularly in times of crisis, investigating which themes are prone to widespread propagation on social media and presenting a novel ensemble method for the detection of misleading and false content.&#13;
&#13;
By integrating computational techniques with political theory, this work contributes to a nuanced understanding of conflict dynamics and offers rich potential for anticipatory actions of policymakers.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coevolving Cybersecurity Adversaries for Industrial Control Systems in Failure-Prone Environments</title>
<link href="https://hdl.handle.net/1721.1/157861" rel="alternate"/>
<author>
<name>Wicks, Kathryn</name>
</author>
<id>https://hdl.handle.net/1721.1/157861</id>
<updated>2024-12-19T04:14:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Coevolving Cybersecurity Adversaries for Industrial Control Systems in Failure-Prone Environments
Wicks, Kathryn
As industrial control systems become universally integrated with software and connected to the internet, they have become targets for cyberattacks and sabotage. Detecting cyberattacks on these networks is difficult because existing datasets on attacks are minimal and the bulk of intrusion detection systems are designed for enterprise environments rather than industrial environments. In industrial environments, mechanical failures, stress states, and electrical problems are expected, with repairs included in daily operations. In enterprise environments, such failures are rarer and more high-impact as a result. We investigate the extent to which this mismatch in the impact of physical stressors and failures degrades the ability of traditional intrusion detection algorithms to perform in the industrial environment. In the sub-area that this thesis focuses on, power microgrids, such disturbances can come in the form of line-line faults, line-ground faults, lack of generation capacity to meet demand, and unintentional islanding, among many others. Microgrids must be resilient to these events, and this thesis investigates to what extent they currently are and whether they can be improved. Specifically, this thesis asks: do traditional IDSs cause false alarms when placed in a failure-prone environment? How do these intrusion detectors perform overall? Can they be improved with additional training? And finally, can intrusion detection systems be tricked by attacks which appear to be "benign" failure modes? This thesis answers these questions by comparing the performance of different anomaly detection methods on cyberattack datasets with varying levels of stressor complexity and severity, and finds that stress on an industrial system can degrade anomaly-based intrusion detector performance. Expanding on this idea, an attacker is then trained to adversarially mask a dataset, and a detector is co-evolved alongside it to detect the attacks. Finally, the coevolution is brought into the hardware-in-the-loop simulation environment, where attackers and defenders act in real time to change the state of a realistic microgrid simulation. From these experiments, it is found that attackers can leverage grid disturbances to hide their actions, and that accurate real-time simulations are highly useful for identifying vulnerabilities in a cyber-physical system.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Existence and Analysis of a Rotating Stall Inception Continuum&#13;
&amp; Development of Concept Questions in Fluid Dynamics</title>
<link href="https://hdl.handle.net/1721.1/157834" rel="alternate"/>
<author>
<name>Cherry, Maranda F.</name>
</author>
<id>https://hdl.handle.net/1721.1/157834</id>
<updated>2024-12-12T03:16:06Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Existence and Analysis of a Rotating Stall Inception Continuum&#13;
&amp; Development of Concept Questions in Fluid Dynamics
Cherry, Maranda F.
This thesis presents two projects, an analysis of rotating stall inception for axial compressors in turbomachinery, and a description of the creation of Concept Questions for a text on internal flows. The first part of this thesis identifies flow behavior that defines two routes to rotating stall, known as modal and spike type rotating stall inception. It continues previous studies by MIT and the University of Cambridge on the unification of these two stall types under a dynamical system framework. Calculations were carried out for an isolated rotor, with a high hub-to-tip radius ratio, using TBLOCK, a Reynolds-Averaged Navier-Stokes solver. The results show (i) the dependence of stall inception on the compressor axisymmetric pressure rise characteristic and the characterization of modal and spike stall inception as two paths, located at the ends of a continuum of possible paths to stall; (ii) the effect of blade passage accelerations and asymmetry in the onset process; and (iii) the divergence of stall inception from two-dimensionality as a function of the slope of the total-to-static compressor pressure rise characteristic. The calculations show that compressor pressure rise characteristic slopes, dψ/dϕ, less than 0.3 have a stall cell growth rate, σ, that agrees with two-dimensional theory. The divergence of stall inception from two-dimensionality is suggested as a distinguishing feature of spike type stall inception compared to modal type stall inception. The second part of this thesis encompasses the creation, editing, and compilation of Concept Questions for seven book chapters in a new text that describes the use of Concept Questions in teaching (and learning) fluid mechanics. The composition and qualities of a good concept question are defined, and the process of generating and editing questions for the intended audience is discussed.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tools for Mapping the Links Between Stimuli, Affective States, and Behavior through Whole-Brain Imaging in Zebrafish Larvae</title>
<link href="https://hdl.handle.net/1721.1/157832" rel="alternate"/>
<author>
<name>Zhang, Caroline Lige</name>
</author>
<id>https://hdl.handle.net/1721.1/157832</id>
<updated>2024-12-12T03:52:22Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Tools for Mapping the Links Between Stimuli, AffectiveStates, and Behavior through Whole-Brain Imaging inZebrafish Larvae
Zhang, Caroline Lige
Affective states, often referred to as emotional states, exert substantial influence on behavior and decision-making processes. Traditionally, researchers have turned to functional imaging to delve into the neural mechanisms that drive both behavior and decision making. However, functional imaging of behaving animals often focuses on a singular brain region. Whole-brain imaging, on the other hand, has the capacity to significantly advance our understanding of the brain's functional architecture. In this pursuit, zebrafish larvae emerge as an ideal model for whole-brain imaging due to their transparency, small size, genetic manipulability, rapid development, and high reproducibility. Recent advances in protein engineering and fluorescence microscopy have empowered researchers to observe neural activity across extensive neuronal populations. Genetically Encoded Calcium Indicators (GECIs) and Genetically Encoded Voltage Indicators (GEVIs) provide the means to probe brain dynamics with single-cell precision. The advent of lightsheet microscopy technologies has further enriched our capabilities, enabling the recording of brain activity at remarkable frame rates, ranging from several hundred to several thousand frames per second, all while the animal is exposed to precise visual, auditory, and/or olfactory stimulation. Leveraging these experimental advancements in conjunction with machine learning and computer vision techniques, our study aims to forge connections between stimulation, neural activity, and behavior through a larval zebrafish model.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Tip Clearance and Surface Roughness on Small-Scale Turbopump Impeller Performance</title>
<link href="https://hdl.handle.net/1721.1/157828" rel="alternate"/>
<author>
<name>Ruecker, Kinjal A. L.</name>
</author>
<id>https://hdl.handle.net/1721.1/157828</id>
<updated>2024-12-12T03:43:09Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Effects of Tip Clearance and Surface Roughness on Small-Scale Turbopump Impeller Performance
Ruecker, Kinjal A. L.
Centimeter-scale turbopump impellers typically used in liquid rocket engines of small launch vehicles suffer from reduced performance due to manufacturing challenges and nonuniform geometric scaling. This thesis aims to characterize the impact of impeller blade tip clearance and surface roughness on the performance of small-scale turbopump impellers by assessing the dominant flow features, quantifying the underlying loss mechanisms, and determining the sensitivity of performance losses to changes in tip clearance and surface roughness. The study identifies the primary flow features governing impeller performance to be blade tip leakage flow and secondary flow. The analysis identified two distinct flow regimes based on tip clearance: above a tip clearance of 5% of the blade span, the losses are predominantly due to blade tip leakage flow, whereas below this threshold, losses are governed by both secondary flow and blade tip leakage flow. For tip clearances above 5% of the blade span, blade tip leakage flow is estimated to contribute more than 80% of total impeller loss. A 1% change in tip clearance is estimated to result in a 0.8% loss in efficiency. The calculations suggest increasing surface roughness reduces the effective tip clearance due to increased viscous effects in the tip gap, but strengthens the secondary flow. This lowers the effective tip clearance that separates the flow regimes. The contribution of blade tip leakage loss to total impeller loss decreases by up to 22% for surface roughness increased from an Rₐ value of 1 µm to 10 µm. The strengthened secondary flow at higher surface roughness increases mixing of the blade tip leakage flow with the blade passage flow, leading to larger regions of blockage. Increasing the surface roughness from an Rₐ value of 1 µm to 10 µm results in a 4% loss in impeller efficiency. This study demonstrates that surface roughness is more impactful on small-scale impeller performance than blade tip clearance, and so manufacturing for smooth surfaces should be prioritized over reducing the blade tip clearance gap.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Authenticity in the Workplace:  What does it really mean?</title>
<link href="https://hdl.handle.net/1721.1/157826" rel="alternate"/>
<author>
<name>Pervaaz, Viquar A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157826</id>
<updated>2024-12-12T03:09:01Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Authenticity in the Workplace:  What does it really mean?
Pervaaz, Viquar A.
Recently, the word authenticity has been used quite prevalently in organizations, specifically as an attribute needed in leaders. However, during the pandemic, the use of the word authenticity became more prominent and organizationally universal. While the term is great in concept, the power of the word “authenticity” remains nebulous. This poses a potential problem for organizations and teams as it presents the risk of not delivering on this commitment if the elements of authenticity are not defined and understood. Making a promise of authenticity without delivering on it may have a negative impact on individual and organizational morale/culture and a longer-range impact in terms of employee engagement and retention. Using the lens of cognitive dissonance theory as a construct to view authenticity as a “product” from a marketing perspective, one has a framework to postulate that if expectations are not clear and the perceived performance (delivery on the promise of specific elements of authenticity) is not optimal, then there will be ramifications in terms of satisfaction (e.g., employee engagement). This paper will explore why defining this word in an organizational context is important, what macro dimensions of authenticity help frame and define it, and what variables contribute to bringing authenticity to life.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Limits to extreme event forecasting in chaotic systems</title>
<link href="https://hdl.handle.net/1721.1/157825" rel="alternate"/>
<author>
<name>Yuan, Yuan</name>
</author>
<id>https://hdl.handle.net/1721.1/157825</id>
<updated>2024-12-12T03:24:28Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Limits to extreme event forecasting in chaotic systems
Yuan, Yuan
Predicting extreme events in chaotic systems, characterized by rare but intensely fluctuating properties, is of great importance due to their impact on the performance and reliability of a wide range of systems. Some examples include weather forecasting, traffic management, power grid operations, and financial market analysis, to name a few. Methods of increasing sophistication have been developed to forecast events in these systems. However, the boundaries that define the maximum accuracy of forecasting tools are still largely unexplored from a theoretical standpoint. Here, we address the question: What is the minimum possible error in the prediction of extreme events in complex, chaotic systems? We derive the minimum probability of error in extreme event forecasting along with its information-theoretic lower and upper bounds. These bounds are universal for a given problem, in that they hold regardless of the modeling approach for extreme event prediction: from traditional linear regressions to sophisticated neural network models. The limits in predictability are obtained from the cost-sensitive Fano’s and Hellman’s inequalities using the Rényi entropy. The results are also connected to Takens’ embedding theorem using the “information can’t hurt” inequality. Finally, the probability of error for a forecasting model is decomposed into three sources: uncertainty in the initial conditions, hidden variables, and suboptimal modeling assumptions. The latter allows us to assess whether prediction models are operating near their maximum theoretical performance or if further improvements are possible. The bounds are applied to the prediction of extreme events in the Rössler system and the Kolmogorov flow.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Urban Building Energy Modeling</title>
<link href="https://hdl.handle.net/1721.1/157824" rel="alternate"/>
<author>
<name>Le Hong, Zoe</name>
</author>
<author>
<name>Wolk, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/157824</id>
<updated>2024-12-12T03:55:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Accelerating Urban Building Energy Modeling
Le Hong, Zoe; Wolk, Samuel
Enabling data-driven decision-making in the built environment is critical to achieving ambitious and urgent decarbonization goals. In the building sector, urban building energy models (UBEMs) have become a valuable tool for jurisdictions to develop evidence-based retrofitting policies, but dynamically exploring solutions is hampered by the computational expense and organizational overhead of physics-based building energy models. In order to address these challenges, we present a fast, flexible, and comprehensive UBEM methodology which can be used to reduce identified barriers to time-sensitive decision-making in building stock decarbonization spheres. The methodology combines the speed of current data-driven approaches with the flexibility of computationally intensive, but accurate, engineering models. Identifying machine learning methods as a viable approach, we implement convolutional neural networks (CNNs) which embed timeseries from hourly weather data and building schedules; the embeddings are then combined with static building characteristics and projected to monthly heating and cooling loads. The proposed approach allows for programmatic flexibility and robustness to unique hourly weather conditions globally, while contextual abstraction enables geometric independence. A dataset of over 1 million detailed thermodynamics-based simulations was constructed to train and validate the surrogate model. Model results at the individual shoebox, building, and urban scales compare favorably to traditional numerical methods and meet accepted error bounds under national energy simulation standards. Additional validation at the urban and national scales is performed using public building simulation datasets. We then demonstrate expanded applications, which leverage the reduced computational cost of the framework to make traditionally infeasible analysis modes tractable and deployable. The methodology presented is intended to be utilized for both very-large-scale systematic analysis and near-real-time interactive explorations. In developing this framework, we aim to provide new mechanisms for key stakeholders in the decarbonization effort to quickly generate actionable insights and engage in iterative discussions to develop evidence-based policy across global building stocks.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Relative Robot Localization and Frame Alignment for Multi-Robot Collaboration</title>
<link href="https://hdl.handle.net/1721.1/157823" rel="alternate"/>
<author>
<name>Peterson, Mason B.</name>
</author>
<id>https://hdl.handle.net/1721.1/157823</id>
<updated>2024-12-12T04:11:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Relative Robot Localization and Frame Alignment for Multi-Robot Collaboration
Peterson, Mason B.
The growing field of collaborative robotics has the potential to enable and improve the execution of many challenging robot applications. For instance, with teamwork between multiple agents, dynamic object tracking can more completely cover an environment and trajectory planning becomes safer. However, for robots to share the quickly changing spatial information involved in these tasks, robots need to be able to transform information originally sensed or planned in their own frame into the frame of neighboring agents. This can be challenging in cases where robots have no global pose information, resulting in a steady accumulation of error, or drift, in their local pose estimates. To mitigate the effects of drift, neighboring agents must make up-to-date estimates of the alignment between their frames, which can be difficult due to ambiguous alignments and the presence of outlier measurements. To address these issues, the first contribution of this thesis is a method for performing fast incremental frame alignment between pairs of robots, enabling collaborative multiple object tracking (MOT), the task of monitoring the locations of dynamic objects in an environment. To perform frame alignment, robots build up maps of recently seen static objects and use these maps and the detections of tracked dynamic objects to correct for frame drift. Using frame alignment estimates, agents share object detection information and account for additional uncertainty associated with the alignment estimate. The second contribution of this thesis is a method to perform frame alignment with no initial guess. Many potential frame alignments are computed and we develop a filter that uses temporal consistency to reject outlier alignments and accept only a series of alignments that are consistent over time. We demonstrate in hardware experiments our ability to perform frame alignment in difficult scenarios and improve the quality of collaborative object tracking onboard real robots.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
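<!--
  The core frame-alignment step above estimates the rigid transform relating two robots'
  frames from their maps of common static objects. A minimal sketch of the classical
  least-squares (Kabsch-style) fit is given below, assuming known, outlier-free
  correspondences; the thesis's incremental, ambiguity- and outlier-aware filtering is
  precisely what relaxes those assumptions.

  import numpy as np

  def align_frames(p_a, p_b):
      """Find R, t minimizing sum ||R @ p_a[i] + t - p_b[i]||^2 (rows are points)."""
      ca, cb = p_a.mean(axis=0), p_b.mean(axis=0)    # centroids
      H = (p_a - ca).T @ (p_b - cb)                  # cross-covariance
      U, _, Vt = np.linalg.svd(H)
      d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
      D = np.diag([1.0] * (H.shape[0] - 1) + [d])
      R = Vt.T @ D @ U.T
      t = cb - R @ ca
      return R, t

  # A detection x_a expressed in robot A's frame maps to robot B's frame as
  # x_b = R @ x_a + t.
-->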
<entry>
<title>Database and Application Programming Interface Development for Rotational Spectroscopy</title>
<link href="https://hdl.handle.net/1721.1/157820" rel="alternate"/>
<author>
<name>Cheung, Jasmine So Yee</name>
</author>
<id>https://hdl.handle.net/1721.1/157820</id>
<updated>2024-12-12T03:38:23Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Database and Application Programming Interface Development for Rotational Spectroscopy
Cheung, Jasmine So Yee
The Species-agnostic Automated Gas Analyzer (SAAGA) project aims to automate the detection and characterization of chemical compounds in a complex gas-phase mixture through experimental rotational spectroscopy and computational tools. A database of spectroscopic data serves as the foundation of the automation pipeline for assigning spectral lines to species. While existing databases are available, we developed a custom database, named SAAGAdb, and an application programming interface (API) to access it, tailored to the needs of SAAGA. SAAGAdb is designed to store structured, high-quality spectroscopic data for all species, not limited to astrochemically relevant ones, enabling convenient data manipulation, integration into future automation pipelines, deployment, and maintenance. We applied software development best practices, including a software development life cycle, continuous integration/continuous delivery, and version control, to develop a PostgreSQL database with a Python API built on Django with RDKit integration. The product passed all unit tests and was successfully seeded with data. With the flexibility provided by the Django framework and detailed documentation of the software, SAAGAdb and its API can be easily improved and expanded to suit the future needs of the SAAGA project.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
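<!--
  The stack described above (PostgreSQL behind a Django-based Python API with RDKit
  integration) can be pictured with a minimal, hypothetical model sketch; the class and
  field names below are illustrative assumptions, not SAAGAdb's actual schema.

  from django.db import models

  class Species(models.Model):
      name = models.CharField(max_length=200)
      smiles = models.CharField(max_length=500)   # structure, validated via RDKit
      created = models.DateTimeField(auto_now_add=True)

  class SpectralLine(models.Model):
      species = models.ForeignKey(
          Species, on_delete=models.CASCADE, related_name="lines")
      frequency_mhz = models.FloatField()         # rest frequency of the transition
      uncertainty_mhz = models.FloatField()
      intensity = models.FloatField(null=True, blank=True)

  # An RDKit sanity check before saving, e.g. in a form or serializer:
  #   from rdkit import Chem
  #   if Chem.MolFromSmiles(smiles) is None: raise ValueError("invalid SMILES")
-->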
<entry>
<title>A GPU-Enabled Building Block Flow Model for Computational Fluid Dynamics</title>
<link href="https://hdl.handle.net/1721.1/157810" rel="alternate"/>
<author>
<name>Costa, Samuel Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/157810</id>
<updated>2024-12-12T03:04:11Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">A GPU-Enabled Building Block Flow Model for Computational Fluid Dynamics
Costa, Samuel Thomas
Computational Fluid Dynamics (CFD) is a key tool in the design of aircraft, allowing engineers to predict the performance of a configuration without having to conduct expensive physical tests. However, in order to move to a greater reliance on CFD, the industry requires a high level of accuracy and fast turnaround times, which current methods cannot deliver. In recent years, the rapid development of the GPU industry has led to an explosion of available computational power. This has allowed wall-modeled large eddy simulation (WMLES), a higher-fidelity simulation technique, to become practical for industry use. WMLES requires both a sub-grid scale (SGS) model and a wall model in order to close the system of equations for integration. Although WMLES delivers an improvement over previous methods, classical SGS and wall models do not deliver the accuracy required by the aviation industry. To help close this gap, we introduce a GPU-compatible version of the Building-Block Flow Model (BFM), a machine-learning-based unified sub-grid scale and wall model for LES introduced in [1]. In this thesis, we discuss the implementation of the BFM for GPU, the timing of the BFM versus other closure models for WMLES, and a variety of tests with the BFM designed to evaluate its performance and identify possible avenues for improvement.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Relationship between synoptic scale meteorology, aircraft parameters, and observable contrails</title>
<link href="https://hdl.handle.net/1721.1/157807" rel="alternate"/>
<author>
<name>Barbosa, Maria Paula</name>
</author>
<id>https://hdl.handle.net/1721.1/157807</id>
<updated>2024-12-12T03:44:27Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Relationship between synoptic scale meteorology, aircraft parameters, and observable contrails
Barbosa, Maria Paula
Long-lasting or "persistent" contrails are line-shaped clouds that form when airplanes fly through cold and humid parts of the atmosphere that are ice-supersaturated. Various studies have shown that persistent contrails may be responsible for more than half of aviation’s radiative forcing [1]. Efforts to mitigate persistent contrail formation include operational contrail avoidance. Current research suggests that minor (∼2000 ft) deviations in the cruise altitude of flights, in conjunction with advancing engine technologies, have the potential to reduce contrail climate forcing by approximately 90% [2]. Identifying and attributing observed contrails to specific individual flights is necessary to demonstrate the success of flight deviations. Reliable flight attribution, therefore, is critical in verifying large-scale implementation of contrail avoidance strategies. Flight attribution leverages both Earth-observation methods, such as satellite images and weather data, and flight data. However, temporal and spatial "blindspots" in satellite instruments, coupled with uncertainties in wind fields, have hindered reliable flight attribution. In this work, we consider eight different probabilistic flight attribution algorithms. All algorithms rely on "similarity measures," which we define as the differences in distance, heading, and altitude between a contrail and candidate flight line segments. We define two-dimensional (2D) algorithms as those that use only the distance and heading difference measures, and three-dimensional (3D) algorithms as those that additionally include altitude. The probabilistic aspect of all eight algorithms is intended to account for errors in wind data and relies on the calculation of a Gaussian probability density function for each similarity measure. To mitigate wind and positional errors that compound over time, four of the algorithms include contrails from previous timestamps as potential match candidates. To account for changes in flight path due to temporal factors, four of the algorithms use time-dependent Gaussian parameters. The inputs to all algorithms include contrail detections, weather data, and flight data. To perform this analysis, a dataset of 180 manually attributed, unique contrails was created that captures regional (across the continental United States) and diurnal variation. Each contrail was tracked for part of its lifetime, resulting in 1980 total attributions. These attributions were created by seven labelers, with some overlapping scenes. A parameter sweep was performed on the four 2D algorithms to determine locally optimal Gaussian parameters. This sweep was performed on a reduced dataset of 32 unique contrails and 218 total labels. The results of this sweep show that the accuracy of the algorithms, when using optimal Gaussian parameters, ranges from 79.7% to 83.6%. Accuracy is defined as the percentage of contrails that were attributed to the correct flights. These results are solely for the 2D algorithms analyzed on the reduced dataset. We then applied the "locally" optimal Gaussian parameters from the four 2D algorithms to the respective 3D algorithms and ran all eight algorithms on the remaining 148 contrails (1762 labels). We find that the optimal performance of all eight algorithms ranges from 68.2% to 76.2%. A deeper analysis is also conducted to evaluate the scene conditions that affect algorithm performance.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
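<!--
  A minimal sketch of the probabilistic scoring described above, in its simplest 2D form:
  one Gaussian density per similarity measure, multiplied into a match score. The sigma
  values are placeholders standing in for the tuned Gaussian parameters, and the function
  names are illustrative, not the thesis's implementation.

  import math

  def gaussian_pdf(x, sigma):
      return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

  def match_score_2d(d_dist_km, d_heading_deg, sigma_dist=10.0, sigma_heading=15.0):
      """Score a contrail/flight-segment pair; higher means a more likely match."""
      return (gaussian_pdf(d_dist_km, sigma_dist)
              * gaussian_pdf(d_heading_deg, sigma_heading))

  # Attribution then picks the candidate flight segment with the highest score:
  #   best = max(candidates, key=lambda c: match_score_2d(c.d_dist, c.d_heading))
-->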
<entry>
<title>Site-Selective Anion Exchange in a Palladophosphorane</title>
<link href="https://hdl.handle.net/1721.1/157805" rel="alternate"/>
<author>
<name>Khuichad, Nichakan</name>
</author>
<id>https://hdl.handle.net/1721.1/157805</id>
<updated>2024-12-12T03:30:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Site-Selective Anion Exchange in a Palladophosphorane
Khuichad, Nichakan
Reported here are studies on the chemoselective ligand substitution at a palladophosphorane possessing two potential sites of chloride substitution. Ligation of palladium(II) chloride with a tridentate chelating ligand (L, P(N(o-N(2-pyridyl)C₆H₄)₂)) results in the formation of a complex comprising a d⁸ square planar palladium center supported by a geometrically constrained chlorophosphorane (PdClL^Cl). The complex thus formed was studied for ligand substitution reactions of the chloro ligand at Pd and P, respectively. Treatment with phenol resulted in substitution of the chloride at the P center while the chloride at Pd stayed intact, giving complex PdClL^OPh. Relatedly, treatment with AgF provided a compound whose NMR spectra are consistent with formation of a P–F-containing palladophosphorane, PdClL^F. However, an attempt to recrystallize the fluoride complex instead resulted in the formation of a cationic, fluoride-bridged species, although the fluoride still resided between the two phosphorus centers. Overall, substitution experiments on this palladophosphorane indicated a preference for P–Cl substitution over Pd–Cl. The driving force for the preference for exchange at phosphorus has not been extensively explored, but hypotheses invoke hard-soft acid-base considerations and the strengths of the bonds involved.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The development of the side-rod locomotive</title>
<link href="https://hdl.handle.net/1721.1/157771" rel="alternate"/>
<author>
<name>Voelcker, J. Westgarth.</name>
</author>
<id>https://hdl.handle.net/1721.1/157771</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1923-01-01T00:00:00Z</published>
<summary type="text">The development of the side-rod locomotive
Voelcker, J. Westgarth.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1923; Includes bibliographical references (leaf [86]).
</summary>
<dc:date>1923-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Critical evaluation and correlation of tool-life data</title>
<link href="https://hdl.handle.net/1721.1/157770" rel="alternate"/>
<author>
<name>Colding, Bertil N.</name>
</author>
<id>https://hdl.handle.net/1721.1/157770</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1957-01-01T00:00:00Z</published>
<summary type="text">Critical evaluation and correlation of tool-life data
Colding, Bertil N.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1957; Bibliography: leaves 46-47.
</summary>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A case study of two autopilot design methodologies : linear quadratic and H-infinity for a tail controlled missile</title>
<link href="https://hdl.handle.net/1721.1/157769" rel="alternate"/>
<author>
<name>Edeburn, Mark Anthony.</name>
</author>
<id>https://hdl.handle.net/1721.1/157769</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1993-01-01T00:00:00Z</published>
<summary type="text">A case study of two autopilot design methodologies : linear quadratic and H-infinity for a tail controlled missile
Edeburn, Mark Anthony.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1993; Includes bibliographical references.
</summary>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimal pricing for peak loads and joint production : theory and applications to diverse conditions.</title>
<link href="https://hdl.handle.net/1721.1/157768" rel="alternate"/>
<author>
<name>Chernick, Paul Lee.</name>
</author>
<id>https://hdl.handle.net/1721.1/157768</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Optimal pricing for peak loads and joint production : theory and applications to diverse conditions.
Chernick, Paul Lee.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Bibliography: leaves 222-234.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toeplitz operators</title>
<link href="https://hdl.handle.net/1721.1/157766" rel="alternate"/>
<author>
<name>Gencarelli, Frank Thomas.</name>
</author>
<id>https://hdl.handle.net/1721.1/157766</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Toeplitz operators
Gencarelli, Frank Thomas.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1977; Bibliography : leaf 45.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tomorrow's Typography</title>
<link href="https://hdl.handle.net/1721.1/157735" rel="alternate"/>
<author>
<name>van de Seyp, Vera</name>
</author>
<id>https://hdl.handle.net/1721.1/157735</id>
<updated>2024-12-03T03:50:41Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Tomorrow's Typography
van de Seyp, Vera
This thesis is an exploration of new tools for typography, investigating how emerging (AI) technologies can contribute to the type design practice in a meaningful way. I created computational design experiments focusing on three areas: (A) design automation, (B) interfacing, and (C) creative exploration. A lot of care has been put into understanding the current scene through expert interviews, workshops, talks, and surveys. With pose estimation, generative visual AI, and large language models that operate on text, I explore whether typographic shapes can be created and manipulated with different modes of expression, in a playful, intuitive, and collaborative way.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Single Bio-molecule Detector Based on CMOS Nanofluidic Platform</title>
<link href="https://hdl.handle.net/1721.1/157733" rel="alternate"/>
<author>
<name>Zikrallah, Ahmed S.</name>
</author>
<id>https://hdl.handle.net/1721.1/157733</id>
<updated>2024-12-03T03:46:49Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Towards a Single Bio-molecule Detector Based on CMOS Nanofluidic Platform
Zikrallah, Ahmed S.
Cytokine secretion is a core component of the function of many cell therapy products: it affects the tissue repair capacity of induced Pluripotent Stem Cells (iPSCs) and Mesenchymal Stem Cells (MSCs) and the tumorigenicity of Chimeric Antigen Receptor (CAR) T-cell therapies. Ideally, we would be able to continuously monitor the secretome of these cell therapies as they are transformed and expanded in manufacturing. However, state-of-the-art techniques for monitoring typically low concentrations of cytokines require either Mass Spectroscopy (MS) or immunoassays like the Enzyme-linked Immunosorbent Assay (ELISA). We propose the use of CMOS technology to build a proteomic platform with single-biomolecule resolution. A prototype chip has been designed and fabricated using a standard foundry process, incorporating a new implementation of a Solid State Nanopore (SSN) of size 55 nm × 162 nm × 100 nm (w × l × h) with nanofluidic access channels that bridge the buffer solution between the assay space in the packaging structure (a polycarbonate/polydimethylsiloxane (PDMS) package) and the nanopore on the chip. A silicon Single Photon Avalanche Detector (SPAD) was also implemented and placed near the nanochannels to enable fluorescence-labeling imaging techniques. In addition, a read-out amplifier that achieves a midband gain of 36.2 dB over a 3 dB bandwidth of 0.1-3.6 MHz is implemented on the same silicon die, paving the way to superior performance compared to the ionic-current read-out systems used earlier for electrical biomolecule detection, thanks to the low parasitics that result from integration. These modalities, integrated on a single chip, open the space for the use of CMOS platforms in the electrical and optical interrogation of biomolecules, opening a new horizon for near-real-time biomarker assays. This thesis builds on earlier work performed in [1][2], with the objective of expanding on different techniques to interface with and characterize the performance of these modalities, especially after post-processing the chips with the aid of tools at MIT.nano. The thesis explores the further deployment of the integrated SPAD in a Fluorescence Lifetime Imaging (FLIM) system to image fluorescence-labeled molecules, showcasing the capabilities of the CMOS nanofluidic platform to detect biomarkers such as cytokines.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating cofactor transfer for a B₁₂-dependent enzyme</title>
<link href="https://hdl.handle.net/1721.1/157731" rel="alternate"/>
<author>
<name>Duong, Alexander T.</name>
</author>
<id>https://hdl.handle.net/1721.1/157731</id>
<updated>2024-12-03T03:07:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Investigating cofactor transfer for a B₁₂-dependent enzyme
Duong, Alexander T.
The metallocofactors utilized by enzymes range in complexity from single metal ions to organometallic cofactors well over 1000 Da. These cofactors enable metalloenzymes to accomplish a diverse set of unique and challenging chemical transformations that are critical to core life functions. One of these metallocofactors, adenosylcobalamin (AdoCbl), has only one cognate enzyme in humans: methylmalonyl-CoA mutase (MCM), which is involved in the catabolism of several amino acids, cholesterol, and odd-chain fatty acids. MCM relies on two other proteins, a G-protein metallochaperone called methylmalonic aciduria type A protein (MMAA) and a protein called adenosyltransferase (ATR), to load and off-load cofactor. Mutations or deletions in the gene for MCM, or in any of the genes for accessory proteins that mediate cofactor delivery and removal, can lead to a potentially lethal inborn error of metabolism. If the cofactor becomes damaged in the active site of MCM, ATR unloads the cofactor, repairs it, and reloads the regenerated AdoCbl onto the mutase. A molecular understanding of this process has been challenging to obtain due to the difficulty of structurally characterizing a three-protein MCM-MMAA-ATR complex that is transient in nature. An orthologous protein from C. metallidurans, in which the G-protein metallochaperone is naturally fused to its target mutase, isobutyryl-CoA mutase (IcmF), provides an alternative two-protein IcmF-ATR system for structural and biochemical characterization. Recent work has shown that the IcmF system utilizes a mechanism of active site opening similar to that of non-fused systems like the human one. However, the mechanisms by which ATR recognizes the presence of damaged cofactor and then removes it remain unclear. In this thesis, we discuss the development of an assay based on UV-Vis spectroscopy to monitor cofactor transfer between IcmF and ATR. We also discuss efforts to substitute histidine residues in IcmF suspected of serving as intermediate binding sites during cofactor transfer, with the goal of using the developed assay to observe changes in transfer efficiency when these residues are perturbed. This work seeks to improve our understanding of AdoCbl-dependent enzyme maturation and to inform our ability to harness these enzymes' unique reactivity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Listening by Synthesizing</title>
<link href="https://hdl.handle.net/1721.1/157728" rel="alternate"/>
<author>
<name>Cherep, Manuel</name>
</author>
<id>https://hdl.handle.net/1721.1/157728</id>
<updated>2024-12-03T03:31:08Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Listening by Synthesizing
Cherep, Manuel
Generative audio models offer a scalable solution for producing a rich variety of sounds. This can be useful for practical tasks, like sound design in music, film, and other media. However, these models overwhelmingly rely on deep neural networks, and their massive complexity hinders our ability to fully leverage them in many scenarios, as they are not easily controllable or interpretable. In this thesis, I propose an alternate approach that relies on a virtual modular synthesizer: a computational model with modules for controlling, generating, and processing sound that connect together to produce diverse sounds. This approach has the advantage of using only a small number of physically motivated parameters, each of which is intuitively controllable and causally interpretable in terms of its influence on the output sound. This design takes inspiration from devices long used in sound design and combines it with state-of-the-art machine learning techniques. In this thesis, I present three projects that use this formulation. The first is SynthAX, a virtual modular synthesizer that implements the core computational elements in an accelerated framework. The second, CTAG, combines the synthesizer with an audio-language model into a novel method for text-to-audio synthesis via parameter inference. This method produces more abstract, sketch-like sounds that are distinctive, perceived as artistic, and yet comparably identifiable to the outputs of recent neural audio synthesis models. The third is audio doppelgängers: sounds generated by randomly perturbing the parameters of the synthesizer to create positive pairs for contrastive learning, encompassing more of the variety found in real-world recordings, with controlled variations in timbre, pitch, and temporal envelopes. This method offers an efficient alternative to collecting real-world data, producing robust audio representations that compete with real data on established audio classification benchmarks. This thesis contributes tools for generating rich and diverse sounds in an understandable way, using them and their parameters for sound design and understanding at scale.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
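<!--
  The virtual modular synthesizer described above chains modules whose few, physically
  motivated parameters each have a causal, interpretable effect on the output. A toy
  sketch of that idea follows; it is purely illustrative and is not SynthAX, which
  implements these elements in an accelerated framework.

  import numpy as np

  SR = 16000  # sample rate (Hz)

  def oscillator(freq_hz, seconds):
      """Generator module: a sine tone at a given frequency."""
      t = np.arange(int(SR * seconds)) / SR
      return np.sin(2 * np.pi * freq_hz * t)

  def envelope(signal, decay_s):
      """Processor module: exponential amplitude decay."""
      t = np.arange(signal.size) / SR
      return signal * np.exp(-t / decay_s)

  # Chaining modules: freq_hz controls pitch, decay_s controls how quickly
  # the tone fades, and each parameter is individually interpretable.
  tone = envelope(oscillator(freq_hz=440.0, seconds=1.0), decay_s=0.3)
-->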
<entry>
<title>Piezoelectric single crystal based one-dimensional phased array for breast tissue imaging</title>
<link href="https://hdl.handle.net/1721.1/157727" rel="alternate"/>
<author>
<name>Du, Wenya</name>
</author>
<id>https://hdl.handle.net/1721.1/157727</id>
<updated>2024-12-03T03:14:20Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Piezoelectric single crystal based one-dimensional phased array for breast tissue imaging
Du, Wenya
Ultrasound is widely used in clinical practice because it is safe, non-invasive, non-ionizing, low-cost, and provides real-time imaging, monitoring, and therapy. However, conventional ultrasound probes are rigid, require applied pressure, and are operator-dependent. Replacing rigid transducers with conformable ultrasound transducer arrays can allow image acquisition on curved body parts, improve image quality, and enable functions such as long-term monitoring. In this thesis, I propose a conformable ultrasound breast patch (cUSBr-Patch) consisting of a one-dimensional (1D) phased array and a nature-inspired patch design, which offers large-area, deep-tissue scanning and multi-angle, repeatable breast imaging while avoiding the drawbacks of conventional ultrasound imaging technologies. I used a Yb/Bi-doped PIN-PMN-PT single crystal as the active element due to its superior piezoelectric properties (d33 = 2,800 pC/N, εr = 7,000, k33 = 0.93). I then fabricated a 1D phased array transducer consisting of 64 elements with an operational frequency of 7.0 MHz. The 1D array exhibits promising acoustic performance with i) a maximum imaging depth of 80 mm, ii) a contrast sensitivity of 3 dB, iii) axial/lateral resolutions of 0.25/1.0 mm at 30 mm depth, and iv) a larger field of view than a commercial handheld linear probe at depths of approximately 30 mm or deeper, indicating a potentially reliable capability to detect early-stage breast tumors. Beyond this, comprehensive in vitro experimental studies establish that the cUSBr-Patch can provide accurate and reproducible imaging of different phantoms. The clinical trials reveal that the patch exhibits a sufficient contrast resolution (~3 dB) and axial/lateral resolutions of 0.25/1.0 mm at 30 mm depth, allowing the observation of small cysts (~0.3 cm) in the breast. This research develops a first-of-its-kind ultrasound technology for breast tissue scanning and imaging which offers a non-invasive method for tracking real-time dynamic changes in soft tissue.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Last-Meter Delivery: Solving the Unattended Delivery Challenge from Streets to Doorsteps</title>
<link href="https://hdl.handle.net/1721.1/157723" rel="alternate"/>
<author>
<name>Xiao, Wen-Xin</name>
</author>
<id>https://hdl.handle.net/1721.1/157723</id>
<updated>2024-12-03T03:04:28Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Last-Meter Delivery: Solving the Unattended Delivery Challenge from Streets to Doorsteps
Xiao, Wen-Xin
The rise of e-commerce has led to a surge in package deliveries, resulting in the proliferation of unattended delivery methods to address the "last-meter" problem – the challenge of delivering packages from the roadside or sidewalk to the customer's front door. This thesis proposes a methodology for applying Large Language Models (LLMs) and Vision Language Models (VLMs) to enable delivery robots to identify the final delivery target and navigate the complex terrain from the curb to the front door. The proposed solution aims to enhance the autonomy and safety of last-mile delivery systems, addressing the "last-meter" challenge and improving the customer experience.&#13;
&#13;
This thesis presents a comprehensive overview of the last-meter delivery concept, aiming to bridge the gap between the roadside/sidewalk and the customer's front door. It begins by introducing the significance of last-meter delivery in the growing e-commerce industry and the challenges posed by unattended deliveries. The thesis then reviews the existing literature on autonomous and unmanned delivery systems, multimodal delivery approaches, and the application of large language models and vision language models in robotics. This research identifies the advancements and gaps in the field that the proposed methodology aims to address.&#13;
&#13;
The thesis primarily focuses on leveraging Large Language Models, the Segment Anything Model, and the open-source Florence-2 vision foundation model to translate customers' delivery instructions into a final delivery target in the context of last-meter delivery. It outlines the methodology for data preparation, object detection and labeling, and the integration of Large Language Models to handle customer instructions and determine delivery target coordinates. It also describes the experimental design and methodologies employed to validate the effectiveness of the proposed system, including the use of a last-meter dataset and the evaluation of last-meter scene and target coordinate identification.&#13;
&#13;
The thesis concludes by summarizing the key findings and contributions, discussing the broader implications of the proposed methodology, and suggesting directions for future work, such as enhancing system robustness and scalability.&#13;
&#13;
KEYWORDS: Last-Mile Delivery, Last-Meter Delivery, Large Language Models (LLM), Vision Language Models (VLM), Robotics, Segment Anything Model (SAM), Open-Vocabulary Object Detection (OVD).
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Imagine Yourself: Explorations in Fostering Personal Expression with Generative AI</title>
<link href="https://hdl.handle.net/1721.1/157722" rel="alternate"/>
<author>
<name>Chadha, Karishma</name>
</author>
<id>https://hdl.handle.net/1721.1/157722</id>
<updated>2024-12-03T03:13:42Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Imagine Yourself: Explorations in Fostering Personal Expression with Generative AI
Chadha, Karishma
Generative Artificial Intelligence (AI) technology has been promoted with many exciting promises to enhance human creativity. However, it has also been shown to amplify human bias and perpetuate harmful stereotypes. In the new age being ushered in by this technology, this thesis explores how educators and designers can use it to support young people in exploring and expressing aspects of their unique identities. In particular, I use a design-based research methodology to iteratively create Imagine Yourself, a new digital experience adapting off-the-shelf text-to-image generation technology to support young people in creating personal representations and stories.&#13;
Imagine Yourself combines OpenAI’s Dall-E 3 image generation technology with Scratch, a rich environment for young people to imagine and create interactive multimedia stories, animations, and more. Guided by a core value of designing for belonging, this project explores how experiences with generative AI can be designed to foster young people’s creative process in creating personally meaningful stories reflecting their own unique identities, experiences, and cultures. I discuss the iterative design process of creating Imagine Yourself in tandem with creative workshops, aiming to support more diverse representation within the image generation output and to invite a tinkerable, iterative process of creating. I discuss observations and feedback from creative workshops in which young people and adults created with Imagine Yourself. Finally, I conclude with reflections on the design process as well as a discussion of challenges, limitations, opportunities, and open questions for future work incorporating generative AI into young people’s creative learning experiences.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of Finite Element Methods and Satellite InSAR for Monitoring Deformations of a Large Tailings Dam</title>
<link href="https://hdl.handle.net/1721.1/157720" rel="alternate"/>
<author>
<name>Fetell, Robert Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/157720</id>
<updated>2024-12-03T03:48:14Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Comparison of Finite Element Methods and Satellite InSAR for Monitoring Deformations of a Large Tailings Dam
Fetell, Robert Henry
Following the recent catastrophic failure of several mine tailings dams, there has been much interest in the use of numerical modeling and remote sensing for monitoring the safety and stability of these structures. This thesis presents a case study that investigates the accuracy of InSAR measurements and the predictive capabilities of finite element models using ground-truth surface and sub-surface monitoring data from the Zelazny Most (SW Poland) copper tailings storage facility. This site has a well-documented history of lateral deformations in a critical section (XVIE) of the East dam that have been attributed to a deep-seated translation mechanism of shearing through the underlying Pliocene glacial clays. Since 2014, operators of the facility have constructed a series of stabilizing berms at this critical section. We investigated the accuracy of InSAR over this period, ending in 2019, by analyzing 186 ascending Sentinel-1 C-band images and 219 descending images using Persistent Scatterer Interferometry and SARProz™ software, comparing results with two surface geodetic benchmarks. Finite element analyses of the structure required a 2D model of section XVIE. We developed and integrated a stratigraphic model for the foundation soils, the complete construction history of the dam (since 1975), and selected input parameters for constitutive models to represent the soil behavior (foundation soils, tailings, dyke, and berm materials) using Plaxis™ software. Our results show that InSAR achieves consistently close agreement with geodetic measurements for vertical (Up-Down) and lateral (E-W) surface deformations over a time period when construction was limited to raising of the dyke near the crest of the dam and berm construction at the toe. The InSAR data are also insightful in showing relatively uniform lateral deformations occurring over the face of the dam, consistent with the interpreted translational failure mechanism. In contrast, it has proved much more challenging to predict subsurface deformations by FE analyses. The computed movements reflect the accumulation of deformations over multiple stages of construction and involve shearing through the complex foundation stratigraphy. We were able to achieve credible estimates of lateral deformations within the range of laboratory shear strength properties published in the literature, using the Hardening Soil (HS) model for non-linear shear stress-strain properties. However, the predictions of surface settlements and lateral deformation are much less reliable and depend on undocumented properties of the tailings, phreatic conditions in the tailings, and details of the construction history.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing for Connection with Inner Processes</title>
<link href="https://hdl.handle.net/1721.1/157719" rel="alternate"/>
<author>
<name>Mindel, Jessica Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/157719</id>
<updated>2024-12-03T03:44:33Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Designing for Connection with Inner Processes
Mindel, Jessica Rachel
At a time of division, it is more important than ever that we help each other feel truly understood. Today's online ecosystems offer us many new ways to communicate personal stories, often through fast-paced, reactive channels, but few if any technologies enable us to share what I posit to be a crucial component of how we implicitly understand each other: our inner processes, e.g., how we form our values and identities, navigate unspoken tensions in a community, or feel that something resonates with us.&#13;
&#13;
This thesis explores inner processes as a resource for the design of systems that support human connection, interpersonal understanding, and reflection. Through a series of design iterations, I weigh approaches to eliciting inner processes, choosing media to externally, evocatively represent them, and encouraging perspective-taking behavior by guiding users through each other's inner processes. I approach this topic through three streams of projects, grounded in literatures that outline guidelines for successful perspective-taking and the development of interpersonal closeness, and that assert the value of creative play in surfacing and communicating inner processes, supporting perspective-taking, making room for new social norms, and enabling reframing.&#13;
&#13;
First, I present our collaborative work on Closer Worlds, a two-player, AI-assisted game in which players generate a world they might both want to live in in order to scaffold an emotionally intimate conversation about their memories and shared values. Next, to better understand inner processes entangled with creative practice, I conduct interviews with creative practitioners about the relationships they build through their practice, and design and develop prototypes for implicitly retracing inferred versions of one's own or another person's creative process, capitalizing on room for interpretation. Prototypes include Sjuzet, a compass that anchors the latent space of a user's creative writing to a local map in order to prompt reflection as a user physically wanders through memories, and Pull It Together, a material speculation on textile swatches whose wear and tear modulates to correspond to invisible sociocultural tensions. Finally, I shift my focus to explicitly, informatively trading inner processes in my design of Metaswap, an asynchronous, written activity in which strangers compare annotations about inner processes that arise as they tell personal stories about an uncertainty they are working to resolve in their lives.&#13;
&#13;
Making inner processes explicit and prompting revisitation of them offered both benefits and drawbacks for connection and reflection, and raised important questions. A mixed-methods analysis across this work presents tensions in the human and machine instinct to make inferences and assumptions about others, and offers opportunities for interpersonally insightful, vulnerable, and trusting conversation when computer-mediated communication and sense-making systems produce deep content rather than deep interactions. Through this work, I hope to lay the foundation for future research on technology's role in supporting interpersonal understanding at a time when so many subjectivities collide and are summarized at the speed of data.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Matters of Illuminance - Transforming Light into Material Artifacts</title>
<link href="https://hdl.handle.net/1721.1/157713" rel="alternate"/>
<author>
<name>Callender III, Dexter</name>
</author>
<id>https://hdl.handle.net/1721.1/157713</id>
<updated>2024-12-03T03:37:51Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Matters of Illuminance - Transforming Light into Material Artifacts
Callender III, Dexter
This research explores a process to transform light into physical artifacts. It develops a series of custom software systems to capture images of sunlight moving through a building and transform them into three-dimensional forms, and it uses digital manufacturing methods to realize those forms in glass. The aim of this work is 1) to construct a methodology for recording light’s interaction with architecture as three-dimensional forms, and 2) to produce glass sculptures that exist in a fine art setting and contribute to the lineage of 21st-century light artists. The academic contribution of this research builds upon the autographic design framework defined by Dietmar Offenhuber, who describes the autographic design process as “the practice of shaping the conditions that allow traces to emerge and guiding their interpretation to demonstrate causality and evidence” [1]. The technique I use to transform light into three-dimensional forms follows the four steps of the autographic design process. The goal of this technique is to provide a repeatable process and data format that captures information about light’s interaction with architecture at specific locations. The process produces three-dimensional forms, physical glass sculptures, and media that guide their interpretation, which can be interpreted to provide insight into the design and history of the building. The artistic contribution of this research produces glass sculptures that physicalize the shapes of light I observed and recorded at the location. The goal of these sculptures is to create meaningful physical artworks that reflect the nuanced shapes and subtle aesthetic qualities of natural light. Exhibiting the sculptures in spaces that are abundant with natural light creates new interactions between the glass and the light, offering unique visual experiences that change over time. I bolster these artworks with experiential accounts of my time spent in the building. The artwork I produced as part of this research was exhibited at the Wiesner Gallery at MIT and aims to exist in a fine arts setting, contributing to the lineage of Light &amp; Space artists such as Larry Bell and Robert Irwin.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond-the-Ice: Designing Games for Facilitating Deeper Conversations</title>
<link href="https://hdl.handle.net/1721.1/157712" rel="alternate"/>
<author>
<name>Lee, Cassandra</name>
</author>
<id>https://hdl.handle.net/1721.1/157712</id>
<updated>2024-12-03T04:01:23Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Beyond-the-Ice: Designing Games for Facilitating Deeper Conversations
Lee, Cassandra
In this age of constant communication, we’ve never been more connected, yet our numerous, fast, and convenient connections lack the depth and intimacy we truly crave. The desire for more authentic social experiences necessitates vulnerability, honesty, and risk; but introducing such dynamics presents a great challenge in the context of the wider landscape of public discourse. Designers across disciplines have suggested using games to facilitate stronger social connection, since the structures within games can expose players to alternate social norms and encourage risk-taking. However, few have designed games that specifically foster more intimate forms of dialogue or offer scaffolding for players to see the acts of sharing authentically and listening deeply as ways to play. In this thesis, I explore the novel intersection between play, intimate conversation, and technology by presenting a variety of prototypes and fully developed games that employ innovative mechanics designed to facilitate authenticity, vulnerability, complexity, and subjectivity. This work builds on formal knowledge from the social sciences, HCI, and game design, as well as informal knowledge from facilitation, gathering practices, party games, and Tarot, by presenting five distinct design principles aligned with theories grounded in past work: 1) Make emotional disclosure special; 2) Scaffold responsiveness; 3) Approach depth through fun; 4) Empower “the work” through constraints and permissions; 5) Center objects to feel with. Following a thorough Research through Design (RtD) method, I designed 15 unique prototypes and proofs-of-concept which explore various aspects of the five principles. Two of the games were fully designed, developed, playtested, and evaluated: Analogia, a card game that uses generative images to inspire emotion-rich conversations, and Crossroads, a digital game where players are guided to unlock a secret insight by co-creating generative images inspired by one another’s real experiences. This work contributes two well-tested games that embody the five principles; a series of mechanics for stimulating dialogue (dual-stimulus, bridge-and-tunnel, image scrying, listener roles); and pilot data from playtests that demonstrate the ability and challenge of these mechanics to create conversational outcomes. Additionally, both spotlighted games creatively employ generative artificial intelligence (AI) to help mediate player interactions through image interpretation and co-creation. Although this is a thesis about conversation games, it critically engages with the current social zeitgeist, provides widely applicable insights, and presents nuanced ways to think about the future of socio-technical systems that seek to encourage deeper, more authentic ways of connecting.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Perceptual Augmentation</title>
<link href="https://hdl.handle.net/1721.1/157710" rel="alternate"/>
<author>
<name>Chin, Sam</name>
</author>
<id>https://hdl.handle.net/1721.1/157710</id>
<updated>2024-12-03T03:01:24Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Towards Perceptual Augmentation
Chin, Sam
This thesis explores the concept of perceptual augmentation, focusing on expanding human sensory capabilities beyond their biological limitations. It challenges traditional approaches to sensory enhancement by emphasizing the importance of perception over mere sensory input. Drawing inspiration from the diverse sensory abilities found in nature, the research aims to develop methods for meaningful augmentation of human perception that can impact daily life. The study adopts an ecological approach to perceptual augmentation, grounded in Gibsonian ecological psychology. Key principles include providing correct mental models of augmentation devices, leveraging environmental training and natural tasks, emphasizing multisensory interfaces with sensorimotor feedback, and creating affordances that mimic the natural world. This approach seeks to facilitate perceptual learning through natural interaction with the environment, rather than relying on extensive explicit training.&#13;
The thesis presents early work in exploring and evaluating individual principles of this ecological framework for perceptual augmentation. While acknowledging the gap between the proposed theoretical approach and current research outcomes, the studies conducted focus on augmenting perception for specific tasks such as pitch interval perception, pilot situation awareness, and sleep staging. The research does not yet demonstrate a generalized, "all-purpose" augmented sense, but lays groundwork for future investigations, including a proposed experiment to mitigate age-related hearing loss using the developed principles.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Temporal Telepresence: Immersive Interfaces for TeleAbsence</title>
<link href="https://hdl.handle.net/1721.1/157709" rel="alternate"/>
<author>
<name>Pillis, D.</name>
</author>
<id>https://hdl.handle.net/1721.1/157709</id>
<updated>2024-12-03T03:30:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Temporal Telepresence: Immersive Interfaces for TeleAbsence
Pillis, D.
Storing the past in a simulation may enable greater understanding of ourselves, our stories, and our histories. The urge to capture our past in networks of photographic, written, filmed, and object-based narratives has long been a means for individuals to identify change and growth and to gain perspective on themselves. Using a dataset of human narratives derived from records and ephemera, this thesis explores a novel approach to preserving and interacting with memories. We present an interactive system of objects and applications that supports intergenerational memory preservation by enabling individuals to actively explore the relationship between personal artifacts, photographs, the spaces of their past, and their memories. This system integrates personal digital twins, photogrammetry, Gaussian splatting, and tangible interfaces to create a new way of experiencing the past, based on interactivity with architectural artifacts and simulations from an individual’s life. Using an iterative participatory design process, we developed a set of multisensory interaction experiences that allow individuals to explore their relationship to autobiographical memory. The system dynamically links autobiographical memories with the environments where they took place, responding to text, photo, and object-based interactions. This experience invites individuals to modify their recollections by exploring how photos, video, and 3D space relate to the experience of revisiting narratives from the past. Applications of this system include assisting with dementia, aging, memory loss, and Alzheimer’s. Our initial studies were promising: when using the simulation system, individuals spent more time reminiscing, discussed more memories, and experienced greater presence in their recollections than without the interactive paradigm. The system also encouraged family members to reinforce their memories by actively re-encoding them through the simulation interfaces. Results demonstrated that presence in memories seemed more vivid, detailed, and spatially accurate than before the intervention. The result is a new memory-sharing experience that benefits individuals and families by allowing them to understand how their interactions with the past can be enriched through the integration of artifacts and simulations that shape the development of autobiographical memory.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Risk-Benefit Assessment of Pandemic Virus Identification</title>
<link href="https://hdl.handle.net/1721.1/157708" rel="alternate"/>
<author>
<name>Jeyapragasan, Geetha</name>
</author>
<id>https://hdl.handle.net/1721.1/157708</id>
<updated>2024-12-03T03:36:38Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Risk-Benefit Assessment of Pandemic Virus Identification
Jeyapragasan, Geetha
Pandemic Virus Identification (PVI) aims to assess unknown viruses for their pandemic potential in immunologically naive human populations. While proponents argue that PVI could facilitate targeted spillover prevention and accelerate medical countermeasure development, critics raise concerns about biosafety and biosecurity risks. This thesis presents a comprehensive mathematical framework to evaluate the benefits, biosafety risks, and biosecurity risks associated with PVI research.&#13;
&#13;
Using a combination of mathematical modeling and expert elicitation, we developed a structured approach to estimate the potential impacts of PVI. Our framework suggests that identifying a single pandemic-capable virus through PVI could potentially save lives by reducing natural pandemic risks. However, this benefit is substantially outweighed by the estimated anthropogenic risks from potential accidental pandemic events and deliberate misuse scenarios. The overall expected value of identifying a single pandemic-capable pathogen was estimated to be strongly negative. &#13;
&#13;
Significant uncertainty exists in many key parameters estimated through surveys, with wide confidence intervals reflecting the lack of consensus among experts. Expert opinions varied considerably on topics such as the likelihood of funding for medical countermeasures and the potential for deliberate misuse of pandemic agents. This modeling work primarily aims to provide exploratory estimates to guide future work. &#13;
&#13;
Our findings underscore the urgent need for improved governance of research involving potential pandemic pathogens. This study provides a quantitative basis for ongoing discussions about the balance between scientific advancement and public safety in high-risk areas of life sciences research.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
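<!--
  The expected-value framework described above reduces to a simple structure; a minimal
  sketch follows, with entirely hypothetical placeholder probabilities and magnitudes
  (the thesis elicits such parameters from experts rather than assuming them).

  # Hypothetical placeholder inputs; not values from the thesis.
  p_spillover_averted = 0.01      # chance PVI knowledge averts a natural pandemic
  p_accident = 0.02               # chance of an accidental release causing a pandemic
  p_misuse = 0.01                 # chance of deliberate misuse causing a pandemic
  deaths_natural = 10_000_000     # magnitude placeholders, identical here for clarity
  deaths_accident = 10_000_000
  deaths_misuse = 10_000_000

  benefit = p_spillover_averted * deaths_natural              # expected lives saved
  cost = p_accident * deaths_accident + p_misuse * deaths_misuse
  net_expected_value = benefit - cost                         # negative if risks dominate
-->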
<entry>
<title>Multi-bounce Returns for Specular Surface Mapping from Consumer-grade Flash LiDAR</title>
<link href="https://hdl.handle.net/1721.1/157707" rel="alternate"/>
<author>
<name>Lin, Tsung-Han</name>
</author>
<id>https://hdl.handle.net/1721.1/157707</id>
<updated>2024-12-03T03:01:53Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Multi-bounce Returns for Specular Surface Mapping from Consumer-grade Flash LiDAR
Lin, Tsung-Han
This thesis proposes an approach that leverages multi-bounce returns of a flash LiDAR on portable smartphones for 3D specular surface reconstruction. This is an important research problem, as most traditional LiDAR systems fail to detect specular surfaces. Because mirrors and glass are everywhere, vision systems that fail to detect specular surfaces can be dangerous: applications like mapping may become inaccurate, and, more critically, robots could crash into undetected windows during navigation, leading to potentially fatal outcomes. We believe this work can substantially enhance the robustness of specular surface detection, with LiDAR complementing any kind of vision system, particularly image-based ones.&#13;
&#13;
Traditional LiDAR systems typically assume that all returns are single-bounce, which can lead to inaccurate representations of specular surfaces like mirrors or glass, often causing them to appear as though there is a hole. In contrast, this approach models the multi-bounce paths, providing a more accurate reconstruction of these specular surfaces.&#13;
&#13;
We operate with a consumer-grade LiDAR that does not require manual calibration and can be operated in real time on an affordable, portable smartphone. Consumer-grade multi-beam flash LiDAR is challenging to work with, given its coarse resolution, co-located sensors, and multiplexing setup. In the face of these challenges, we propose to solve the association problem with the "reciprocal pair" algorithm, which can discern different types of bounces among the multi-bounce returns.&#13;
&#13;
The algorithm is shown to detect returns over multiple consecutive frames, enabling dense mirror mapping. In addition to 3D reconstruction, we show that multi-bounce returns help enhance performance on applications such as segmentation and novel view synthesis. Our method can be combined with these state-of-the-art learning-based models, enhancing their robustness by resolving ambiguous scenarios. In general, this approach can map various specular surfaces such as mirrors and glass without making assumptions about particular specular surface shapes, and can operate on non-perpendicular specular-diffuse surface pairs.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Timbral Transformations</title>
<link href="https://hdl.handle.net/1721.1/157706" rel="alternate"/>
<author>
<name>Shand, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/157706</id>
<updated>2024-12-03T03:23:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Timbral Transformations
Shand, Jessica
From folk songs to festivals, cafes to concert halls, and religious rituals to recording studios, the flute has long had a shapeshifting, cross-cultural presence. This thesis leverages 21st-century technologies not only to explore and extend the timbral versatility of flutes, but also to underscore the performative, fluid, and ever-evolving nature of timbre more generally. At the core of the project is the creation of sequences of discrete sounds that interpolate between semantic categories, and a collection of fixed-media compositions based on those sequences, both of which consist entirely of flute sounds that have undergone varying degrees of electronic manipulation. By means of digital signal processing techniques, the flute wavers in and out of a multitude of sonic identities. Sometimes it masquerades as another familiar object or interface (e.g., a ticking clock) or abstractly evokes a concept or phenomenon (e.g., a storm); at other times, it beckons toward the ethereal or ineffable, resisting indexical identification altogether. With source materials warped, layered, and splayed across the frequency spectrum, such concerns as “the real” and “the true” begin to move out of focus, making way for attention to embodied phenomenological experiences of sound. As this thesis positions compositional practice as a form of research, its outputs range from the conceptual to the creative and the computational. In addition to the music at its core, the project interfaces with gender studies in its original exposition on timbre and timbral identity, includes a rigorous set of experiments with human and machine listeners, and makes original applications of multimodal language models not previously seen in musicology or music theory. A live performance incorporating each of these project vectors, and an audience discussion following the event, offer further opportunities for reflection and critique.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Practice of consulting firms in corporate strategic planning.</title>
<link href="https://hdl.handle.net/1721.1/157653" rel="alternate"/>
<author>
<name>Chapman, Beverly Jean.</name>
</author>
<id>https://hdl.handle.net/1721.1/157653</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Practice of consulting firms in corporate strategic planning.
Chapman, Beverly Jean.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1978; Bibliography: leaves 82-84.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Derived distribution of water volume above a given threshold discharge.</title>
<link href="https://hdl.handle.net/1721.1/157652" rel="alternate"/>
<author>
<name>Chan, Siu-On.</name>
</author>
<id>https://hdl.handle.net/1721.1/157652</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Derived distribution of water volume above a given threshold discharge.
Chan, Siu-On.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Bibliography : leaves 138-139.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wear studies of abrasive particles</title>
<link href="https://hdl.handle.net/1721.1/157640" rel="alternate"/>
<author>
<name>Distel, Joseph William.</name>
</author>
<id>https://hdl.handle.net/1721.1/157640</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1956-01-01T00:00:00Z</published>
<summary type="text">Wear studies of abrasive particles
Distel, Joseph William.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1956; Bibliography: leaf 50.
</summary>
<dc:date>1956-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wear studies of single aluminum oxide grains during grinding</title>
<link href="https://hdl.handle.net/1721.1/157639" rel="alternate"/>
<author>
<name>Cole, John M.
            (John Martin)</name>
</author>
<id>https://hdl.handle.net/1721.1/157639</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1955-01-01T00:00:00Z</published>
<summary type="text">Wear studies of single aluminum oxide grains during grinding
Cole, John M.
            (John Martin)
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1955; Includes bibliographical references (leaf 48).
</summary>
<dc:date>1955-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forces in internal grinding</title>
<link href="https://hdl.handle.net/1721.1/157638" rel="alternate"/>
<author>
<name>Reichenbach, George S.
            (George Sheridan)</name>
</author>
<id>https://hdl.handle.net/1721.1/157638</id>
<updated>2024-11-22T03:52:12Z</updated>
<published>1952-01-01T00:00:00Z</published>
<summary type="text">Forces in internal grinding
Reichenbach, George S.
            (George Sheridan)
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1952; Includes bibliographical references (leaves 28-29).
</summary>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The mechanics of dry surface grinding</title>
<link href="https://hdl.handle.net/1721.1/157637" rel="alternate"/>
<author>
<name>Marshall, Earle Robert,
            1919-</name>
</author>
<id>https://hdl.handle.net/1721.1/157637</id>
<updated>2024-11-22T03:42:16Z</updated>
<published>1949-01-01T00:00:00Z</published>
<summary type="text">The mechanics of dry surface grinding
Marshall, Earle Robert,
            1919-
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy, 1949
</summary>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of adhesion mechanisms</title>
<link href="https://hdl.handle.net/1721.1/157630" rel="alternate"/>
<author>
<name>Yee, Geary Yee.</name>
</author>
<id>https://hdl.handle.net/1721.1/157630</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">An investigation of adhesion mechanisms
Yee, Geary Yee.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1976; Includes bibliographical references.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>3-D Topology Optimization of Spatially Averaged Surface-Enhanced Raman Devices</title>
<link href="https://hdl.handle.net/1721.1/157601" rel="alternate"/>
<author>
<name>Hammond, Ian M.</name>
</author>
<id>https://hdl.handle.net/1721.1/157601</id>
<updated>2024-11-19T03:16:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">3-D Topology Optimization of Spatially Averaged Surface-Enhanced Raman Devices
Hammond, Ian M.
Numerous nanophotonics applications necessitate designs that enhance distributed incoherent emission. Representative applications include light-emitting diodes, thermal emitters, and Raman sensing. Previous efforts in full-scale topology optimization for Surface-Enhanced Raman Sensing (SERS) have predominantly focused on single-particle emissions or two-dimensional systems, which are impractical for actual fabrication. An objective function represented by ∫|E|⁴dV effectively approximates Raman enhancement. This function tends to diverge near sharp tips and other singular geometries in three-dimensional spaces for relevant materials. This thesis delves into methodologies for regularizing the optimization process to preclude the formation of such problematic geometries. Additionally, it integrates lithography constraints to ensure that the optimized SERS substrates are viable for fabrication. To align with computational limits, various strategies are employed to make the system manageable. The techniques developed in this study facilitate the practical design of 3-D systems that enhance incoherent emission through topology optimization.
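A toy evaluation of the stated objective on a discretized field, with a smooth saturation standing in for the thesis's regularization; the field, voxel volume, and saturation constant E_sat are all assumed for illustration:

import numpy as np

def raman_objective(E_mag, dV, E_sat=50.0):
    # Saturated surrogate for sum(|E|^4 dV): tanh caps the contribution of
    # vanishingly sharp hot spots that would otherwise let the integral diverge.
    E_eff = E_sat * np.tanh(E_mag / E_sat)
    return float(np.sum(E_eff**4) * dV)

rng = np.random.default_rng(0)
E = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64, 64))  # synthetic |E| field
print(raman_objective(E, dV=1e-27))                        # assumed 1 nm^3 voxels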
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Natural Language Control for Visually Interactive Decision Support Tools in Supply Chain Management</title>
<link href="https://hdl.handle.net/1721.1/157594" rel="alternate"/>
<author>
<name>Guter, Willem J.</name>
</author>
<id>https://hdl.handle.net/1721.1/157594</id>
<updated>2024-11-19T03:55:35Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Natural Language Control for for Visually Interactive Decision Support Tools in Supply Chain Management
Guter, Willem J.
Supply chains are complex networks where changing one variable can have unforeseen effects on the entire chain. Interactive supply chain visualizations are useful for understanding these effects and can lead to decreased cost. However, these interactive visualizations can require technical and domain expertise to operate and understand. One solution is a natural language interface, which lets users control the visualization with natural language commands. Natural language interfaces, however, can themselves be difficult to implement, requiring application-specific programming or training. This thesis proposes integrating a pre-trained large language model as the natural language interface. An example application is created using an existing supply chain network visualization application. Various large language models are then evaluated for usability, functionality, and accuracy. We find that a state-of-the-art commercial model is able to practically fulfill the role of a natural language interface, but that open-source large language models are not currently capable of functioning in this way.
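A minimal sketch of the proposed pattern, with call_llm as a hypothetical stand-in for any pre-trained chat-completion endpoint; the command vocabulary is invented for illustration:

import json

SYSTEM_PROMPT = (
    "Translate the user's request into one JSON command for a supply chain "
    "visualization. Allowed actions: highlight_node, set_demand, filter_lanes. "
    "Reply with JSON only, e.g. {\"action\": \"set_demand\", \"node\": \"DC-3\", \"value\": 2.0}"
)

def to_command(user_text, call_llm):
    reply = call_llm(SYSTEM_PROMPT, user_text)  # model emits a JSON string
    return json.loads(reply)                    # parsed into a dict the UI can execute

# e.g. to_command("double the demand at DC-3", my_llm)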
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Fine-Tuning of Language Models for Multiple-Choice Questions</title>
<link href="https://hdl.handle.net/1721.1/157591" rel="alternate"/>
<author>
<name>Wang, Ivy A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157591</id>
<updated>2024-11-19T03:46:08Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Investigating Fine-Tuning of Language Models for Multiple-Choice Questions
Wang, Ivy A.
This thesis investigates the positional and contextual bias of large language models (LLMs) when used to answer multiple-choice questions (MCQs). Given the increasing use of generative language models in fields ranging from cybersecurity to biomedical research, it is important to understand the causes of their behavior in order to mitigate biases and prevent errors. One known method of improving the performance of LLMs is fine-tuning, wherein a model is additionally trained on data from a specified distribution or subject area. We specifically investigate training data properties related to positional bias in fine-tuned language model performance on correctly answering MCQs. To improve model efficiency, we used parameter-efficient fine-tuning, specifically LoRA (Low-Rank Adaptation), which reduces the dimensionality of weight matrices used in the model’s layers. We verify that if the training data for the model possesses the same qualities and distributions as the test data, the LLM will achieve the best performance. In our experiments, we scaled and balanced our fine-tuning datasets and learned that both processes improve the accuracy on test sets of MCQs.
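A minimal LoRA setup with the Hugging Face peft library, in the spirit of the method described; the base model and hyperparameters are placeholders rather than the thesis's configuration:

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder base model
config = LoraConfig(
    r=8,               # rank of the low-rank update matrices
    lora_alpha=16,     # scaling applied to the update
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only adapter weights remain trainable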
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Radiation Shielding Design and Radioactive Waste Assessment of Horizontal Compact High Temperature Gas-Cooled Reactor</title>
<link href="https://hdl.handle.net/1721.1/157583" rel="alternate"/>
<author>
<name>Kudriavtseva, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/157583</id>
<updated>2024-11-19T03:57:51Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Radiation Shielding Design and Radioactive Waste Assessment of Horizontal Compact High Temperature Gas-Cooled Reactor
Kudriavtseva, Anna
With the objective that nuclear power plants utilizing small High Temperature Gas-Cooled Reactors (HTGRs) can provide economic, environmentally favorable and reliable electricity and heat for community and industrial purposes, Boston Atomics LLC initiated the design of Horizontal Compact HTGR (HC-HTGR). This work addresses shielding, activation analysis and the decommissioning cost assessment as an integrated part of the design process.&#13;
Reinforced regular and borated concrete were considered as shielding materials for the reactor building and Reactor Cavity Cooling System (RCCS) tanks. It was found that for locations of the reactor building where the dose rates during normal operation were greater than the Nuclear Regulatory Commission (NRC) limit of 0.1 rem/hr, 175 cm of borated concrete is required. The shielding concerns motivated the decision to separate RCCS tanks from the reactor room with a 75 cm borated concrete wall to ensure that the radiation levels do not exceed the NRC limit. Additionally, several shielding options were proposed to protect steam generator modules from radiation-induced activation.&#13;
The activation analysis was performed for the key equipment and graphite reflector components of the HC-HTGR design. The core barrel, made of Incoloy 800H, was characterized as a Class C waste component after 40 years of reactor operation. It was proposed that 2.25Cr-1Mo alloy be considered as barrel material to decrease activity levels. The reactor pressure vessel (RPV) and RCCS tubes, made of carbon steel, were characterized as Class A waste components. The graphite reflector components were characterized as Class C waste.&#13;
Furthermore, this work discusses the neutron irradiation effects and their impact on the integrity of the barrel, RPV, and graphite reflector against material property changes. It was found that 2.25Cr-1Mo alloy has a higher radiation resistance due to the higher iron content in the composition. Based on the results, the reactor vessel is safe from radiation damage for 32 years of operation. The data evaluated for the graphite reflectors indicate that the components should be replaced after 20 years before they pass the turnaround point. &#13;
The concentrations of radionuclides computed during activation analysis were used to predict the radiation levels from beta and gamma sources that could be encountered during the disposal of the core barrel and RPV. Based on the obtained data, it is clear that if the barrel is not replaced during operation, the radiation dose rate will remain above acceptable levels, requiring a more rigorous disposal approach. The radiation levels are reduced for the reactor vessel as it was exposed to a lower flux and radiation-induced activation. A similar analysis was performed to derive the exposure dose rate from gamma and beta rays that can be detected by a sensor of a refueling camera. Beta particles will deposit most of the energy in a graphite layer, and the camera will register negligible dose rates. The gamma ray estimates indicate that a more enduring refueling machine is required. &#13;
The results of this work provide the disposal costs for HC-HTGR immediate dismantlement and after a given decay period. Overall, the disposal costs of the core barrel, RPV, and graphite reflector are $13 million for the HC-HTGR design after 40 years of full operation if the billable charge limits are set on radioactivity levels. If this option is not considered, the total disposal costs grow to $225 million. However, extending the storage up to 10 years would decrease the activity, reducing the cost of disposal.
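For scale, a back-of-envelope exponential-attenuation check consistent with the 0.1 rem/hr limit cited above; real shielding design of this kind relies on transport codes and buildup factors, and both the source dose rate and the effective attenuation coefficient below are assumed values, not figures from the thesis:

import math

def transmitted_dose(source_rem_hr, mu_per_cm, thickness_cm):
    # Narrow-beam exponential attenuation, ignoring buildup.
    return source_rem_hr * math.exp(-mu_per_cm * thickness_cm)

# Assumed 4000 rem/hr source behind 175 cm of concrete with mu = 0.06 1/cm:
print(transmitted_dose(4000.0, 0.06, 175.0))  # about 0.1 rem/hr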
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systematic Studies on the Chelating Ligand Effects of Novel Borafluoronium Ions</title>
<link href="https://hdl.handle.net/1721.1/157568" rel="alternate"/>
<author>
<name>Allen, Marissa D.</name>
</author>
<id>https://hdl.handle.net/1721.1/157568</id>
<updated>2024-11-19T04:14:46Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Systematic Studies on the Chelating Ligand Effects&#13;
of Novel Borafluoronium Ions
Allen, Marissa D.
This study explores the synthesis and characterization of borafluoronium ions via a ligand-based strategy using bidentate amine and phosphine bases as chelating agents for cationic boronium ions. The borafluoronium complexes A–C were synthesized in high yields (80%–95%) and characterized using NMR spectroscopy and single-crystal X-ray diffraction. Further investigations into the coordination of other bisphosphine ligands, such as dppe, rac-BINAP, and Xantphos, resulted in the formation of Lewis adducts rather than the desired borafluoronium ions. The challenges in isolating these species are attributed to steric and chelate effects inherent to the ligands, with NMR analysis providing insights into the coordination chemistry and stability of these complexes. This work advances the understanding of borafluoronium ion formation and the impact of ligand structure on their properties.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization under ecological realism reproduces signatures of human speech perception</title>
<link href="https://hdl.handle.net/1721.1/157565" rel="alternate"/>
<author>
<name>Magaro, Annika K.</name>
</author>
<id>https://hdl.handle.net/1721.1/157565</id>
<updated>2024-11-19T03:12:41Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Optimization under ecological realism reproduces signatures of human speech perception
Magaro, Annika K.
Recent advances in machine learning have made real-world perception tasks feasible for computers, in many cases approaching levels of performance similar to those of humans. In particular, optimizing models for ecologically realistic training datasets has helped to yield more human-like model results. In the field of speech recognition, models trained under realistic conditions with simulated cochlear input reproduce some characteristics of human speech recognition. However, it is unclear how similar the behavior of these models is to that of humans across the many ways in which speech can be manipulated or degraded, since human and model behavior have not been extensively compared. In this paper, we address this question by comprehensively testing a neural network model trained in ecological conditions across a large set of speech manipulations, comparing its behavior to that of humans. We find that training in ecological conditions yields a fairly good overall match to human behavior, with some discrepancies that can be largely resolved by training specifically on these conditions. The results support the idea that the phenotype of human speech recognition can be understood as a consequence of having been optimized for the problem of speech recognition in natural conditions.
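One illustrative manipulation of the sort such comparisons use, mixing speech with noise at a target signal-to-noise ratio; this particular function is assumed for illustration, not taken from the paper:

import numpy as np

def mix_at_snr(speech, noise, snr_db):
    # Scale the noise so that 10*log10(P_speech / P_noise) equals snr_db.
    p_speech = np.mean(speech**2)
    p_noise = np.mean(noise**2)
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise

# e.g. degraded = mix_at_snr(clean, babble, snr_db=0.0)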
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of competition in freight transportation to and from Boston, Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/157488" rel="alternate"/>
<author>
<name>Luykx, H. M. C.</name>
</author>
<author>
<name>McHugh, G. E.</name>
</author>
<id>https://hdl.handle.net/1721.1/157488</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1931-01-01T00:00:00Z</published>
<summary type="text">A study of competition in freight transportation to and from Boston, Massachusetts
Luykx, H. M. C.; McHugh, G. E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1931; Appendix contains numerous pamphlets.
</summary>
<dc:date>1931-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The use of discriminators in the linear detection of F-M signals</title>
<link href="https://hdl.handle.net/1721.1/157485" rel="alternate"/>
<author>
<name>Lu, Pao-Wei.</name>
</author>
<id>https://hdl.handle.net/1721.1/157485</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1944-01-01T00:00:00Z</published>
<summary type="text">The use of discriminators in the linear detection of F-M signals
Lu, Pao-Wei.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1944; Includes bibliographical references (leaf 51).
</summary>
<dc:date>1944-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Valuation model for Less Developed Countries' Debt in the secondary market</title>
<link href="https://hdl.handle.net/1721.1/157480" rel="alternate"/>
<author>
<name>Carballo, Carlos Federico.</name>
</author>
<id>https://hdl.handle.net/1721.1/157480</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1989-01-01T00:00:00Z</published>
<summary type="text">Valuation model for Less Developed Countries' Debt in the secondary market
Carballo, Carlos Federico.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1989; Includes bibliographical references (leaves 75-79).
</summary>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Urban space heating with a heat pump-condenser temperature water system</title>
<link href="https://hdl.handle.net/1721.1/157478" rel="alternate"/>
<author>
<name>Yee, Wee Tong.</name>
</author>
<id>https://hdl.handle.net/1721.1/157478</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Urban space heating with a heat pump-condenser temperature water system
Yee, Wee Tong.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1976; Includes bibliographical references.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Average frequency trajectory control : normal mode.</title>
<link href="https://hdl.handle.net/1721.1/157475" rel="alternate"/>
<author>
<name>Yared, Khaled Ibrahim.</name>
</author>
<id>https://hdl.handle.net/1721.1/157475</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Average frequency trajectory control : normal mode.
Yared, Khaled Ibrahim.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1976; Includes bibliographical references.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The implementation of a joint disaggregate demand model in an urban simulation</title>
<link href="https://hdl.handle.net/1721.1/157474" rel="alternate"/>
<author>
<name>Worms, Vincent Robert.</name>
</author>
<id>https://hdl.handle.net/1721.1/157474</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">The implementation of a joint disaggregate demand model in an urban simulation
Worms, Vincent Robert.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1976; Bibliography: leaves 114-115.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Echoes From the Stone: Reframing Preservation in Syria Through Haurani Folklore</title>
<link href="https://hdl.handle.net/1721.1/157368" rel="alternate"/>
<author>
<name>Alrifai, Hajar</name>
</author>
<id>https://hdl.handle.net/1721.1/157368</id>
<updated>2024-10-17T03:12:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Echoes From the Stone Reframing Preservation in Syria Through Haurani Folklore
Alrifai, Hajar
Partially buried in the landscape of Hauran in southern Syria, my family’s 1500-year-old house, Alali—formerly a Byzantine church—further erodes with each passing year. Throughout the decades, the house has been subjected to various forms of destruction: from development, demolition, and rocket strikes to violent reconstruction. Its crumbling stones are laden with the memories of four generations and echo with a way of life that is disappearing. At the heart of Hauran are the fellahin, farmers who permanently settled in its villages in the late 19th century. As they settled, the fellahin reclaimed, inhabited, dismantled, and rebuilt the Byzantine structures, often rearranging or reimagining the original programs: chapels, houses, and cemeteries. In my family’s border village of Nasib—a place both liminal and at the margin—this rich local history lives not in formal archives but in scattered material like architectural ruins, oral poems, folk songs, diasporic transcripts, and 8mm video cassettes, many of which resonate as sonic artifacts. What began as a project of documenting the decay of our old house evolved into a meditation and manifesto on preservation outside the purview of top-down institutions. Through creative writing and cinematic intervention, Echoes from the Stone asks: what does it mean to preserve a place, and preservation for whom? In this proposed paradigm, ‘story’ becomes integral to architectural preservation. This story of Alali interweaves my journal entries with the encounters of my great-great-grandfather, Hassan Ali, an oral poet who founded the village. I further draw from my grandfather Faisal’s diaries, our family’s archival videos, and interviews with Nasib’s elders, including my grandmother Um Ghazi, an olive farmer, and Um Saado, a Bedouin matriarch and shepherd who once lived in the old home with her family. By foraging for this counter-archive of living memories, I reveal intergenerational intersections which complicate and reimbue the colonial history of the village—and of Syria—with voices that echo from the stone, voices that persist and whisper from the ground, from across borders and oceans, and from within. This interdisciplinary chronicle draws from architecture, agriculture, literature, anthropology, and film to reconstruct a social history of the village and speculate on alternate ways of dwelling, building, and preserving—reclaiming the archive, reinserting narrative, and reframing heritage through the folklore of Hauran.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wave Mechanics in Constructed Oyster Reefs and the Design of Nature-Based Coastal Adaptation</title>
<link href="https://hdl.handle.net/1721.1/157367" rel="alternate"/>
<author>
<name>Brice, James Vincent</name>
</author>
<id>https://hdl.handle.net/1721.1/157367</id>
<updated>2024-10-17T03:22:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Wave Mechanics in Constructed Oyster Reefs and the Design of Nature-Based Coastal Adaptation
Brice, James Vincent
There has been great interest in the potential of constructed oyster reefs (CORs) to function as nature-based coastal protection infrastructure, but most projects to date are designed primarily for wave attenuation and fail to consider both the environmental conditions necessary for long-term oyster reef sustainability and the importance of education and outreach in fostering environmental stewardship. Realizing the promise of nature-based coastal adaptation means building physical, ecological and social infrastructure simultaneously, requiring a design-research methodology that combines an understanding of biological design constraints, physical analysis and community engagement.&#13;
&#13;
Physical and numerical wave flume experiments were conducted to investigate mechanisms of wave energy loss in oyster-shell gabion-type CORs that place oyster biology in the foreground—particularly the influence of across-shore width, spacing and structure porosity on wave attenuation under non-breaking wave conditions. Gabion widths of O(1) wavelength were found to attenuate waves by 40%. These losses were driven primarily by internal drag, which was characterized experimentally and accurately modeled with the modified Ergun Equations and the waves2Foam library of the open-source CFD software OpenFOAM.&#13;
&#13;
This research was then translated into a suite of interactive design activities, featuring a tabletop wave flume, scale models of coastal features, and a set of coastal community member cards. Through design and creative inquiry, these tools seek to communicate complex biophysical processes in coastal ecosystems while empowering communities to reimagine what it really means to "build with nature".
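The classical Ergun relation that underlies the internal-drag model mentioned above, as a small worked example; the thesis's modified coefficients are not reproduced, and the inputs are illustrative values rather than measurements from the experiments:

def ergun_pressure_gradient(u, eps, d, mu=1.0e-3, rho=1000.0):
    # Viscous plus inertial losses through a porous layer, in Pa per m;
    # u: bulk velocity (m/s), eps: porosity, d: effective particle size (m).
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps**3 * d**2)
    inertial = 1.75 * rho * (1.0 - eps) * u**2 / (eps**3 * d)
    return viscous + inertial

print(ergun_pressure_gradient(u=0.3, eps=0.45, d=0.08))  # oyster-shell scale guess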
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When the Earth Breathes: An Anthology of Volcanic Urbanism</title>
<link href="https://hdl.handle.net/1721.1/157366" rel="alternate"/>
<author>
<name>Carucci Alvarez, Maria Gabriela</name>
</author>
<id>https://hdl.handle.net/1721.1/157366</id>
<updated>2024-10-17T04:01:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">When the Earth Breathes: An Anthology of Volcanic Urbanism
Carucci Alvarez, Maria Gabriela
Malpaís. A Spanish word used in volcanically-active landscapes to refer to the new basalt terrain that solidifies after an eruption. It translates literally to “bad country”, and it is defined as a “sterile, arid surface”. This thesis looks at the Tajogaite volcano, the most recent eruption in La Palma, one of the youngest of eight islands in the oceanic volcanic arc formation of the Canary Islands. It positions this event not as a unique site but as a manifestation of a network of bureaucratic colonial imaginaries that still operate within a disaster relief framework that exists in volcanic landscapes throughout the world. Together, these imaginaries draw an unyielding binary narrative about volcanoes as purely destructive entities, and further dismiss the porosity that exists between the geos, the bios and the polis. Igneous landscapes, through the production of new basalt floors, rich soils and ocean intrusions, traverse and redefine property boundary lines and national coastlines, which extends beyond plan views and into sectional shifts. This project aspires to spatialize the temporal moments of one volcanic eruption, questioning, ultimately, how the ownership of materials in flux, along with their transformations, can reframe our imagination of a city-volcano production that frames both as ephemeral, ever changing entities. Through ten allegories, cities are positioned inside of the geological realm, and are de-centered to contextualize them within a volcano’s lifespan. The first five stories describe the current framework, while the other half become allegories through which architecture and urbanism are leveraged as tools through which to understand the earth’s movements at different scales, temperatures and states of matter, in order to provide an alternative imaginary to current answers to the question of volcanic urbanism.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-driven Home Workspace Design: Interactive DIY Platform Mediating the User and Expert Literature</title>
<link href="https://hdl.handle.net/1721.1/157365" rel="alternate"/>
<author>
<name>Yi, Wangli</name>
</author>
<id>https://hdl.handle.net/1721.1/157365</id>
<updated>2024-10-17T03:12:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Data-driven Home Workspace Design: Interactive DIY Platform Mediating the User and Expert Literature
Yi, Wangli
After COVID-19, some employees have opted to continue working from home (WFH) or have chosen a hybrid working mode. Previous research has shown that satisfaction with the physical environment and characteristics of home workspaces are directly related to mental health, which can affect productivity and well-being. This underscores the need for better designed WFH environments. This study explores the use of data-driven tools in interior design to enhance WFH setups. It posits that these tools can transcend traditional design limitations by incorporating professional expertise and facilitating user-driven design processes.&#13;
The tool's backend is built on a comprehensive collection and classification of research literature on WFH environments, creating an interactive platform where users can engage directly in the design process. This is achieved through real-time, machine-mediated suggestions that enhance well-being without the need for professional human designers. Employing a user-centered design framework, the study develops and tests a prototype to assess its effectiveness in empowering users to intentionally and sensitively redesign their home workspaces.&#13;
Results show that participating graduate students became more aware of their WFH environment during the design process, though it largely did not change their existing workspace decisions. This observation indicates the potential benefit of this interactive machine-mediated system as a design education tool. Further testing on other demographic groups, such as those who need to focus for long hours professionally at home and those who are specifically concerned with mental health issues, is anticipated as the next step in the evaluation of this platform.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Waste to Structure: A Deep Reinforcement Learning Approach to Circular Design</title>
<link href="https://hdl.handle.net/1721.1/157362" rel="alternate"/>
<author>
<name>Sørensen, Karl-Johan I.</name>
</author>
<id>https://hdl.handle.net/1721.1/157362</id>
<updated>2024-10-17T03:51:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Waste to Structure: A Deep Reinforcement Learning Approach to Circular Design
Sørensen, Karl-Johan I.
The design-to-construction process of buildings predominantly follows a top-down linear workflow, where a design is drawn and subsequently refined to determine the required materials and components. This approach assumes an infinite material supply or the capability to manufacture what is needed for the design. Constructing in this manner is resource-intensive and wasteful, making it incompatible with our global climate goals. One way to significantly reduce our material and environmental footprint is by extending the lifespan of building materials through circular design practices. In this approach, the available materials define the architecture, inverting the process from top-down to bottom-up. This method, known as Inventory-Constrained Design, enables the creation of new buildings using materials sourced from construction and demolition waste streams. These inventories, characterized by their non-standard and uniquely varied elements, are hard to design with due to the enormous quantity of possible combinations of even a few discrete elements. Identifying a feasible design that aligns with the designer's intent and meets functional requirements becomes an overwhelmingly time-consuming task, heavily reliant on manual trial and error. Computational optimization has been implemented to automate the process, but state-of-the-art algorithms still require manually pre-defining a parametric target design-space or take too long to compute when applied to larger problems.&#13;
&#13;
This thesis proposes a new method for circular design utilizing Deep Reinforcement Learning (RL) to design structures, requiring only a design gesture and the inventory as input. It works by training an artificial neural network to sequentially assemble a structure from inventory elements, following the gesture while meeting a structural goal. Hence, the design layout directly arises from available inventory. After training, the neural net can be employed instantaneously to design new structures with new inventories without any significant computational expense. To evaluate the effectiveness of the RL method, it is applied to the specific problem of inventory-constrained design of planar roof trusses and demonstrated in a realistic example of assembling a long-span roof from a disassembled transmission tower.
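A schematic of the sequential-assembly formulation in the Gym reset/step idiom; the observation, action, and reward definitions here are invented placeholders, not the thesis's formulation:

import numpy as np

class InventoryAssemblyEnv:
    """Toy environment: place stock elements one by one toward a design gesture."""

    def __init__(self, element_lengths, gesture_length):
        self.stock = list(element_lengths)   # reclaimed elements, varied lengths
        self.goal = gesture_length           # crude proxy for the design gesture
        self.placed = []

    def reset(self):
        self.placed = []
        return self._obs()

    def step(self, action):
        self.placed.append(self.stock.pop(action))   # consume one inventory element
        reward = -abs(sum(self.placed) - self.goal)  # follow the gesture (toy proxy)
        done = len(self.stock) == 0
        return self._obs(), reward, done, {}

    def _obs(self):
        return np.array([sum(self.placed), len(self.stock)], dtype=float)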
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exœrcising a Haunted City</title>
<link href="https://hdl.handle.net/1721.1/157361" rel="alternate"/>
<author>
<name>Wong, Bryan Hon Ting</name>
</author>
<id>https://hdl.handle.net/1721.1/157361</id>
<updated>2024-10-17T04:04:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exœrcising a Haunted City
Wong, Bryan Hon Ting
With the looming threat of cultural erasure posed by Hong Kong’s repatriation to China no later than 2047, rituals emerge as the last resource sustaining the collective identity of the city. This thesis documents, through the study of local Taoist-Buddhist practices, the choreographies of rituals as a reparative tool to resist the disappearance of local culture. It is linked to findings from everyday domestic offerings to ancestors, annual festive performances of traumatic cleansing, and the booming clientele businesses of precautionary rites, all of which demonstrate their spatial and temporal qualities as methods to resist modern state control.&#13;
&#13;
To retain the residue of pre-modern practices as a critique of socio-political turmoil, this thesis suggests an alternative design that preserves and promotes the annual ghost festival for public participation. By revising the festival’s pilgrimage route and ritual sheds, this thesis transforms the traditional nature of ephemeral scaffoldings into permanent poles and follies. Situated along the city’s most haunted public estate, these structures are programmed as public facilities for fitness training and children’s playscapes. During the festival, they will be activated into ritual sheds, demonstrating a formal and functional contrast between the everyday and the ritual—from form to formlessness, exposure to closure, and lightness to heaviness.&#13;
&#13;
Designed to evade institutional surveillance, these clandestine transformations preserve solidarity and identity not by emphasizing the significance of priests exorcising in rituals, but by highlighting the quotidian motor memories developed from locals exercising within. The duality of ritual and everyday movements shall exercise the ghosts of a haunted city.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Common Grounds in Shared Waters: Integrated Design for Negotiating Equitable Development in Gosabara-Mokarsagar</title>
<link href="https://hdl.handle.net/1721.1/157360" rel="alternate"/>
<author>
<name>Mehta, Dhwani</name>
</author>
<id>https://hdl.handle.net/1721.1/157360</id>
<updated>2024-10-17T03:39:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Common Grounds in Shared Waters Integrated Design for Negotiating Equitable Development in Gosabara-Mokarsagar
Mehta, Dhwani
Along the west coast of India, in the waters of Gosabara-Mokarsagar, conflicting visions for the landscape mix and muddle. In 2016, the Muslim fisherfolk of Gosabara, 100 families already marginalized by religious, caste, and class distinctions, were banned, on environmental protection claims, from fishing, their sole traditional livelihood. This led the community to file a petition for mass euthanasia to protest the loss of their rights. Despite their protests, the Government of India announced the Kerly Recharge Reservoir Ecotourism project in 2022, which overlooked their needs, threatened their cultural identity linked to fishing, and exacerbated their traumatic history of displacement that dates back to India and Pakistan’s 1947 partition.&#13;
&#13;
Although many groups’ contested visions map onto the shared waters of Gosabara-Mokarsagar, the fisherfolk are particularly excluded from decision-making processes. Finding a singular common ground among the contesting groups is challenging due to vast differences in power, position, and privilege. This thesis, therefore, aims to ensure equitable representation for all stakeholders, particularly the disempowered fisherfolk, through an integrative design approach that forges a network of multiple ‘common grounds.’ The term ‘common grounds’ defines partnerships of two or three stakeholders, instead of all, based on mutual understanding and shared objectives like sustainable livelihoods, economic development, ecotourism, and avian conservation.&#13;
&#13;
First, I established a common ground with a local NGO, Mokarsagar Wetland Conservation Committee, by using photography, videography, and drawings to raise public awareness about this unique landscape. Initially intuitive and later strategic, I represented the lush waters as a shared home for both the fisherfolk and the birds. Second, I present a network of localized design strategies to enable partnerships that position the NGO as a mediator between the government and local communities, especially the fisherfolk, enabling it to foster alternative models of environmental stewardship. Through these partnerships, rooted in figurative ‘common grounds,’ the fisherfolk become primary, active collaborators in development processes. This thesis creates the conditions for a more equitable development model for this landscape by using design to enable grassroots partnerships that integrate communities into ecological conservation and economic growth projects.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Office of Back of House</title>
<link href="https://hdl.handle.net/1721.1/157359" rel="alternate"/>
<author>
<name>Bilal, Ekin</name>
</author>
<id>https://hdl.handle.net/1721.1/157359</id>
<updated>2024-10-17T03:38:12Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Office of Back of House
Bilal, Ekin
Office of Back of House (OoBoH, pronounced “ooh-boo”) is an architectural practice that operates at the intersection of ducts, conduits, scaffolding, custodial carts, mechanical rooms and sheds. OoBoH conducts design experiments in and around these maintenance objects and spaces typically separated from “architecture-proper.” By looking at the regulations, funding initiatives, zoning amendments and energy consumption routines that rule these spaces, OoBoH questions the boundaries that separate them from the “front of house” to begin with.&#13;
These “back of house” spaces exist right inside the thick poché line that bounds what is thought to be the domain of design. Back of house (BoH) is dictated by an obscured regime of maintenance processes, and by leveraging these currently unexamined spaces, OoBoH believes that they can become the site for tactical design interventions and new visions of maintenance culture. OoBoH is an attempt at entering architecture from the back door, re-characterizing existing buildings as dependent on the spaces and labor often hidden behind pastiche and façade.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tectonics of the semi-permanent: Reassembling fit-out architecture</title>
<link href="https://hdl.handle.net/1721.1/157358" rel="alternate"/>
<author>
<name>Schnitzler, Jenna</name>
</author>
<id>https://hdl.handle.net/1721.1/157358</id>
<updated>2024-10-17T04:14:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Tectonics of the semi-permanent: Reassembling fit-out architecture
Schnitzler, Jenna
In New York engineer Reginald Pelham Bolton’s 1911 obsolescence study “Building for Profit: Principles Governing the Economic Improvement of Real Estate”, he foretold a truth that remains today, that “the useful or economic existence of all classes of buildings, in the rapid march of modern conditions, is constantly shortening” (Bolton, 68). He details how the parts of buildings lose value at different rates—as they physically deteriorate, materials wear and things fall out of style, but even more quickly, he notes, do our structures become economically obsolete. Then and still today, the durability of building materials is the least of our concerns when considering functional obsolescence: the physical durability of a building as a whole is almost certain to exceed its economic durability.&#13;
Designers and developers recognize this gap between physical and economic obsolescence, and in response have called for a moratorium on new construction—opting instead to convert existing structures to meet changing programmatic demands. Yet in these conversions, we use the same extractive methods as new construction, filling existing frames and envelopes with non-structural light framing to differentiate the space inside. In this paradigm, building inside an existing frame still relies first on the tool of demolition.&#13;
The uneven wearing that Bolton wrote about in 1911 appears again in the iconic shearing layers diagram from Frank Duffy and Stewart Brand, who make a very similar economic argument, demonstrating that the economically fast-wearing interior layer accumulates the most investment over time, rebuilt on a cycle of every 5-10 years. We are facing a turning point in building; as of 2020, over 35% of total construction activity is renovation work, and we are making increasingly rapid changes to building function. This creates a paradigm of fit-out architecture that answers unpredictability and shifting values with indeterminacy, perpetuating a cycle of repetitive building. This project takes the converted structure as its starting point, experimenting with disassembly, reassembly, and the boundaries between fit-out and frame, sited within a larger material and economic framework that expands the definition of “value” beyond the monetary to include material resources embodied by a given structure.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Engineering Design for Reusable Concrete Building Structures</title>
<link href="https://hdl.handle.net/1721.1/157357" rel="alternate"/>
<author>
<name>Wongsittikan, Pitipat</name>
</author>
<id>https://hdl.handle.net/1721.1/157357</id>
<updated>2024-10-17T04:15:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automated Engineering Design for Reusable Concrete&#13;
Building Structures
Wongsittikan, Pitipat
Concrete contributes 8% of global CO2 emissions, largely through the reinforced concrete (RC) structural system. Unlike steel and timber structures, RC components are rarely reused because concrete and steel cannot be separated, resulting in the downcycling of components into aggregate or landfill material. The Pixelframe structural system [1] was proposed to facilitate the reusability of concrete components by implementing the external post-tensioning system existing in bridge structures and a fiber-reinforced system to design building beams and columns. This work presents an automated engineering design workflow for Pixelframe, including an engineering mechanics model of the system that conforms to ACI 318-19 [2] and fib Model Code 2010 [3], half-scale tests to verify the preliminary behavior of the system, and a scalable design algorithm for minimum embodied carbon designs. The workflow also uncovers new insights on choosing ranges of concrete strengths based on element lengths and the potential carbon reduction from refining the number of different concrete strengths in a building. This work demonstrates the utilization of existing building systems in the context of reusability and the potential of automated computational structural design to aid design decisions that facilitate the circular economy of concrete structures.
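A toy version of one design decision discussed above: choosing, per element, the concrete grade with the least embodied carbon among grades strong enough for the demand. The grades and carbon factors are assumed values, and the thesis's algorithm is not reproduced:

import numpy as np

grades_mpa = np.array([25.0, 35.0, 45.0])        # candidate strengths (assumed)
carbon_per_m3 = np.array([290.0, 350.0, 420.0])  # assumed kgCO2e per m^3

def pick_grade(required_mpa):
    feasible = np.greater_equal(grades_mpa, required_mpa)  # strong enough?
    costs = np.where(feasible, carbon_per_m3, np.inf)      # rule out weak grades
    return grades_mpa[int(np.argmin(costs))]               # least-carbon feasible

for demand in (22.0, 33.0, 41.0):
    print(demand, pick_grade(demand))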
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Salt to Scale: The Seasoning of Buildings</title>
<link href="https://hdl.handle.net/1721.1/157356" rel="alternate"/>
<author>
<name>Battikha, Christina</name>
</author>
<id>https://hdl.handle.net/1721.1/157356</id>
<updated>2024-10-17T03:02:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Salt to Scale: The Seasoning of Buildings
Battikha, Christina
We exist in thick layers of ancient minerals and material formations that perform to shape human architectural practices. Yet, with a continuous desire to force materials into designs, humanity has never ceased to disregard the active strength of a material to perform with time. The next twenty years align with a future of salt in the form of a dynamic, preservative, and corrosive mineral that shall never expire from Earth’s crust. Nevertheless, aspiring to mine, build, maintain, and preserve, humanity remains in constant search of other more durable materials designed with the presumption to last forever.&#13;
&#13;
Salt is certainly not the neutral product of a chemical reaction. It actively performs to preserve, corrode, accumulate, or maintain humanity’s creations. Embracing its ability to expand and reduce timescales, I investigate salt as a material that provides both corrosive and preservative properties, offering current architectural practices the choice and responsibility of building for eternity or for a finite moment.&#13;
&#13;
I explore ancient salt cycles shaping the last human activities remaining on the Eastern coast of the Mediterranean, in Anfeh, Lebanon. Molded into a series of geo-cultural objects, salt containers embrace their materiality and escape the dullness of a mold to acknowledge the continuous cultural cycles that exist between time, salt, and its people.&#13;
&#13;
This thesis invites current design and construction practices to think across new intervals of time that reflect the building and un-building capacities of salt as a scalable mineral, contributing to a salty architectural ritual that passes from one generation to the next; a source of luck amidst a time of ongoing crisis. Providing recipes from a salty kitchen, the work integrates seasonal practices to mine and craft salt into animate typologies, embracing the forces of salt to challenge standard architectural practice with one that thinks in the durations of salt.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>In Tension: Computational exploration of the design space of tensile network structures</title>
<link href="https://hdl.handle.net/1721.1/157354" rel="alternate"/>
<author>
<name>Burke, Adam T.</name>
</author>
<id>https://hdl.handle.net/1721.1/157354</id>
<updated>2024-10-17T03:43:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">In Tension: Computational exploration of the design space of tensile network structures
Burke, Adam T.
Cable and rope net structures are lightweight tensile systems that generally cannot resist compression or bending. Tensile network structures are often used to span long distances without intermediate supports and have found applications in art, architecture, and structural engineering due to their physical and visual lightness. However, the design of tensile net structures is generally challenging since their form cannot be arbitrarily defined. Instead, a process of form-finding must be used to establish a geometry where all edges of the network carry only tensile forces.&#13;
Physical models and computational methods can be used for the form-finding of tensile network structures; however, the primary challenge in the design process is the adjustment of the network parameters to achieve a specific design. Recent work has shown that automatic differentiation software packages can be used to efficiently design funicular structures (that is, those that work in pure tension or pure compression) with additional designer-driven objectives, but these techniques remain largely inaccessible to general designers, architects, and engineers due to the involved process of problem setup and the limited interactivity of existing tools.&#13;
To address this limitation, I introduce a new tool set consisting of two main components, Ariadne and Theseus. These components take advantage of automatic differentiation of objective functions for efficient tensile network simulation and provide a user interface for architects, engineers, and other designers as a plugin for a commonly used 3D modeling software. In this thesis, I outline the structure and features of this tool set, show results of networks optimized with different composable objectives, and show some fabricated examples. Next, I explore the generation of more complex 3D network topologies through a procedural shape grammar. Finally, I explore the use of differentiable simulation in conjunction with machine learning techniques to optimize the geometry of tensile networks using semantic input and to develop an implicit representation of the space of equal-edge-length tensed network poses. Together, this new tool set and additional methods enable a more expansive exploration of the design space of tensile networks where design intent and practical constraints are respected.
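A compact sketch of gradient-based form-finding with automatic differentiation in the spirit described, using JAX on a toy two-node cable; the energy formulation, force density, and load term are assumptions for illustration, not Ariadne/Theseus internals:

import jax
import jax.numpy as jnp

edges = jnp.array([[0, 1], [1, 2], [2, 3]])  # tiny cable chain
anchor_a = jnp.array([0.0, 0.0])             # fixed supports
anchor_b = jnp.array([3.0, 0.0])
q, w = 1.0, 0.5                              # assumed force density, nodal load

def energy(free_xy):
    pts = jnp.stack([anchor_a, free_xy[0], free_xy[1], anchor_b])
    vec = pts[edges[:, 1]] - pts[edges[:, 0]]         # edge vectors
    return 0.5 * q * jnp.sum(vec**2) + w * jnp.sum(free_xy[:, 1])

grad = jax.grad(energy)
x = jnp.array([[1.0, 0.0], [2.0, 0.0]])               # initial free nodes
for _ in range(500):
    x = x - 0.05 * grad(x)                            # plain gradient descent
print(x)  # settles into a sagging, all-tension equilibrium (y near -0.5)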
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing frameworks for an equitable future: from building decarbonization to generative modeling.</title>
<link href="https://hdl.handle.net/1721.1/157353" rel="alternate"/>
<author>
<name>De Simone, Zoe</name>
</author>
<id>https://hdl.handle.net/1721.1/157353</id>
<updated>2024-10-17T03:20:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Developing frameworks for an equitable future: from building decarbonization to generative modeling.
De Simone, Zoe
In this thesis I develop computational frameworks to understand equity under two perspectives: building decarbonization policy and generative modeling.&#13;
&#13;
Part 1 - Equitable building decarbonization&#13;
Buildings significantly contribute to global carbon emissions, necessitating urgent decarbonization to meet 2050 climate targets. The U.S. strives for net-zero emissions by 2050, supported by federal incentives promoting building upgrades. However, financing deep retrofits for all U.S. homes exceeds available public funds. This chapter proposes a model that examines long-term carbon reduction trajectories under various incentive policies, focusing on fairness and equity. Using Oshkosh, WI, as a case study, it explores the philosophical, economic, political, and mathematical dimensions of creating just and effective decarbonization policies that ensure healthy, low-carbon homes for all.   &#13;
&#13;
Part 2 - Equitable diffusion models&#13;
Generative Text-to-Image (TTI) models, while capable of producing high-quality images, often replicate training data biases. Traditional fairness views in machine learning, which consider fairness as binary, are challenged. This section introduces DiffusionWorldViewer, a novel framework with a Web UI that enables users to analyze the underlying worldviews of diffusion models and edit model outputs to align with their personal fairness perspectives, thus promoting a diverse understanding of fairness in AI technologies.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Cost Masonry for the Design of Barrel-Vaulted Flooring Systems</title>
<link href="https://hdl.handle.net/1721.1/157352" rel="alternate"/>
<author>
<name>Haile, Nebyu Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/157352</id>
<updated>2024-10-17T03:09:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Low-Cost Masonry for the Design of Barrel-Vaulted Flooring Systems
Haile, Nebyu Samuel
The world's population is projected to grow rapidly in urban areas, with 2.5 billion more urban dwellers expected by 2050 (UN-DESA, 2019). This urban growth will notably concentrate in Less Economically Developed Countries (LEDCs), where 16 of the top 20 most populous cities are anticipated to be situated by 2100 (Hoornweg &amp; Pope, 2017). LEDCs face a critical challenge in meeting the demand for affordable housing due to various factors, notably high material costs, which can account for up to 90% of residential construction expenses (Meikle, 2011). Most multi-story housing in LEDCs relies on reinforced concrete frames with flat slabs, a structurally inefficient system that in many locations depends heavily on imported cement and steel. Compounding this issue, the construction sector in LEDCs contributes significantly to their annual carbon emissions, sometimes doubling the global average and exacerbating the climate crisis (Yokoo et al., 2016). Addressing the pressing need for affordable housing requires alternative, more efficient structural systems that utilize affordable and environmentally conscious materials.&#13;
&#13;
This thesis aims to address the challenge of affordable housing by proposing the implementation of unreinforced barrel-vaulted earthen floor systems as an alternative to conventional concrete flat slabs, which are often cost-prohibitive in LEDCs. While existing research predominantly focuses on thin concrete shells for vaulted floors, this study emphasizes earthen vaulted floor systems, utilizing locally available and cost-effective materials. Specifically, it analyzes the maximum spanning capacity of three shallow unreinforced earthen barrel-vaulted floor typologies, examining their associated costs and carbon footprints. Furthermore, the thesis investigates the feasibility of one of these typologies by constructing and evaluating a physical 3m span prototype subjected to international building code loads. The outcomes highlight the structural integrity, cost-effectiveness, and reduced carbon footprint of earthen vaulted floor systems, offering insights into a more environmentally conscious and economically feasible floor system typology for building construction in LEDCs.
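A first-order statics check of the kind vault design builds on: the standard horizontal-thrust formula for a shallow parabolic barrel vault per unit width, with illustrative numbers rather than the prototype's measured values:

def horizontal_thrust(w_kn_per_m, span_m, rise_m):
    # Shallow parabolic vault, per unit width: H = w * L**2 / (8 * f)
    return w_kn_per_m * span_m**2 / (8.0 * rise_m)

# Assumed 5 kN/m load on a 3 m span with a 0.3 m rise:
print(horizontal_thrust(w_kn_per_m=5.0, span_m=3.0, rise_m=0.3))  # 18.75 kN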
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Matter of the Hold: Housing futures and the paradigm of the ship</title>
<link href="https://hdl.handle.net/1721.1/157351" rel="alternate"/>
<author>
<name>Donovan, Inge</name>
</author>
<author>
<name>Pankhurst, David</name>
</author>
<id>https://hdl.handle.net/1721.1/157351</id>
<updated>2024-10-17T03:44:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Matter of the Hold: Housing futures and the paradigm of the ship
Donovan, Inge; Pankhurst, David
Many of the port cities of North America are built upon ballast stones, discarded by ships after their transit across the Atlantic. Oftentimes, this material was sourced from waste, such as stone offcuts from quarrying, and transported across space and time, slipping through value systems: from waste, to weight, to commodity. In time, structures across the continent boasted chimneys or foundations that had begun their life in the distant granite quarries of Cornwall, or were built from bricks that had rounded Cape Horn - their material transience obscured by a perceived stability of form.&#13;
Buildings are usually seen as the endpoint of material flows, where they remain in intractable, fused assemblies until they reach obsolescence. This familiar pattern is currently playing out in the phased demolition of the Bunker Hill Public Housing Development, the largest affordable housing community on the East Coast. The BHHD can be seen in contrast to the Charlestown Navy Yard, an adjacent shipyard where centuries of investment have established a robust infrastructure of maintenance. We ask: how could the paradigm of the ship, and the creation of material strategies for large, complex assemblages funded by public spending, be applied to housing in a resource-constrained world?&#13;
In The Matter of the Hold, the demolition waste from Bunker Hill is inherited as ballast and transformed, a process made possible by the concept of the “building as hold.”&#13;
In light of the increasing shift towards buildings as storehouses of material to be held for future reuse, and as vessels of carbon sequestration, our thesis explores how design for the uneven, yet cyclical ebbs and flows of renewable resources erodes architecture’s traditionally rigid temporal boundaries of planning, construction, and occupancy, and produces temporally dynamic regimes of figure and form. The collection, administration and reconfiguration of waste materials results in the creation of new, regenerative forms of collective living that challenge the boom-and-bust logic of investment in public infrastructures.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stories of the Sky</title>
<link href="https://hdl.handle.net/1721.1/157349" rel="alternate"/>
<author>
<name>Chen, Zhanyi</name>
</author>
<id>https://hdl.handle.net/1721.1/157349</id>
<updated>2024-10-17T03:01:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Stories of the Sky
Chen, Zhanyi
My art practice probes how soft science fiction provides intervals to contemplate the tension among the relentless advancement of infrastructural technologies, their environmental and psychological repercussions, and the metaphors and culture in weather and environments. In this thesis, I explore such tension with a specialized focus on the sky via a series of artworks that engage with clouds, weather satellites, and human feelings. My experience receiving image signals from the Russian weather satellite Meteor-M2 has led me to understand the pervasive presence of satellites and their silent integration into, and control over, various environments—similar to numerous other contemporary infrastructures. The sky has never been merely a smooth surface but is striated with all kinds of machines, politics, and power dynamics. My thesis can be seen as exploring methods of coping as responses from an individual caught in such an intermingled environment, and as an inquiry into how we perceive things that are distant from us. Referring to soft science fiction approaches, I strategically misuse technologies to prioritize human subjectivity over technological functionality. In moments where the misused technologies cease to function and instead obscure, resist, complicate, and affect, I put the current dynamics between the self and technologies into play. Parallel to my artistic practice, I also take inspiration from elemental media studies for their broader theoretical discourse on the interplay between the environment and media. Media historian John Durham Peters argues for a more encompassing definition of media that includes environmental elements, including the sky, challenging the traditional dichotomy between nature and culture and the previous academic emphasis on culture over nature. This perspective allows for the exploration and appreciation of the sky’s cultural, emotional, and historical values, which are just as important as, if not more so than, those of any other conventional media, resonating with the intentions behind my artworks. Thus, “media” becomes a term that is semantically richer than it already is and requires a nuanced interpretation embracing all its connotations, and my thesis provides ways to explore this materially. By focusing on the sky as a juncture where nature and culture collide, my thesis advocates for a synthesized view that recognizes the multifaceted narratives woven through the sky—stories of technology, of culture, of grand dreams and of small melancholy.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparing Parameter Efficient Finetuning Techniques (PEFT) using Datamodels</title>
<link href="https://hdl.handle.net/1721.1/157345" rel="alternate"/>
<author>
<name>Chamdal, Harshal</name>
</author>
<id>https://hdl.handle.net/1721.1/157345</id>
<updated>2024-10-17T03:21:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Comparing Parameter Efficient Finetuning Techniques (PEFT) using Datamodels
Chamdal, Harshal
Advances in machine learning, particularly through algorithmic innovations and large datasets, have led to models with hundreds of billions of parameters. Deploying these models is challenging and costly, especially due to the extensive finetuning required. Parameter-efficient finetuning techniques (PEFT) have been proposed to address this issue by significantly reducing the number of trainable parameters, achieving comparable results to full-parameter finetuning. Despite widespread adoption, PEFT methods are often used interchangeably without considering their qualitative differences and performance under various data distributions. This thesis extensively compares three PEFT methods: LoRA, BitFit, and (IA)³, using the ModelDiff framework to identify and apply data interventions. Our analysis reveals that the performance of these methods varies widely with different interventions, with BitFit showing the most variance, while LoRA and (IA)³ demonstrate greater resilience. This study informs the selection and optimization of PEFT techniques based on specific NLP task requirements, balancing performance, computational efficiency, and robustness to text variations.
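To make the comparison concrete, here is a minimal numpy sketch of the LoRA update rule, the first of the three adapters (BitFit instead trains only bias terms, and (IA)³ rescales activations); all dimensions and names below are illustrative, not taken from the thesis.

    import numpy as np

    # LoRA: freeze the pretrained weight W and learn a low-rank update,
    # so the effective weight is W + (alpha / r) * B @ A.
    d_out, d_in, r, alpha = 64, 128, 4, 8
    W = np.random.randn(d_out, d_in)         # frozen pretrained weight
    A = np.random.randn(r, d_in) * 0.01      # trainable, rank r
    B = np.zeros((d_out, r))                 # trainable, zero-initialized
    x = np.random.randn(d_in)
    y = W @ x + (alpha / r) * (B @ (A @ x))  # adapted forward pass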
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gender Glitch Matrix: Queer Aesthetics and the Politics of Error in Digital Media</title>
<link href="https://hdl.handle.net/1721.1/157344" rel="alternate"/>
<author>
<name>Akdoğan, Merve</name>
</author>
<id>https://hdl.handle.net/1721.1/157344</id>
<updated>2024-10-17T04:10:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Gender Glitch Matrix: Queer Aesthetics and the Politics of Error in Digital Media
Akdoğan, Merve
Situated at the intersection of digital media studies, queer theory, and glitch art, this thesis critically examines the normative biases and centralization in artificial intelligence (AI) and, more specifically, machine learning systems as they relate to marginalized identities. Unlike conventional approaches that prioritize optimization and polishing of AI, this thesis introduces the notion of a glitch—a short-lived digital error—as both a metaphorical and an artistic technique that critically subverts societal norms. The thesis interrogates AI’s structure, dissecting its “black box” complexities to question the vulnerability of computational systems. It proposes an alternative approach that embraces error as a means of resistance, developing a critical commentary on technology production through artistic interventions. Grounded in Judith Butler’s “Matrix of Intelligibility,” the artistic interventions introduced in this thesis aim to craft a glitch aesthetic that integrates queer theoretical perspectives with practical machine learning applications. This thesis asks how AI models can propagate entrenched societal norms about gender, what political errors AI systems make, and what activist potential technology holds in challenging these cisheteronormative renderings. Aiming to develop and test machine learning models for identifying bias in digital media, this research is organized into four sections, beginning with the development of a theoretical framework and a review of relevant literature on AI errors and glitch art. Subsequently, the thesis explores the design of glitch prototypes through training and testing machine learning models. Finally, through experiments using these methodologies, including archival work, media manipulation, and attribution studies with AI models, this thesis reveals the AI systems’ deficiencies as they relate to queer identities. This work underscores the transformative potential of integrating artistic techniques to subvert and reveal technological development. It envisions technology not merely as a mechanism for perfecting systems but as a powerful conduit for advocating a more inclusive future.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Between City and Self: Reading Beirut in Mohamed Soueid’s Tango of Yearning</title>
<link href="https://hdl.handle.net/1721.1/157343" rel="alternate"/>
<author>
<name>Anouti, Ghida</name>
</author>
<id>https://hdl.handle.net/1721.1/157343</id>
<updated>2024-10-17T03:57:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Between City and Self: Reading Beirut in Mohamed Soueid’s Tango of Yearning
Anouti, Ghida
Set in Beirut in the aftermath of the Lebanese Civil War (1975-1990), the pseudo-documentary film Tango of Yearning (1998) follows the lives of several subjects who speak of love, loss, dreams, and cinema as they navigate their fragmented postwar city. Directed by underground Lebanese filmmaker Mohamed Soueid (b. 1959) and shot purely on video, the film is saturated with cinematic references, images of urban sites, sensual and religious symbols, and sociopolitical intimations. Soueid sees Tango of Yearning – the first in a trilogy titled Civil War – as an ‘obituary’ of his life prior to making this film. Hence, for him, the film is rooted in the past, yet I argue that it is a significant augury of Beirut itself as a palimpsest of urban memories sublimated by Soueid. This argument is nestled between Soueid’s assessment of his film as a personal work of cinema, and my own reception of it as symptomatic of Beirut’s history in the periods prior to, during, and after the Civil War.&#13;
Tango of Yearning is, at its core, a meditation on the city of Beirut as it transformed throughout various periods governed by the traumatic event of the Civil War. Through a close reading of the film, I reveal how an ostensibly private essay is also a medium for archiving memories either forgotten or suppressed by the nation’s contested amnesia of the war, while also investigating how the postwar city’s history intertwines with the filmmaker’s biography. A largely unrecognized yet significant contributor to the Arab world’s video and cinema scene, Soueid – an agent, actor, and narrator of the city – is one of the most sensitive chroniclers of life in Beirut during the 1990s and early 2000s. Weaving historical realism with fabulation to fill or distort representational lacunae, his film offers doubled lenses – one of his life and another of Beirut’s contemporary history. Through a chronological reading of an otherwise nonlinear film, I extract a history of Beirut in three stages: its cosmopolitan yet polarized 1960s with a brimming arts, film, and literature scene; its violent war characterized by sectarianism and fragmented nationalism; and its amnesic postwar era in which the film was created. Accordingly, I ask how Soueid’s private image-making apparatus draws an image of Beirut through his own autobiographical narration.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Damp Skin: Portraits of Taiwanese Domesticity, Resilience, and Otherness</title>
<link href="https://hdl.handle.net/1721.1/157342" rel="alternate"/>
<author>
<name>Chan, Cheng-Hsin</name>
</author>
<id>https://hdl.handle.net/1721.1/157342</id>
<updated>2024-10-17T03:48:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Damp Skin: Portraits of Taiwanese Domesticity, Resilience, and Otherness
Chan, Cheng-Hsin
This thesis is an intricate exploration of Taiwanese life under constant dampness, weaving together the present with historical threads and personal memories of home and motherhood alongside broader socio-historical narratives. It examines Taiwanese domesticity through the dual prisms of “dampness” and “enclosure failure” to reveal how these elements shape, and often fail to meet, Taiwanese people’s needs for physical comfort. Central to this research is exploring the historical marginalization of the Taiwanese body in domestic spatial development under the influence of external powers.&#13;
&#13;
Damp Skin unfolds through three intertwined registers that offer diverse materials and perspectives spanning time and space, providing a layered understanding of Taiwanese history and contemporary experiences: I. Home, Memory, and Motherhood, II. Planetary Climate and Body, and III. Domesticity and Architectural Enclosure in Taiwan. This thesis argues for the continuous repositioning of our bodies (ourselves and family) in response to external factors — climate, society, and power. It serves to revisit the past, document the present, and speculate on the future, enhancing our understanding of everyday life in Taiwan and exploring potential cultural adaptations. Each thread collects materials and offers distinct perspectives on the historical and contemporary shaping of Taiwanese identity and space. Together, they form portraits of the complexities and nuances of Taiwanese domesticity, resilience, and otherness, framed through the intimate and expansive lens of dampness and enclosure.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Cycles of aMaízing Things</title>
<link href="https://hdl.handle.net/1721.1/157341" rel="alternate"/>
<author>
<name>del Busto, Juan Manuel Chávez Fernández</name>
</author>
<id>https://hdl.handle.net/1721.1/157341</id>
<updated>2024-10-17T03:40:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Cycles of aMaízing Things
del Busto, Juan Manuel Chávez Fernández
Throughout this thesis, maíz becomes a trans-scalar agent of exchange across time, cultures, and territories. Maíz, as both a symbol and a subject, is intensely charged with tradition and disruption, operating within a jumbled feedback state that transcends myths and industry. The work situates my reading of the artwork, Río Revuelto by the Mexican artist José Chávez Morado (1949), as a guiding framework to approach a kaleidoscopic entanglement of different narratives. Considering maíz under four different lenses: the cosmological, the national identity, the resistance, and the product, I argue for constant feedback among them across the re-transforming cycles of maíz. The crucial concern driving this exploration is how maíz and humans are ingrained into each other's systems — re-configuring methods, spaces, and forms of display. The display refers not only to maíz as a ‘product’ but as a continuous entity in transition, transforming and adapting to the social and cultural conditions where it circulates —whether through myth, ritual, portrayal, strategy of preservation, building typology, commodity, by-product, or history. The design approach is presented through performative artifacts that symbolize the systems through which maíz circulates. They are further represented in an essay film. Whether referencing myths, projections, displays, or products, the artifacts become mnemonic objects to think with—depicting the cycles of maíz as a world-building exercise. Maíz becomes the point that traces simultaneity in the history of humanity, representing a symbol eternally under construction. Acknowledging this monumental scale requires my work to be only a grain-sized glimpse of speculative potentials in design.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Alive Scene: Participatory Multimodal AI Framework for Collective Narratives in Dynamic 3D Scene</title>
<link href="https://hdl.handle.net/1721.1/157340" rel="alternate"/>
<author>
<name>Cheng, Chi-Li</name>
</author>
<id>https://hdl.handle.net/1721.1/157340</id>
<updated>2024-10-17T03:17:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Alive Scene: Participatory Multimodal AI Framework for Collective Narratives in Dynamic 3D Scene
Cheng, Chi-Li
This thesis introduces "Alive Scene," an online participatory platform for recording dynamic 3D environments and building collective interpretations of objects, events, and atmospheres within them. For instance, a user can browse a recording of a room and describe objects or events to locate them; or select a time frame, adjust the camera angle, and add a comment to share a new narrative of the scene with others. Unlike traditional digital formats such as simple videos or 3D models, this platform is both three-dimensional and temporal, and the views are searchable using natural language sentences and sorted by relevance. By building the platform and testing it with human subjects, this thesis demonstrates that such a new participatory medium for dynamic 3D environments fosters communal knowledge and enhances the spatial understanding of individual users. Alive Scene produces rich, semantic-level communication among users, akin to the dynamic propagation of cultural memes. The Alive Scene System integrates two advanced techniques: 3D scene reconstruction using Gaussian splatting, and semantic linking of human perceptions through the Contrastive Language-Image Pretraining (CLIP) model. These methods are currently among the most popular and efficient. The platform continually enriches its collection of users' views and interpretations through interactions with this semantic AI system, enabling the archiving of user inputs and suggesting new avenues for exploring diverse perspectives. The streamlined interaction interface promotes user engagement and facilitates the discovery of related views and perceptions. The user test employs a dynamic 3D scene of a student lounge, recorded at four different times, and involves 20 participants generating a total of 235 inputs. Four types of interactive behaviors were observed regarding users' views and interpretations: Disagreement, Simple Agreement, Sharing Perception by adding comments, and Adjusting Views. The analysis indicates evolutionary trends: Initially, users express disagreements and provide objective, general comments. As the platform gathers these inputs, a transition occurs where users begin sharing more subjective information and reinterpreting others' views. Eventually, users adjust camera angles when the captions are agreeable. Visualizations of this analysis illustrate that these dynamic behavioral changes facilitate the development of collective perception. For further investigations, this study could benefit from incorporating more elaborate 3D scenes, additional recording times, and a larger number of participants.
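As a sketch of the retrieval step only, the snippet below ranks stored views against a text query by cosine similarity, the way a CLIP-based search would; the random vectors stand in for real CLIP image and text embeddings, and all names are illustrative.

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    dim = 512                                 # CLIP-sized embedding; placeholder data
    views = {f"view_{i}": np.random.randn(dim) for i in range(235)}
    query = np.random.randn(dim)              # would come from the text encoder
    ranked = sorted(views, key=lambda name: cosine(query, views[name]), reverse=True)
    print(ranked[:5])                         # five most relevant views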
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond the Bioclimatic Chart: An Automated Simulation-Based Method for Assessing Natural Ventilation and Passive Design Potential</title>
<link href="https://hdl.handle.net/1721.1/157339" rel="alternate"/>
<author>
<name>Herb, Svenja</name>
</author>
<id>https://hdl.handle.net/1721.1/157339</id>
<updated>2024-10-17T04:09:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Beyond the Bioclimatic Chart: An Automated Simulation-Based Method for Assessing Natural Ventilation and Passive Design Potential
Herb, Svenja
Technological advancements in the building industry have significantly transformed climate and comfort control in buildings. This allows for air conditioning in deserts and heating in the Arctic, ensuring occupant comfort. This innovation, however, has contributed to a homogenization of architectural designs globally, from the hot climates of Mumbai to the cold environments of Boston and moderate settings like London. Such uniformity often overlooks local climatic conditions, resulting in increased energy consumption and elevated greenhouse gas emissions. Climate-responsive design, on the other hand, creates solutions that leverage local climates—such as through natural ventilation and optimal solar gain management—to reduce energy consumption. Depending on climate and program, the coordinated use of these passive design strategies may or may not lead to indoor thermal comfort conditions without the need for an air-conditioning system. There are two primary approaches to exploring the passive design potential of a building during schematic design: the bioclimatic chart and building energy modeling (BEM). The former method is a key feature in building science textbooks and is based solely on widely available local weather data. It provides general design advice without requiring previous knowledge or the need to describe the building program. BEMs facilitate detailed testing of how a building is operated and how the above-listed passive design techniques can be combined to obtain the highest possible comfort conditions and energy savings. However, BEM has traditionally been more complex and time-consuming to use, as it requires significant knowledge of the underlying building physics and the numeric methods that mimic them. This thesis evaluates the bioclimatic chart's accuracy in predicting overheating hours associated with various passive design strategies, through comparison with BEM data. Furthermore, it introduces a new simulation-based approach called “ECOmpass”. ECOmpass automates early-stage design simulations and offers design recommendations for passive strategies with just one click.
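For orientation, the quantity being compared is simple to compute once hourly temperatures exist; the sketch below counts annual overheating hours against an assumed comfort threshold, with random data standing in for simulated indoor temperatures (this illustrates the metric, not ECOmpass internals).

    import numpy as np

    rng = np.random.default_rng(0)
    hourly_temp = 22 + 8 * rng.random(8760)   # stand-in for simulated indoor temps, degC
    threshold = 26.0                          # assumed comfort limit
    # hours strictly above threshold: clamp the excess at zero, count the nonzeros
    overheating_hours = int(np.count_nonzero(np.maximum(hourly_temp - threshold, 0)))
    print(f"overheating hours per year: {overheating_hours}")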
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Alternate Imaginaries for the Kinara: River Ravi’s Edge as a Threshold</title>
<link href="https://hdl.handle.net/1721.1/157338" rel="alternate"/>
<author>
<name>Khalil, Mahwish</name>
</author>
<id>https://hdl.handle.net/1721.1/157338</id>
<updated>2024-10-17T03:47:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Alternate Imaginaries for the Kinara: River Ravi’s Edge as a Threshold
Khalil, Mahwish
In the lower riparian landscape of Punjab, Pakistan, various communities confront the challenges of living within the active floodplain of river Ravi as it flows alongside the city of Lahore. These communities navigate the dissonances of the river’s edge—its Kinara, marked and molded by persistent colonial (mis)representations rooted in practices of erasure and division. Stepping away from historical depictions that have reduced the river to a mere resource for acquisition, this thesis engages with design and the oral tradition of storytelling, known as Qissa Khwani, to propose new modes of knowing, witnessing, and ultimately, cultivating alternative imaginaries for Ravi. This thesis seeks to illuminate the overlooked narratives of a river and its communities by drawing inspiration from, and centering the voices and legacies of, those most impacted by regressive depictions of a linear floodplain. It stages newer encounters and engagements with Ravi and its communities by stitching together stories of numerous community members, the dwellers, the boatmen, and the civil defense divers, actively defying and transforming the seemingly static Kinara—their home—through cultural and economic production. These pluralistic alternatives serve as a deliberate departure from the current large-scale, mega-urban development projects planned for the riverfront, which not only overlook the communities living along its banks but also employ idealized depictions of Ravi to attract capital. Finally, this thesis questions how the river's edge can be remapped to allow for the dismantling of top-down visions while addressing an urgency embodied within the shallow, receding flows of a polluted river, whose uncertain future remains contingent on distinct lines.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Disrupting Monocultural Tendencies through&#13;
Multimodal Montage</title>
<link href="https://hdl.handle.net/1721.1/157337" rel="alternate"/>
<author>
<name>Singha, Mrinalini</name>
</author>
<id>https://hdl.handle.net/1721.1/157337</id>
<updated>2024-11-12T18:32:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Disrupting Monocultural Tendencies through&#13;
Multimodal Montage
Singha, Mrinalini
This thesis contends with the pervasive impact of monocultural tendencies as manifested in the political, cultural, and media landscapes of contemporary India, particularly focusing on the unfolding context of 2024. Amidst an intensifying crisis marked by polarization, historical erasure, and the rise of hegemonic nationalism, this thesis posits art, particularly through the framework of `multi-modal montage,' as an agent of political disruption for `redistributing the sensible.' Tracing the aesthetic and political evolution of montage from its early 20th-century origins in Soviet cinema to its contemporary forms, the thesis outlines the transition from montage defined by collision and conflict to the soft, spatial, and interactive practices of figures such as Nam June Paik and Harun Farocki. It further investigates how `surface tension' and `unquiet objects' manifest within the multi-modal montage in the works of artists like Nalini Malani, Krzysztof Wodiczko, Shilpa Gupta and Nida Sinnokrot.&#13;
&#13;
As an Indian artist, the author situates her own practice within this discourse, highlighting projects such as `The Whistleblower' (2023), a tangible archive within an everyday object, and `A Mystery for You' (2023-24), a fact-checking game that merges a tangible interface with a large language model (LLM). These works exemplify the thesis's argument that artistic interventions can critically challenge and reframe dominant sociopolitical narratives, offering new perspectives and resistances against monocultural hegemonies. Extending this analysis, the author discusses her exhibition 'Forensic Artifacts of a Democracy in Crisis' (2023) as an operative space. Through a curated assemblage of works, the exhibition provided a physical space for interaction, reflection, and conversation, enabling audiences to engage with the themes of the thesis viscerally. In all, this thesis argues for the critical role of art in challenging memory and forgetting, from fabricated histories to the fall and rise of monuments, and from the polarization of media to the flattening of identities, of echo-chambers and absences and grand narratives.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing (with) Trees: Active Agents in Architectural Production</title>
<link href="https://hdl.handle.net/1721.1/157335" rel="alternate"/>
<author>
<name>Garinois, Laura-India</name>
</author>
<id>https://hdl.handle.net/1721.1/157335</id>
<updated>2024-10-17T03:37:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Designing (with) Trees: Active Agents in Architectural Production
Garinois, Laura-India
This thesis embarks on a multifaceted exploration of the relationship between urban trees, architectural representation, and the legal framework governing their existence, with a particular focus on tree hearings in Boston as a platform for this study. Against the backdrop of capitalist influences shaping urban landscapes, standardized modes of representation often prioritize economic interests, relegating urban trees to two-dimensional depictions in architectural drawings. Such representations obscure the rich complexity and ecological significance of trees, thereby shaping design choices that threaten their vitality. Amidst these challenges, Massachusetts has initiated efforts towards granting public trees legal recognition, providing a foundation upon which this study builds to advocate for further improvements in tree rights and protections. This encompasses tree hearings, where developers and residents seek permission for the removal of healthy public trees, involving municipal authorities, tree wardens, and local communities. Through extensive dialogue with experts and stakeholders dedicated to this cause, the thesis identifies loopholes within existing laws and institutional frameworks, leading to the development of a tree appraisal system employing alternate representations of trees that encourage new ways of valuing their role within architectural thinking and production. The exploration examines how a more nuanced collaboration with trees in design processes can enhance the value of architecture, and how design can in turn contribute to the protection of trees. Ultimately, the goal is to enrich tree hearing conversations by recognizing them as reflections of a larger climate conversation around trees and nature. By intervening in their legal site and imagination, the thesis fosters a more inclusive dialogue that transcends the binary decision of whether to cut down a tree or not.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>What is Ecology?</title>
<link href="https://hdl.handle.net/1721.1/157334" rel="alternate"/>
<author>
<name>James, Aubrie R. M.</name>
</author>
<id>https://hdl.handle.net/1721.1/157334</id>
<updated>2024-10-17T03:03:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">What is Ecology?
James, Aubrie R. M.
There are many ways to try to make sense of that which is. Ecology, which deals with organisms in relation to their environments, makes sense of that which is through the study of relations among and between organisms and their environments. Modern ecology is predominantly understood as a scientific enterprise. However, science as a methodology is too often aligned and entangled with extractive, capitalist logics: the cycle of enclosure-dispossession-scientific practice-imperial expansion not only undergirds and defines the ecological crises of our times but forecloses our ability to conceive of the diverse ways in which life is configured. For ecology, this is a predicament of ethics, yes, but also of a clear-eyed understanding of what is (and our relationship to it). The urgent question for ecologists, given this predicament, is how to break out of this cycle. This thesis explores the potential of building an artistic practice to question the forms of ecology: how it is conducted, how it is communicated, and what it produces. Drawing inspiration variously from feminist, postcolonial, and ecosocial art, media theory, and philosophy, this thesis probes the limits of ecology under the suspicion that the point of leverage for change is to differently enact how we think, make, and do in relation to the world in, around, and constituting us.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Much Does It Really Cost? A Dynamic Approach to Building Retrofit Costs for Decarbonization Pathways</title>
<link href="https://hdl.handle.net/1721.1/157333" rel="alternate"/>
<author>
<name>Kirkeby, Amanda</name>
</author>
<id>https://hdl.handle.net/1721.1/157333</id>
<updated>2024-10-17T03:36:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How Much Does It Really Cost? A Dynamic Approach to Building Retrofit Costs for Decarbonization Pathways
Kirkeby, Amanda
Carbon emissions are driving the planet out of its delicate Goldilocks balance. Evidence and the call to action date back to 1896, when Swedish scientist Svante Arrhenius published his seminal paper first predicting the effect of carbon dioxide on global temperatures. With the Intergovernmental Panel on Climate Change (IPCC) goal of global net zero emissions by 2050, the urgency is stronger than ever. An ever-growing number of municipalities are setting pledges to do their part, often without a concrete plan. With buildings accounting for 40% of total global emissions, building retrofits are a key component of these pathways to zero carbon. Urban building energy modeling (UBEM) research efforts have developed physics-based decision-making tools to define city-scale technology pathways to reach climate goals. However, a crucial question in making these pathways actionable has been largely neglected: how much does it really cost? The scarcity of contemporary cost data and methods for cost prediction at the urban scale makes this question difficult, and further questions around equitable incentive programs nearly impossible, to answer. This work demonstrates the concept and relevance of implementing a dynamic cost model in the UBEM context. Several cost models are applied to a case study of 13,000 residences in Oshkosh, WI, to predict costs for homeowners to retrofit their homes with three different upgrade packages. A willingness-to-pay analysis is then performed with upfront cost predictions from different models, illustrating the impact a more robust cost model may have in providing more realistic predictions of an upgrade strategy’s techno-economic success. Through its compatibility with existing UBEM frameworks and local input costs, the dynamic building upgrade cost model holds the potential to further support municipalities in developing economically feasible building retrofit strategies for decarbonization pathways.
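The contrast at stake can be shown in a few lines: a flat per-home cost assumption versus a dynamic model whose prediction moves with building characteristics; the coefficients below are invented for illustration and are not the thesis's fitted values.

    # Toy contrast between a flat per-home retrofit cost and a dynamic model
    # that scales with floor area; every number here is an assumption.
    def flat_cost(area_m2):
        return 18000.0                        # one number for every home

    def dynamic_cost(area_m2, fixed=6000.0, per_m2=85.0):
        return fixed + per_m2 * area_m2       # mobilization cost plus area-driven cost

    for area in (90, 150, 260):
        print(area, flat_cost(area), dynamic_cost(area))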
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pathways to Net Zero: Financing Strategies For Low-Income Homeowners</title>
<link href="https://hdl.handle.net/1721.1/157332" rel="alternate"/>
<author>
<name>Moore, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/157332</id>
<updated>2024-10-17T03:48:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Pathways to Net Zero: Financing Strategies For Low-Income Homeowners
Moore, Lauren
Housing retrofits are crucial for accomplishing national housing sector decarbonization goals. Single-measure retrofit improvements are not sufficient for low-income homes, which are often in less-than-optimal condition and are subsequently uncomfortable and expensive to operate. Comprehensive retrofit approaches are necessary to achieve the energy efficiency targets for the aging housing stock. Historic educational and economic barriers pose challenges for incentivizing low-income homeowners to retrofit their homes. Proactive strategizing that considers both educational and economic factors is needed to see increased retrofit adoption amongst these groups. Policy makers need an understanding of retrofit impact for more effective resource allocation, and homeowners need better incentives and tools to conceptualize the benefits, time commitment, and cost associated with deep retrofits. To address this problem, we present a retrofit pathway modeling framework to accurately predict the time required for a homeowner to achieve comprehensive retrofits. Taking retrofit cost and annual energy savings into account, we propose a new government-sponsored and -led financing program, inspired by the successful 401(k) retirement plans and 529 savings programs, which offers either a 2x or 3x match to the annual investment the homeowner commits to saving each year, to ensure low-income homeowners are accounted for in the journey to building sector decarbonization by 2050 and beyond. For a case study home in the Grove Park neighborhood of Atlanta, Georgia, hot water heat pump retrofits are the most impactful on annual building energy use, but retrofits with low cost and short payback periods, such as installing LED light fixtures and low-flow showerheads, have the largest potential for shortening the years required to achieve comprehensive retrofits and are therefore recommended for policy makers to incentivize in the community. Strategic financing can be used to ensure a financially feasible pathway for homeowners with varying annual budget amounts. For the example home, the program allows homeowners who invest only $50 annually to achieve comprehensive retrofits four times faster than if they only utilize existing incentive programs. Individual building energy simulation combined with socioeconomic analyses is needed to meet the needs of diverse low-income communities across the United States.
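The arithmetic behind the match is easy to check; assuming a package cost (the dollar figure below is invented), the sketch reproduces the roughly four-fold speed-up quoted for a $50 annual deposit under a 3x match.

    retrofit_cost = 2400.0        # assumed package cost, dollars
    annual_deposit = 50.0         # the homeowner contribution from the example
    for match in (0, 2, 3):      # 0 = no program, then the proposed 2x and 3x matches
        annual_total = annual_deposit * (1 + match)
        years = retrofit_cost / annual_total
        print(f"{match}x match: {years:.0f} years to fund the package")

With these numbers, 48 years with no match falls to 12 years with a 3x match, i.e. four times faster.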
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Liquid to Stone: Reimagining the design of concrete structures for reuse</title>
<link href="https://hdl.handle.net/1721.1/157330" rel="alternate"/>
<author>
<name>Donovan, Inge</name>
</author>
<author>
<name>Schnitzler, Jenna</name>
</author>
<id>https://hdl.handle.net/1721.1/157330</id>
<updated>2024-10-17T03:01:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Liquid to Stone: Reimagining the design of concrete structures for reuse
Donovan, Inge; Schnitzler, Jenna
Every year, 360 million metric tons of concrete construction waste are sent to landfill in the United States, in large part originating from the demolition of economically obsolete buildings. Meanwhile, global demand for new concrete is accelerating – in 2021, the production of new concrete was responsible for up to 9% of global CO2e emissions, and our dependence on concrete is only expected to rise over the next 50 years.&#13;
Concrete’s ubiquity is reinforced by its liquidity; it is simultaneously invisible and ever-present, undergirding global modernization through its cheap, local nature and its ability to take on any form in short order. However, design with concrete has remained mostly unchanged, with inefficient, irreversibly fused structures cast in place to meet quickly changing programmatic needs, few of which survive longer than 30-50 years. Due to its careless application, concrete is perceived as a low-value material, and is therefore used wastefully, discarded quickly, and usually downcycled. The monolithic and inflexible nature of reinforced concrete structures perpetuates concrete’s culture of obsolescence and demolition.&#13;
To meet emissions targets and demand for building, we need to close the loop by developing a circular economy of structural materials. Instead of reusing salvage materials that have already entered the waste stream, this thesis confronts the design of new concrete structures directly, presenting the design of and methodology behind Pixelframe, a precast kit of parts for reconfigurable concrete structures. In a future where buildings are increasingly seen as stockpiles for subsequent reuse, the reinvention of concrete structures is an imperative that presents an opportunity for a new tectonic – concrete is no longer a liquid poured once and cured on site, but instead is a material more akin to stone, retaining value across multiple lifespans.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parametric PAINTOVER: Generating Design Models via Image Encoders and Latent Trajectories</title>
<link href="https://hdl.handle.net/1721.1/157329" rel="alternate"/>
<author>
<name>Tas, Demircan</name>
</author>
<id>https://hdl.handle.net/1721.1/157329</id>
<updated>2024-10-17T03:49:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Parametric PAINTOVER: Generating Design Models via Image Encoders and Latent Trajectories
Tas, Demircan
Design is an iterative process where physical or virtual prototypes are created, rendered, evaluated, and modified repeatedly. Sketches and direct manipulations are made on the rendered or fabricated mediums to create and communicate intended changes. Parametric design is a prominent paradigm in design and architecture where hand-crafted functions map input parameters to a design space to rapidly generate samples. Direct modifications often lead to novel states outside the design space of a parametric model. Moreover, parametric models are not cyclic: their input and output spaces are not interchangeable without human intervention. Models must be reconfigured to accommodate out-of-domain changes, preventing parametric design tools from being integrated into early phases of design, where changes are commonplace. We propose latent spaces of large pre-trained auto-encoders as shared design spaces for translating states of design among mediums and dimensions. We implement rendering and image encoding to use images as an interface between the outputs and inputs of the model, enabling direct modification by painting over. We use sketches, renderings, and 3D models for sampling latent spaces. We share experiment results acquired through linear interpolation and a custom spline implementation in latent spaces. We present samples from found latent trajectories that match samples from ground-truth parametric design models. We find that trajectories exist in latent spaces that approximate axes in parameter spaces. Using images and 3D models as input and output, we provide a cyclic, software-agnostic tool for design generation with parameter approximation capabilities that generalize. We provide findings from experiments and present a software repository for Parametric PAINTOVER, including our sketch augmentation model Inverse Drawings and our many-dimensional latent spline implementation, L-NURBS.
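As a minimal sketch of the trajectory idea (the thesis uses large pre-trained auto-encoders and the L-NURBS spline; here the codes are random and the decoder is omitted), linear interpolation between two latent codes already yields a sequence of intermediate design states:

    import numpy as np

    z0, z1 = np.random.randn(256), np.random.randn(256)   # two latent codes
    trajectory = [(1 - t) * z0 + t * z1 for t in np.linspace(0.0, 1.0, 8)]
    # decoding each point would render an intermediate design state
    print(len(trajectory), trajectory[0].shape)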
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementation of Machine Connectivity in Low-Volume High&#13;
Variety Manufacturing Line</title>
<link href="https://hdl.handle.net/1721.1/157327" rel="alternate"/>
<author>
<name>Pal, Kanishk</name>
</author>
<id>https://hdl.handle.net/1721.1/157327</id>
<updated>2024-10-17T03:09:15Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Implementation of Machine Connectivity in Low-Volume High&#13;
Variety Manufacturing Line
Pal, Kanishk
This thesis provides a comprehensive analysis and implementation plan for enhancing machine connectivity within a manufacturing facility at SLB. The study investigates the existing limitations of the facility's connectivity infrastructure and proposes an advanced connectivity software suite as a solution, presenting a compelling business case for its implementation. The software’s scope involved DNC (direct numerical control), allowing for line-by-line feeding of CNC code to machine controllers, as well as machine data collection for real-time shop floor monitoring. The research emphasizes the development and implementation of an advanced network infrastructure designed to improve efficiency, security, and data handling capabilities. There is discussion regarding cybersecurity practices, specifically those related to industrial control systems that leverage CNC machining processes. The software implementation process is detailed, highlighting the necessary steps and information required for successful integration. These include: 1) securing connections to critical CNC machine controllers, 2) acquisition of hardware, including a local server and network switch, 3) server bring-up through remote imaging and installation of standard monitoring tools, and 4) implementation of software on edge devices for CNC file transfer and machining data collection. Additionally, the thesis discusses the limitations encountered during implementation and outlines future steps to address these challenges.
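As a generic illustration of the DNC idea only (the thesis uses a commercial connectivity suite, and production drip-feeding adds flow control and handshaking), line-by-line transfer over a serial link can be sketched with pyserial; the port and baud rate below are placeholders.

    import serial  # pyserial

    def drip_feed(program_path, port="/dev/ttyUSB0", baud=9600):
        # stream a CNC program line by line so controllers with small
        # memories can execute it as it arrives
        with serial.Serial(port, baud, timeout=1) as link:
            with open(program_path) as program:
                for line in program:
                    link.write(line.strip().encode("ascii") + b"\r\n")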
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Capitalization of electric railways</title>
<link href="https://hdl.handle.net/1721.1/157307" rel="alternate"/>
<author>
<name>Zee, J. Zohn.</name>
</author>
<author>
<name>Zi, Su.</name>
</author>
<id>https://hdl.handle.net/1721.1/157307</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1915-01-01T00:00:00Z</published>
<summary type="text">Capitalization of electric railways
Zee, J. Zohn.; Zi, Su.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1915
</summary>
<dc:date>1915-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dietary fatty acids, prostaglandins and infectious and endotoxin challenges in guinea pigs</title>
<link href="https://hdl.handle.net/1721.1/157304" rel="alternate"/>
<author>
<name>Mascioli, Edward A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157304</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Dietary fatty acids, prostaglandins and infectious and endotoxin challenges in guinea pigs
Mascioli, Edward A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1984; Vita.; Includes bibliographical references.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Telecommunication industry in Japan : comparative study of telephone market history between U.S. and Japan.</title>
<link href="https://hdl.handle.net/1721.1/157302" rel="alternate"/>
<author>
<name>Yamanouchi, Ichiro.</name>
</author>
<id>https://hdl.handle.net/1721.1/157302</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1976-01-01T00:00:00Z</published>
<summary type="text">Telecommunication industry in Japan : comparative study of telephone market history between U.S. and Japan.
Yamanouchi, Ichiro.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1976; Bibliography: leaves 153-155.
</summary>
<dc:date>1976-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Steady spinning of synthetic silk-like fibers and transient filament stretching of semi-dilute and concentrated polymeric fluids</title>
<link href="https://hdl.handle.net/1721.1/157300" rel="alternate"/>
<author>
<name>Brauner, Octavia Flora,
            1975-</name>
</author>
<id>https://hdl.handle.net/1721.1/157300</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>2001-01-01T00:00:00Z</published>
<summary type="text">Steady spinning of synthetic silk-like fibers and transient filament stretching of semi-dilute and concentrated polymeric fluids
Brauner, Octavia Flora,
            1975-
Thesis: S.M., Massachusetts Institute of Technology, Department of Chemical Engineering, 2001; Includes bibliographical references (p. 111-115).
</summary>
<dc:date>2001-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inosine-containing mRNA induces an innate immune response and is translated with lower efficiency</title>
<link href="https://hdl.handle.net/1721.1/157259" rel="alternate"/>
<author>
<name>Bao, Caroline</name>
</author>
<id>https://hdl.handle.net/1721.1/157259</id>
<updated>2024-10-10T03:01:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inosine-containing mRNA induces an innate immune response and is translated with lower efficiency
Bao, Caroline
Inosine is a nucleoside formed by deamination of adenosine by adenosine deaminases acting on RNA (ADAR). ADAR editing activity is known to play a key role in modulating the host cell’s immune response to RNA. Here, we specifically study the effect of the presence of inosine in RNA by generating an inosine-containing reporter mRNA sequence. We also generated mRNA sequences that contained pseudouridine, an RNA modification known to decrease immune response to in vitro transcribed (IVT) mRNA and elevate the expression of the encoded gene, to examine the interaction between pseudouridine and inosine modifications. &#13;
While A-to-I editing activity is required for endogenous RNA to evade the innate immune response, our results show that inosine-containing IVT RNA induces an elevated immune response and is translated at a lower efficiency. This effect is dominant over pseudouridine modification, such that mRNAs containing both pseudouridine and inosine modifications still potently activate the innate immune response and exhibit a loss of translation. These results point to the potent immunostimulatory effects of inosine in transfected IVT mRNA. This elevated immune response is likely receptor-specific and we have demonstrated that it cannot be attributed to the sensors RIG-I, MDA5, TLR3, or PKR.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Technology Adoption Process in Accounting and Finance Using Systems Thinking Methods</title>
<link href="https://hdl.handle.net/1721.1/157257" rel="alternate"/>
<author>
<name>Chun, Albert Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/157257</id>
<updated>2024-10-10T04:07:24Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Improving Technology Adoption Process in Accounting and Finance Using Systems Thinking Methods
Chun, Albert Y.
In the era of digital transformation, Accounting and Finance (A&amp;F) functions face the challenge of making well-informed decisions about which technologies to adopt, which processes to prioritize, and why. These decisions require stakeholders to carefully evaluate available options, assess their implications and tradeoffs, and align diverse preferences to make well-supported investment choices. Conducting this process in a siloed and unstructured manner can lead to inefficiencies.&#13;
This study explores the application of Systems Thinking (ST) and Systems Engineering (SE) methods, developing an integrated framework that combines Rich Picture, Object-Process Diagram (OPD), Design Structure Matrix (DSM), and Multi-attribute Tradespace Exploration (MATE) to enhance the technology adoption decision-making process within A&amp;F functions. The focus is on Internal Audit (IA) as a case study for a simplified model and demonstration. While empirical data collection and hypothesis testing were not conducted due to data and time constraints, qualitative insights were gathered from industry practitioners.&#13;
Key findings suggest that the integrated framework can potentially reduce the time and effort needed to reach technology adoption decisions. By providing a structured and comprehensive approach, the framework ensures that the decision-making process is more holistic, unbiased, and quantifiable. This can also offer post-implementation benefits, as the technologies adopted align better with the organization’s requirements and preferences, resulting in improved efficiency and effectiveness.&#13;
This study extends the practical application of ST methodologies into A&amp;F. By presenting this integrated framework, it contributes to the foundation for future research on applying ST to improve the technology adoption decision-making in A&amp;F.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Red Teaming Language Conditioned Robotic Behavior</title>
<link href="https://hdl.handle.net/1721.1/157255" rel="alternate"/>
<author>
<name>Abhangi, Nishant</name>
</author>
<id>https://hdl.handle.net/1721.1/157255</id>
<updated>2024-10-10T04:00:05Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Red Teaming Language Conditioned Robotic Behavior
Abhangi, Nishant
Natural language instruction following capabilities are important for robots to follow tasks specified by human commands. Hence, many language-conditioned robots have been trained on a wide variety of datasets with tasks annotated by natural language instructions. However, these datasets are often limited in size, and hence the distribution and nature of the instructions given by real-world users might differ from those in the datasets. This makes it unclear how these robots will perform in real-world environments, so a large-scale evaluation with diverse instructions is needed to benchmark their performance. However, using humans to collect more annotations is prohibitively expensive. We show that recent large language models provide a scalable and inexpensive way to do such an evaluation. Moreover, we observe a large performance drop when robots are evaluated on this larger set of instructions. We also show that we can use different prompts to LLMs to control properties such as the diversity of the generated instructions.
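A sketch of the prompting idea: varying the instruction style requested from the model is one way to control the diversity of generated instructions; generate() below is a hypothetical placeholder for any LLM completion call, and the styles and task are illustrative.

    def diversity_prompt(task, style):
        return (f"Rewrite the robot instruction '{task}' as a {style} user "
                f"might phrase it. Give exactly one rewrite.")

    task = "pick up the red block and place it in the drawer"
    for style in ("terse", "verbose", "indirect", "colloquial"):
        prompt = diversity_prompt(task, style)
        # rewritten = generate(prompt)   # hypothetical LLM call
        print(prompt)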
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Instrumenting Observability in a Decentralized Microservice Architecture</title>
<link href="https://hdl.handle.net/1721.1/157254" rel="alternate"/>
<author>
<name>Liu, Helen X.</name>
</author>
<id>https://hdl.handle.net/1721.1/157254</id>
<updated>2024-10-10T03:01:35Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Instrumenting Observability in a Decentralized Microservice Architecture
Liu, Helen X.
Software systems have increased in complexity over time, and with this increased complexity has come an increased need to keep these systems organized and functioning efficiently. Observability is closely tied to ensuring this correct and effective system function. Without system monitoring, it is difficult to pinpoint when errors occur and correct them at their sources. Monitoring also helps developers understand a system from the outside by allowing them to ask questions about the system’s state and function without needing to know the details of its internal behavior. While there are existing solutions for observability frameworks, these solutions do not target microservice architectures, which are increasingly used for expansive code bases such as those employed in industry environments. They also require extensive configuration to be fully integrated with a pre-existing system. As such, the challenge lies primarily in adapting observability solutions to a decentralized microservice architecture found in an industry setting. The existing solutions also come with advantages and disadvantages for different situations, so they are often incomplete in addressing an entire system’s needs. The integrated system created here satisfies our system’s requirements for a consolidated observability platform while also enabling future customizations, thereby allowing problems to be identified more quickly and proactively.
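The core primitive being integrated can be sketched in a few lines, independent of any particular vendor: a timed, correlated span emitted for each unit of work, which a backend then aggregates across services. The names below are illustrative, not the thesis's implementation.

    import time, uuid
    from contextlib import contextmanager

    @contextmanager
    def span(name, trace_id=None):
        # emit one timed, correlated record per unit of work
        trace_id = trace_id or uuid.uuid4().hex[:8]
        start = time.perf_counter()
        try:
            yield trace_id
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"trace={trace_id} span={name} duration_ms={elapsed_ms:.2f}")

    with span("checkout.request") as tid:
        with span("inventory.lookup", tid):
            time.sleep(0.01)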
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SongGen: Framework for Controllable AI Song Generation through Interactive Songwriting and Artist Emulation</title>
<link href="https://hdl.handle.net/1721.1/157249" rel="alternate"/>
<author>
<name>Arora, Ajay</name>
</author>
<id>https://hdl.handle.net/1721.1/157249</id>
<updated>2024-10-10T03:36:05Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">SongGen: Framework for Controllable AI Song Generation through Interactive Songwriting and Artist Emulation
Arora, Ajay
We propose SongGen, an AI-based song-writing and song co-creation framework. Building upon existing AI tools like Suno.ai, SongGen features a chat interface with a trained AI songwriter assistant, emulating the traditional back-and-forth of human collaboration. The system offers enhanced capabilities for greater control over the songwriting process, including concept ideation, lyric generation and editing, real-time song generation, and granular instrumental specification. Comparative evaluations demonstrate SongGen’s superiority in key metrics such as steerability, expressiveness, personalization, and user satisfaction. We also present an extension of the SongGen framework for artist emulation and on-demand song generation. Future development aims to incorporate voice-based interaction and real-time voice conversion, enabling music artists to guide fans in creating personalized songs.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hofstadter Physics and Composite Fermionic Phase in Moiré Systems</title>
<link href="https://hdl.handle.net/1721.1/157248" rel="alternate"/>
<author>
<name>Ding, Shuhan</name>
</author>
<id>https://hdl.handle.net/1721.1/157248</id>
<updated>2024-10-10T04:07:10Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Hofstadter Physics and Composite Fermionic Phase in Moiré Systems
Ding, Shuhan
This thesis explores the intricate electronic phenomena in Moiré systems, particularly focusing on twisted bilayer transition metal dichalcogenides (TMD). These systems, with their unique superlattice structures and strong electron correlations, provide fertile ground for investigating novel quantum states. A key focus is on understanding Hofstadter physics and the emergence of composite fermion phases in these materials. In this work, we first develop a continuum model to describe the low-energy electronic structure of twisted TMD bilayers, emphasizing the role of the Moiré superlattice in modifying the band structure and introducing non-trivial topological properties. We analyze the resulting Hofstadter spectrum under an external magnetic field, revealing the rich fractal pattern and the impact of valley polarization induced by the magnetic field. Building on this framework, we delve into the concept of composite fermions, particularly in the context of the fractional quantum Hall effect (FQHE). We extend Jain’s composite fermion theory and the Chern-Simons field theory to Moiré TMD systems, proposing the existence of an anomalous composite fermion liquid state at half-filling. Through a detailed mean-field analysis, we demonstrate that this state, characterized by a strong valley polarization and an effective magnetic field arising from Berry curvature, could be energetically favored under certain conditions. Our findings suggest that Moiré TMDs are promising candidates for realizing fractional Chern insulators and other exotic quantum phases, opening up new avenues for experimental exploration and potential applications in quantum technology.
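For readers new to Hofstadter physics, the fractal spectrum is easiest to see in the standard square-lattice Harper model rather than the thesis's TMD continuum model: at rational flux p/q per plaquette, the magnetic unit cell yields a q-by-q Bloch Hamiltonian whose eigenvalues trace out the butterfly as the flux is swept.

    import numpy as np

    def harper_spectrum(p, q, kx=0.0, ky=0.0):
        # q x q Harper Hamiltonian at flux p/q (one gauge choice among several)
        H = np.zeros((q, q), dtype=complex)
        for n in range(q):
            H[n, n] = 2 * np.cos(ky + 2 * np.pi * n * p / q)
            H[n, (n + 1) % q] += np.exp(1j * kx)
            H[(n + 1) % q, n] += np.exp(-1j * kx)
        return np.linalg.eigvalsh(H)

    print(harper_spectrum(1, 3))   # three magnetic subbands at flux 1/3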
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Condensed Buck-Boost Switched Capacitor Converter for&#13;
Efficient Voltage Distribution in Electrified Aircraft</title>
<link href="https://hdl.handle.net/1721.1/157247" rel="alternate"/>
<author>
<name>Aron, Aklilu</name>
</author>
<id>https://hdl.handle.net/1721.1/157247</id>
<updated>2024-10-10T03:30:56Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Condensed Buck-Boost Switched Capacitor Converter for Efficient Voltage Distribution in Electrified Aircraft
Aron, Aklilu
Switched capacitor converters are a category of power electronic converters that harness the significantly higher energy density of capacitors, as opposed to the inductors of their conventional counterparts, to reap benefits in efficiency, size, and utilization. This work presents the analysis, design, construction, and evaluation of one such converter, inspired by the flying capacitor multilevel topology and referred to as a condensed buck-boost converter. This converter is designed and built for an application as the interface between the battery voltage and DC bus on partially electrified aircraft, where the advantages of its ability to step voltage up and down in an efficient and lightweight fashion can be fully realized. To implement the design in hardware for the first time, this work utilizes new monolithic, bidirectional GaN FETs, whose reverse-voltage blocking capability opens new possibilities for a converter design that wastes less power and occupies less board area. This converter is compared with others that perform similar functions to showcase the benefits that this topology has to offer.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cellulose Nanofoams: 3D Printing and Characterization</title>
<link href="https://hdl.handle.net/1721.1/157245" rel="alternate"/>
<author>
<name>Padia, Vineet</name>
</author>
<id>https://hdl.handle.net/1721.1/157245</id>
<updated>2024-10-10T03:44:40Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Cellulose Nanofoams: 3D Printing and Characterization
Padia, Vineet
In recent years, the advancement in cellulosic nanofoams has been considerable. Yet, their customization potential for diverse application requirements has been constrained by reproducibility challenges. Our research, therefore, focused on two primary objectives: enhancing the thermal regulation capabilities and mechanical properties of cellulose nanofibril (CNF) nanofoams, and developing a reproducible methodology for printing customized three-dimensional (3D) structures using direct-ink-write (DIW) technology and molding.

We developed composite nanofoams using TEMPO-modified cellulose nanofibers (TCNF). The resultant composite nanofoams showcased remarkable properties such as ultra-low thermal conductivity, low density, outstanding flexibility, and infrared shielding capabilities.

In a bid to create robust and environmentally friendly nanofoams, we employed a crosslinking process with CaCl2. The crosslinked nanofoams were extraordinarily lightweight yet boasted superior mechanical properties, significantly amplified by the crosslinker. Remarkably, these freeze-dried TCNF/CaCl2 nanofoams maintained their form and demonstrated admirable flexibility, even when subjected to loads exceeding thousands of times their own weight. Furthermore, transient characterization confirmed their excellent thermal insulation capabilities.

In conclusion, our research has pioneered the fabrication of sustainable, high-stability cellulose nanofoams. We have significantly enhanced the thermal management capabilities and mechanical performance of these nanofoams, marking a remarkable advancement in the field. The demonstrated sustainability, biocompatibility, ultra-light weight, high porosity, and deformability of the resultant nanofoams suggest considerable potential for diverse applications, including thermal insulation, shock and vibration damping, as well as tissue engineering.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Machine Connectivity Guidelines for Production Floor</title>
<link href="https://hdl.handle.net/1721.1/157244" rel="alternate"/>
<author>
<name>Sehnawi, Kenan Hayel</name>
</author>
<id>https://hdl.handle.net/1721.1/157244</id>
<updated>2024-10-10T03:01:11Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Development of Machine Connectivity Guidelines for Production Floor
Sehnawi, Kenan Hayel
This thesis introduces and uses a standardized method for assessing machine connectivity at manufacturing facilities and develops a roadmap for an organization looking to implement connectivity at its facilities. As technology rapidly advances and Industry 4.0 takes hold of manufacturing worldwide, it is essential for manufacturing companies to utilize the latest technology to maintain a competitive advantage by optimizing operations, improving productivity, and increasing throughput. In this work, an overview of machine connectivity and its benefits is presented, and technologies and security measures used for connectivity are explored. Upon compilation of this information, a comprehensive rubric was developed with six weighted connectivity criteria, each scored from 0 (no progress) to 4 (fully complete), from which a total connectivity score can be computed. The rubric serves as a guiding tool for gauging a manufacturing facility’s level of maturity with regard to connectivity, and helps identify areas of need both within a facility and within an organization as a whole. The connectivity levels of six different manufacturing facilities were assessed using the rubric. The results were compiled to understand the development of connectivity at different facilities across the organization. The learnings from this analysis are used to develop guidelines as the organization continues its push towards full connectivity across all of its facilities. The next steps in this initiative are to: 1) utilize the developed rubric to assess connectivity at all of its manufacturing facilities, 2) identify facilities in need of the most resources in order to plan and execute connectivity, and 3) encourage collaboration between facilities to expedite the connectivity implementation process.
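
The scoring rule itself is simple to state precisely. As an illustration only, a minimal Python sketch; the criterion names and weights below are invented stand-ins, not the thesis's actual rubric:

    WEIGHTS = {
        "network_infrastructure": 0.25,  # hypothetical criteria and weights
        "machine_interfaces": 0.20,
        "data_collection": 0.20,
        "data_storage": 0.15,
        "security": 0.10,
        "visualization": 0.10,
    }

    def connectivity_score(scores):
        """Weighted total on a 0-4 scale from per-criterion 0-4 scores."""
        assert set(scores) == set(WEIGHTS)
        assert all(s in range(5) for s in scores.values())
        return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

    facility = {"network_infrastructure": 3, "machine_interfaces": 2,
                "data_collection": 4, "data_storage": 1,
                "security": 3, "visualization": 0}
    print(round(connectivity_score(facility), 2))  # 2.4 out of 4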
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cycle Time Reduction for CNC Machining Workcells in High-Mix Low-Volume Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/157243" rel="alternate"/>
<author>
<name>Sun, Brandon Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/157243</id>
<updated>2024-10-10T04:03:44Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Cycle Time Reduction for CNC Machining Workcells in High-Mix Low-Volume Manufacturing
Sun, Brandon Christopher
The demand for the product under investigation exceeds the available manufacturing capacity, with the CNC milling workcell identified as the bottleneck operation. This research, conducted in an active, high-mix, low-volume production environment, focuses on evaluating and implementing improvements to CNC machining parameters to enhance the workcell's capacity. Key areas of investigation include machining speeds and feeds, depth of cut, machine settings, toolpath strategies, stepover percentages, and alternative tooling. The study specifically targeted the initial roughing operation, which uses a feed mill and is the longest milling process. Addressing the challenges of high mix and low volume, the research successfully optimized machining and CNC programming parameters, reducing total machining cycle times by 25% and resulting in a 33% increase in throughput. Additionally, the methodologies and findings from this work have provided a framework for implementing further milling process improvements outside of the roughing operation, demonstrating their applicability to similar production scenarios.
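
The two headline figures are mutually consistent: at a bottleneck, throughput scales inversely with cycle time, so a 25% cycle-time reduction implies roughly a 33% throughput gain. A one-line check:

    reduction = 0.25
    gain = 1.0 / (1.0 - reduction) - 1.0  # throughput is proportional to 1 / cycle_time
    print(f"{gain:.1%}")                  # 33.3%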
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Execution of a Testing Strategy for Omnidirectional Wheels</title>
<link href="https://hdl.handle.net/1721.1/157242" rel="alternate"/>
<author>
<name>Donnellan, Michael J.</name>
</author>
<id>https://hdl.handle.net/1721.1/157242</id>
<updated>2024-10-10T03:10:42Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Development and Execution of a Testing Strategy for Omnidirectional Wheels
Donnellan, Michael J.
Omnidirectional wheels enable robots to achieve holonomic motion; however, this often comes at the cost of increased rolling resistance compared to traditional caster wheels. The rolling resistance in omnidirectional wheels is higher than in many other wheels due to several factors including an irregular tread shape, material compliance, and friction in the bushing-like cross rollers during lateral motion. Testing standards exist for characterizing the rolling resistance, compressive strength, and other attributes of commonly used wheels such as caster wheels. However, there are no comprehensive testing standards or research that broadly characterize the performance of omnidirectional wheels. Here, test methods are described for characterizing the load relaxation, stiffness, and rolling resistance of omnidirectional wheels, and the results from these tests are presented. Test apparatuses for static loading and rolling resistance were created. Test results were analyzed to identify the factors most influential in determining the ultimate compressive strength in static loading and the rolling resistance coefficient across an array of omnidirectional wheels; results indicate that wheel manufacturing methods and materials are the most important factors for these responses.
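
As a point of reference for the rolling-resistance analysis, the coefficient is conventionally defined as the steady towing force divided by the normal load. A minimal sketch with illustrative numbers, not results from this work:

    def rolling_resistance_coefficient(drag_force_n, normal_load_n):
        # C_rr = F_drag / N for a wheel towed at steady speed under normal load N
        return drag_force_n / normal_load_n

    print(rolling_resistance_coefficient(drag_force_n=6.0, normal_load_n=300.0))  # 0.02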
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Power of Perception in Human-AI Interaction: Investigating Psychological Factors and Cognitive Biases that Shape User Belief and Behavior</title>
<link href="https://hdl.handle.net/1721.1/157241" rel="alternate"/>
<author>
<name>Lee, Eunhae</name>
</author>
<id>https://hdl.handle.net/1721.1/157241</id>
<updated>2024-10-10T04:08:46Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Power of Perception in Human-AI Interaction: Investigating Psychological Factors and Cognitive Biases that Shape User Belief and Behavior
Lee, Eunhae
This thesis investigates the psychological factors that influence belief in AI predictions, comparing them to belief in astrology- and personality-based predictions, and examines the "personal validation effect" in the context of AI, particularly with Large Language Models (LLMs). Through two interconnected studies involving 238 participants, the first study explores how cognitive style, paranormal beliefs, AI attitudes, and personality traits impact perceptions of the validity, reliability, usefulness, and personalization of predictions from different sources. The study finds a positive correlation between belief in AI predictions and belief in astrology- and personality-based predictions, highlighting a "rational superstition" phenomenon where belief is more influenced by mental heuristics and intuition than by critical evaluation. Interestingly, cognitive style did not significantly affect belief in predictions, while paranormal beliefs, positive AI attitudes, and conscientiousness played significant roles. The second study reveals that positive predictions are perceived as significantly more valid, personalized, reliable, and useful than negative ones, emphasizing the strong influence of prediction valence on user perceptions. This underscores the need for AI systems to manage user expectations and foster balanced trust. The thesis concludes with a proposal for future research on how belief in AI predictions influences actual user behavior, exploring it through the lens of self-fulfilling prophecy. Overall, this thesis enhances understanding of human-AI interaction and provides insights for developing AI systems across various applications.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Emissions and Costs of Geologic Hydrogen: An Integrated Lifecycle Emissions and Techno-economic Approach</title>
<link href="https://hdl.handle.net/1721.1/157240" rel="alternate"/>
<author>
<name>Blackford, Timothy</name>
</author>
<id>https://hdl.handle.net/1721.1/157240</id>
<updated>2024-10-10T04:10:06Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Quantifying Emissions and Costs of Geologic Hydrogen: An Integrated Lifecycle Emissions and Techno-economic Approach
Blackford, Timothy
In the pursuit of sustainable energy solutions, this thesis explores the lifecycle emissions and economic feasibility of geologic hydrogen production. This research extends Brandt's 2023 study of 'prospective' lifecycle assessment (LCA), enhancing the underlying open-source LCA model used in this work and adding a preliminary techno-economic analysis (TEA). The findings demonstrate that geologic hydrogen developments should have emissions intensities that compare favourably to all other hydrogen production pathways. The value of lifetime emissions intensity for Brandt’s Baseline case is estimated at 0.40 kgCO2e/kgH2, representing an increase of ~6% over Brandt’s estimation. The study also highlights the potential for geologic hydrogen to achieve competitive levelized costs (estimated at $1.45/kg), making it a promising candidate in the hydrogen economy. It finds that to achieve the best possible emissions and economic results, proponents of geologic hydrogen developments should seek to maximise the productivity of each well. It also studies the impact of the United States regime of production tax credits for hydrogen, finding that the fivefold increase in the magnitude of credits for meeting employment conditions is generally more impactful than lowering emissions intensity. The thesis underscores the importance of continued refinement of LCA and TEA models to understand geologic hydrogen resources better and ensure they are developed appropriately.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Shipyard to Sea: A Flexible System Design Approach to the Transition from Shipbuilding to Operations, A Case Study Using the United States Coast Guard Offshore Patrol Cutter Program</title>
<link href="https://hdl.handle.net/1721.1/157239" rel="alternate"/>
<author>
<name>Kime, Jeremy A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157239</id>
<updated>2024-10-10T03:49:53Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">From Shipyard to Sea: A Flexible System Design Approach to the Transition from Shipbuilding to Operations, A Case Study Using the United States Coast Guard Offshore Patrol Cutter Program
Kime, Jeremy A.
The United States Coast Guard faces significant challenges transitioning new ships from shipbuilding to operations. Historically, the low volume and irregular pace of major ship deliveries, combined with diverse homeporting factors, have resulted in anomalous post-delivery requirements. Today, a growing fleet, personnel shortages, and sweeping technological advancements are amplifying the complexity of post-delivery activities. At the same time, the Coast Guard is engaged in its largest shipbuilding effort since World War II, with seven acquisition programs scheduled to deliver 134 new ships over the next 15 years. In light of these factors, the current approach, which places significant strain on crews, escalates costs, and delays operational use of the Coast Guard’s newest assets, warrants thorough examination. This thesis examines the issue through case study analyses using the Offshore Patrol Cutter (OPC) Program. The Coast Guard’s challenges are driven by three primary factors: the inherent uncertainty in ship construction, sociotechnical system dynamics associated with organizational management of pre-commissioning crews, and the ongoing evolution of technology. To address these challenges, this analysis employs an integrated approach, synthesizing principles and techniques from Architecting Innovative Enterprise Strategy (ARIES), Flexible Engineering Design (FED), and System Design and Management (SDM). This systems thinking approach aims to develop opportunities to reduce costs, improve schedules, and optimize workforce outcomes. The analysis recommends a three-phased strategy that could yield cost savings on the order of $400 million over the OPC Program’s lifespan, significantly mitigate risks associated with unforeseen shipbuilding developments, and enhance organizational outcomes regarding workforce, operational availability, and life cycle sustainment. The staffing of pre-commissioning crews is pinpointed as a pivotal discretionary event that triggers an exponential increase in system complexity and a surge in scope by introducing interdependent yet organizationally disparate requirements. Consequently, major personnel activities are decoupled from highly variable ship construction milestones. This paves the way for a paradigm shift from fixed to flexible approaches, replacing fragmented, ad hoc practices with a flexible system architecture capable of continuous enterprise learning and improvement. Dynamic post-delivery activities are reimagined as a continuous business line to professionalize the transition of new ships from shipbuilding to operations.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Intangible Reverberations Following Mergers &amp; Acquisitions</title>
<link href="https://hdl.handle.net/1721.1/157238" rel="alternate"/>
<author>
<name>Warren, Laura N.</name>
</author>
<id>https://hdl.handle.net/1721.1/157238</id>
<updated>2024-10-10T03:05:44Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Intangible Reverberations Following Mergers &amp; Acquisitions
Warren, Laura N.
This study preliminarily investigates how merger and acquisition (M&amp;A) activities affect employees as stakeholders of the company system, specifically in the areas of leadership, communication, company direction, project autonomy, and path for career growth.
Interviews of 14 employees supporting the oil and gas industry were conducted to determine the effect (if any) that M&amp;A activities had on their careers and any similarities in their experiences. This data was evaluated against research completed by Steigenberger &amp; Mirc and Schweizer &amp; Patzelt.
While the hypotheses presented cannot be proven, recommendations for future research are provided to gain and evaluate additional information.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Experimental Evaluation of Underwater Semantic SLAM</title>
<link href="https://hdl.handle.net/1721.1/157234" rel="alternate"/>
<author>
<name>Song, Thomas Jeongho</name>
</author>
<id>https://hdl.handle.net/1721.1/157234</id>
<updated>2024-10-10T03:07:04Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Experimental Evaluation of Underwater Semantic SLAM
Song, Thomas Jeongho
Autonomy is crucial for underwater vehicles due to the challenging and inaccessible nature of underwater environments. These environments pose significant difficulties for human-operated systems because of limited visibility, high pressure, and vast areas that are costly and risky to explore manually. Implementing autonomy in underwater vehicles presents unique challenges due to the marine environment's harsh and complex nature. Underwater communication is severely limited as water absorbs and scatters most electromagnetic signals used in terrestrial communications. This necessitates the use of acoustic communication, which has a lower bandwidth and is prone to delays and signal distortion. Similarly, GPS signals do not penetrate water, complicating navigation and creating dependence on inertial and sonar sensors, whose noisy measurements inevitably drift over time. The unpredictable dynamics of underwater environments, including varying currents, lighting conditions, and obstacles, further complicate autonomous navigation. As such, the traditional mission of the Autonomous Underwater Vehicle (AUV) is data collection along a preplanned course, which defines the limit of current technology. Higher-level missions such as search, surveillance, maintenance, and manipulation require greater situational awareness, decision-making, and navigation abilities, facilitated by processing semantic visual information and applying it to map generation and localization. To address the limited autonomy of current AUVs and enhance their capability for complex missions, this thesis presents the development and evaluation of a real-time, monocular visual-inertial semantic Simultaneous Localization and Mapping (SLAM) system for underwater environments, implemented on the cost-effective BlueROV2 platform. The research aims to enhance AUV autonomy and enable complex underwater missions through improved navigation and semantic mapping capabilities. Key contributions include the integration of a custom-trained object detector for underwater environments, adaptation of a hybrid SLAM algorithm combining Gaussian and Non-Gaussian landmarks for underwater operation, preliminary assessment of the SLAM system’s accuracy using motion capture-based ground truth measurements, and comparative evaluation of the developed semantic SLAM system against state-of-the-art alternatives in an indoor pool experiment using the BlueROV2. This work addresses the challenges of underwater navigation and semantic mapping, offering a potential solution to extend the operational capabilities and mission complexity of affordable AUV platforms.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CAD-Based Geometry Representations for Monte Carlo Fusion Neutronics Methods and CSG vs. DAGMC Performance Tradeoffs in OpenMC</title>
<link href="https://hdl.handle.net/1721.1/157233" rel="alternate"/>
<author>
<name>Du, Katelin</name>
</author>
<id>https://hdl.handle.net/1721.1/157233</id>
<updated>2024-10-10T03:02:13Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">CAD-Based Geometry Representations for Monte Carlo Fusion Neutronics Methods and CSG vs. DAGMC Performance Tradeoffs in OpenMC
Du, Katelin
Fusion reactors utilizing deuterium and tritium fuel produce high-energy 14.1 MeV neutrons, necessitating a thorough understanding of their behavior for effective reactor design. Neutron transport codes play a critical role in determining key parameters such as tritium breeding ratio, neutron wall loading, and heat deposition, vital for assessing operational considerations. Monte Carlo (MC) radiation transport methods have become standard in fusion neutronics due to their ability to handle energy and angular variables continuously. However, manual modeling of complex fusion geometries with traditional constructive solid geometry (CSG) methods remains labor-intensive, prompting the integration of computer-aided design (CAD) models into MC radiation transport. This thesis investigates the integration of CAD-based geometry representations into MC radiation transport, focusing on computational performance implications of the Direct Accelerated Geometry Monte Carlo (DAGMC) approach. This work examines different neutronics model representations, including CSG, Unstructured Mesh (UM), and DAGMC for the practical solutions they can provide for fusion neutronics needs. Tracking algorithms associated with each representation are explored, highlighting UM and DAGMC’s versatility in the way they integrate with CAD-based design processes. Performance comparison between CSG and DAGMC geometries in OpenMC is analyzed by evaluating particle simulation rates and memory usage across four progressively complex fusion-like models. Performance results reflect positively on DAGMC transport, but areas of future work are identified for more comprehensive results. From the lens of computational performance, this study contributes to determining the viability of CAD-based geometry representations for use in fusion-relevant MC radiation transport.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility of Vector Instruction-Set Semantics Using Abstract Monads</title>
<link href="https://hdl.handle.net/1721.1/157232" rel="alternate"/>
<author>
<name>De Belen, Arthur Reiner</name>
</author>
<id>https://hdl.handle.net/1721.1/157232</id>
<updated>2024-10-10T03:53:10Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Feasibility of Vector Instruction-Set Semantics Using Abstract Monads
De Belen, Arthur Reiner
Formalizations of instruction-set semantics help establish formal proofs of correctness of both hardware designed to implement these instruction sets and the software implemented against this specification. One such prior work¹ formalizes a specification of a subset of the RISC-V instruction-set architecture using a general-purpose language, Haskell, using its monad and typeclass support to abstract over effects. Another member of the same instruction-set family is the RISC-V V extension, which specifies instructions for operating on multiple data elements in a single instruction, useful for domains with high levels of data parallelism such as graphics rendering and machine learning. In this work I examine whether the same prior work can be extended to formalize the semantics of the vector extension. I answer this question with a tentative “yes”, backed by a partial specification in Haskell of a small but nontrivial subset of this vector extension, a translation of the same specification into Coq using hs-to-coq², and work towards demonstrating the utility of this specification.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Labeling Schemes for Improving Cilksan Performance</title>
<link href="https://hdl.handle.net/1721.1/157231" rel="alternate"/>
<author>
<name>Holla, Satya</name>
</author>
<id>https://hdl.handle.net/1721.1/157231</id>
<updated>2024-10-10T03:21:55Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Labeling Schemes for Improving Cilksan Performance
Holla, Satya
While race detection algorithms like SP-bags have provably good theoretical properties, they incur large overheads in practice, motivating performance optimization. In this thesis, I propose labeling schemes as a method of circumventing many of the expensive operations in Cilksan, an implementation of the SP-bags algorithm. The proposed labeling schemes give the strands of a parallel program labels during the execution of Cilksan, allowing Cilksan to shortcut the processing of certain memory accesses when a label comparison allows. I describe and prove correctness for two labeling schemes, the procedure labeling scheme and the prefix labeling scheme, implement both in Cilksan, and measure their performance. While the results show that the overhead of maintaining labels is too high in my implementation, the labeling schemes manage to circumvent many of the memory access operations, suggesting the merit of a more performant implementation of the same schemes.
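
To make the shortcut idea concrete, a hypothetical illustration in Python, an assumption for exposition rather than the thesis's actual scheme: strands carry tuple labels, and when one label is a prefix of the other, the two strands are taken to be in series, so the full SP-bags query can be skipped for that access.

    def is_prefix(short, full):
        return short == full[:len(short)]

    def can_shortcut(prev_label, cur_label):
        # series relationship assumed when labels nest; otherwise fall
        # back to the full SP-bags race query
        return is_prefix(prev_label, cur_label)

    print(can_shortcut((1, 2), (1, 2, 5)))  # True: skip the expensive check
    print(can_shortcut((1, 3), (1, 2, 5)))  # False: run the full query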
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty-Inclusive Contrastive Learning for Leveraging Synthetic Images</title>
<link href="https://hdl.handle.net/1721.1/157230" rel="alternate"/>
<author>
<name>Cai, Fiona X.</name>
</author>
<id>https://hdl.handle.net/1721.1/157230</id>
<updated>2024-10-10T03:33:16Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Uncertainty-Inclusive Contrastive Learning for Leveraging Synthetic Images
Cai, Fiona X.
Recent advancements in text-to-image generation models have sparked a growing interest in using synthesized training data to improve few-shot learning performance. Prevailing approaches treat all generated data as uniformly important, neglecting the fact that the quality of generated images varies across different domains, datasets, and methods of generation. Using poor-quality images can hurt learning performance. In this work, we present Uncertainty-Inclusive Contrastive Learning (UniCon), a novel contrastive loss function that incorporates uncertainty weights for synthetic images during training. Extending the framework of supervised contrastive learning, we add a learned hyperparameter that weights the synthetic input images per class, adjusting the influence of synthetic images during the training process. We evaluate the effectiveness of UniCon-learned representations against traditional supervised contrastive learning, both with and without synthetic images. Across three different fine-grained classification datasets, we find that the learned representation space generated by the UniCon loss function leads to significantly improved downstream classification performance in comparison to supervised contrastive learning baselines.
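
A minimal numpy sketch of the idea, assuming L2-normalized embeddings, integer class labels, a boolean mask marking synthetic samples, and a fixed per-class weight vector standing in for the learned hyperparameter; the published UniCon formulation may differ in detail:

    import numpy as np

    def weighted_supcon(z, labels, synthetic, w, tau=0.1):
        """z: (n, d) normalized embeddings; labels: (n,) ints;
        synthetic: (n,) bool mask; w: per-class weights for synthetic positives."""
        sim = z @ z.T / tau
        np.fill_diagonal(sim, -np.inf)                 # exclude self-contrast
        log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
        total = 0.0
        for i in range(len(z)):
            pos = labels == labels[i]
            pos[i] = False                             # positives share the anchor's label
            if pos.sum() == 0:
                continue
            weights = np.where(synthetic[pos], w[labels[i]], 1.0)
            total += -(weights * log_prob[i, pos]).sum() / weights.sum()
        return total / len(z)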
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Remote Sensing-Derived Normalized Difference Vegetation Index to Predict Coastal Protection by Spartina alterniflora</title>
<link href="https://hdl.handle.net/1721.1/157228" rel="alternate"/>
<author>
<name>Garber, Samantha C.</name>
</author>
<id>https://hdl.handle.net/1721.1/157228</id>
<updated>2024-10-10T03:23:20Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Analyzing Remote Sensing-Derived Normalized Difference Vegetation Index to Predict Coastal Protection by Spartina alterniflora
Garber, Samantha C.
Coastal vegetation can provide protection to the coastline through its root structures, which reduce soil erosion, and its stem structures, which dissipate wave energy. The drag a plant induces could be used to quantify the amount of coastal protection that is provided. This study combined field measurements and drone surveys to develop methods for quantifying vegetation drag, focusing on Spartina alterniflora (S. alterniflora), a smooth cordgrass native to the study site: Waquoit Bay National Estuarine Research Reserve. The drag of a single plant is proportional to frontal area. The drag per bed area is proportional to the drag of a single plant and the number of stems per bed area. This study collected plant samples over the growing season to generate allometric relationships between tiller height and individual plant biomass and frontal area, which provides a way to translate remotely-sensed measures of biomass into stem count and frontal area per bed area. The frontal area was measured through digital imaging of individual plants. The elastic modulus of the stem was also measured using an Instron testing machine. For sixteen 1m x 1m test plots, Normalized Difference Vegetation Index (NDVI) extracted from drone multispectral imagery was compared to measured stem count and estimated biomass. The study compared two different years and three time points within a growing season [August 2022; June, August, October 2023]. In addition, at three plots the stem count was manually altered by cutting out 50% and 100% of the plants. This study found that while NDVI can be used to determine the abundance of S. alterniflora, there are several limitations that cause the correlations to be case-specific. Limitations to NDVI-S. alterniflora correlations included: (1) saturation, (2) species inhomogeneity of the area tested, (3) shoot density inhomogeneity of the area tested, and (4) environmental conditions.
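
For reference, NDVI itself is a standard band ratio computed per pixel from red and near-infrared reflectance; a minimal sketch (array names are assumptions):

    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        nir = nir.astype(float)
        red = red.astype(float)
        # ranges over [-1, 1]; dense canopy pushes values toward 1,
        # which is the saturation limitation noted above
        return (nir - red) / (nir + red + eps)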
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heat Pipes for the Thermal Management of High Frequency Transformers in the Navy integrated Power Electronics Building Block</title>
<link href="https://hdl.handle.net/1721.1/157227" rel="alternate"/>
<author>
<name>Hernandez, David</name>
</author>
<id>https://hdl.handle.net/1721.1/157227</id>
<updated>2024-10-10T03:02:08Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Heat Pipes for the Thermal Management of High Frequency Transformers in the Navy integrated Power Electronics Building Block
Hernandez, David
The development of the integrated Power Electronics Building Block (iPEBB) is key to the full electrification of future United States Navy ships. The creation of this modular, universal power converter takes full advantage of modern electronics; however, the high heat generation of these components, 9.6 kW from the MOSFET switches and 624 W from the transformer, makes thermal management crucial to their successful implementation. As a result of additional requirements, indirect liquid cooling using a detached cold plate is being studied; however, preliminary analysis revealed concerns regarding the hot spot temperatures of the transformer using this approach. This thesis explored the feasibility of using heat pipes to supplement the cooling provided by the cold plate to maintain iPEBB transformer core and coil temperatures below 100°C and 155°C respectively. First, experiments and analytical solutions were used to provide accurate estimates for the thermal conductivity values of the 3F36 ferrite and litz wire in the transformer. Then, a standalone thermal model of the transformer was built in StarCCM+ and used to test various cooling solutions, including forced airflow and heat pipe configurations. The proposed design utilized 16 copper-water heat pipes configured to provide alternative paths of heat flow for the regions of the transformer furthest from the cold plate. Shapal HiM Soft Machinable AlN ceramic was utilized to provide high voltage insulation, and electromagnetic simulations were used to estimate the induced losses in the heat pipes as a result of high frequency coil operations. Using a half-iPEBB thermal model, the final configuration, coupled with the cold plate cooled by 22°C deionized water at a flow rate of 0.37 kg/s, achieved a core maximum temperature of 99.7°C, coil maximum of 93.2°C, and MOSFET maximum of 144.6°C, all within their respective limits, while only adding a net weight of 0.29 kg to the iPEBB. The thermal results of this study showcase the effectiveness of heat pipes in the iPEBB and invite further analysis and experimentation to validate the electromagnetic implications of the concept. These results also contribute to the general ongoing study of heat pipe usage near high-frequency electronics.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploiting irregular parallelism to accelerate FPGA routing</title>
<link href="https://hdl.handle.net/1721.1/157224" rel="alternate"/>
<author>
<name>Zhu, Alan Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/157224</id>
<updated>2024-10-10T04:12:16Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Exploiting irregular parallelism to accelerate FPGA routing
Zhu, Alan Y.
In the era of hardware specialization, field-programmable gate arrays (FPGAs) provide a promising platform for computer architects, combining the programmability of software with the speed and performance of hardware. Despite this, compiling hardware programs onto FPGAs can be incredibly time-consuming, making it hard to develop and iterate on complex FPGA programs. Of particular relevance is the routing phase, which takes a circuit’s technology-mapped netlist and routes its signals using the switches and wires present on a given FPGA architecture, often with a target of minimizing critical path delay. This optimization problem is known to be NP-hard, and existing algorithms for approximating it exhibit very little regular parallelism.
This thesis accelerates the routing phase of VTR 8.0, a commonly used, open-source research tool for FPGA CAD flow. We show that despite the lack of regular parallelism, routing still exhibits significant irregular parallelism. This parallelism can be exploited on parallel architectures that provide hardware support for ordered tasks and fine-grained speculation, such as the Swarm architecture. Using Swarm, we exploit the parallelism present at the core of VTR’s algorithm, achieving a 35.9x speedup on a single routing iteration of a large benchmark (cholesky_mc) on 256 cores.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Calculation of Zakat on Financial Assets for American Muslims: A Financial and Jurisprudential Approach</title>
<link href="https://hdl.handle.net/1721.1/157223" rel="alternate"/>
<author>
<name>Arsalan, Naveed</name>
</author>
<id>https://hdl.handle.net/1721.1/157223</id>
<updated>2024-10-10T03:46:14Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Calculation of Zakat on Financial Assets for American Muslims: A Financial and Jurisprudential Approach
Arsalan, Naveed
This thesis presents a comprehensive framework for calculating Zakat on modern financial assets specifically tailored for American Muslims. As one of the five pillars of Islam, Zakat is an obligatory form of charity for those who meet specific wealth criteria. However, applying traditional Zakat principles to contemporary financial instruments poses significant challenges, particularly within the context of the U.S. financial system.

The research addresses these complexities by developing methodologies that consider diverse financial instruments, valuation challenges, tax implications, accessibility issues, and Shariah compliance. The framework covers a wide range of assets, including cash and bank accounts, stocks, mutual funds, bonds, cryptocurrencies, retirement accounts (401(k)s, Traditional and Roth IRAs), Health Savings Accounts (HSAs), employee stock options, precious metals and jewelry, and real estate investments.

Bridging classical Islamic jurisprudence with modern financial realities, this thesis provides detailed calculation methodologies for each asset class, incorporating U.S.-specific considerations such as tax-deferred accounts and capital gains implications. The framework is designed to be adaptable to evolving financial markets and balances various scholarly opinions on contentious issues. To enhance accessibility, both comprehensive and simplified calculation methods are offered, catering to users with different levels of financial literacy.

In conclusion, this thesis makes a significant contribution to Islamic finance by offering a structured, principle-based approach to Zakat calculation that is both Shariah-compliant and applicable in the modern American financial context. It provides a valuable resource for American Muslims striving to fulfill their religious obligations amidst the complexities of the U.S. financial system and lays the groundwork for future research in Islamic finance in Western contexts.
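
To give a flavor of the simplified end of such a framework, a deliberately reduced sketch; the asset names and the gold-based nisab convention here are illustrative assumptions, and the thesis's per-asset-class methodologies are far more detailed:

    GOLD_NISAB_GRAMS = 85.0  # one common nisab convention

    def zakat_due(assets, short_term_liabilities, gold_price_per_gram):
        net = sum(assets.values()) - short_term_liabilities
        nisab = GOLD_NISAB_GRAMS * gold_price_per_gram
        if net >= nisab:
            return 0.025 * net   # customary 2.5% rate
        return 0.0

    print(zakat_due({"cash": 12000.0, "stocks": 30000.0}, 2000.0, 75.0))  # 1000.0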
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Uncertainty Quantification in Deep Learning Models of G-Computation for Outcome Prediction under Dynamic Treatment Regimes</title>
<link href="https://hdl.handle.net/1721.1/157222" rel="alternate"/>
<author>
<name>Deng, Leon</name>
</author>
<id>https://hdl.handle.net/1721.1/157222</id>
<updated>2024-10-10T03:41:00Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Uncertainty Quantification in Deep Learning Models of G-Computation for Outcome Prediction under Dynamic Treatment Regimes
Deng, Leon
G-Net is a neural network framework that implements g-computation, a causal inference method for making counterfactual predictions and estimating treatment effects under dynamic and time-varying treatment regimes. Two G-Net models have been successfully implemented: one that uses recurrent neural networks (RNNs) as its predictors, and one that uses transformer encoders (G-Transformer). However, one limitation of G-Net is that its counterfactual predictive density estimates do not take into account uncertainty about model parameter estimates. These uncertainty estimates are necessary for establishing confidence intervals around the effect estimation, enabling a robust assessment of whether the effects of two treatment options exhibit statistically significant differences. An important area of work is adding support for quantification of model uncertainty for conditional effect estimation. This thesis aims to add uncertainty quantification to both the RNN-based G-Net and the G-Transformer. To achieve this, we use two well-known techniques in uncertainty modeling, namely variational dropout and deep ensembling. We evaluate our methods using two simulated datasets based on mechanistic models. We demonstrate that G-Net and G-Transformer models with uncertainty quantification are better-calibrated and perform better for individual-level clinical decision making than their baseline counterparts.
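
A sketch of the deep-ensembling half of the approach (model objects and method names here are assumptions, not the thesis's code): train several independently initialized predictors, then report the ensemble mean as the point estimate and the spread across members as the uncertainty.

    import numpy as np

    def ensemble_predict(models, x):
        preds = np.stack([m.predict(x) for m in models])  # (M, ...) member outputs
        return preds.mean(axis=0), preds.std(axis=0)      # estimate, uncertainty

Variational dropout is used analogously: dropout is kept active at inference time, and repeated stochastic forward passes play the role of the ensemble members.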
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>HYPERION: A HYdrogen PERmeatION Experiment to Quantify Hydrogen Transport in Fusion-Relevant Molten Salts</title>
<link href="https://hdl.handle.net/1721.1/157221" rel="alternate"/>
<author>
<name>Cota, Jaron F.</name>
</author>
<id>https://hdl.handle.net/1721.1/157221</id>
<updated>2024-10-10T03:01:29Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">HYPERION: A HYdrogen PERmeatION Experiment to Quantify Hydrogen Transport in Fusion-Relevant Molten Salts
Cota, Jaron F.
The measurement of hydrogen transport properties of molten salts like FLiBe is crucial for the development of advanced nuclear technologies like lithium-bearing liquid immersion breeding blankets for fusion reactors. Tritium production and the quantification of its mobility in these materials are necessary for efficient operation of these technologies. A common method of measuring these properties is with hydrogen permeation experiments. Hydrogen permeation experiments involve measuring the flux of hydrogen permeating through a substance; from this flux, transport properties like the diffusivity and solubility of hydrogen in the molten salt can be derived with various models of the experimental setup. This thesis describes the process of fabricating and assembling a HYdrogen PERmeatION (HYPERION) experiment and provides preliminary results on its functionality, along with issues encountered and their troubleshooting. The experiment was also modeled using the code Finite Element Simulation of Tritium In Materials (FESTIM). The models were used to explore the design parameter space of the experiment to determine the experiment’s effectiveness in producing the desired result of accurately calculating the hydrogen transport properties of the molten salt. Through this modeling, the assumptions normally made when performing these experiments were called into question and their validity was quantified, suggesting that previously conducted experiments might have been significantly affected by these assumptions. Using these models could eventually improve the accuracy of measured transport properties for molten salts like FLiBe and other nuclear fusion-relevant molten salts and inform the design of hydrogen permeation experiments moving forward.
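
One classical reduction of such an experiment, noted here as general background rather than as HYPERION's specific model: in the time-lag method, the asymptote of the downstream pressure-rise curve intercepts the time axis at t_lag = L^2 / (6 D), so the diffusivity D follows from the membrane thickness L.

    def diffusivity_from_time_lag(thickness_m, time_lag_s):
        # classical time-lag relation: D = L**2 / (6 * t_lag)
        return thickness_m ** 2 / (6.0 * time_lag_s)

    print(diffusivity_from_time_lag(1e-3, 600.0))  # ~2.8e-10 m^2/s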
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Natural Language Interface for Prescriptive AI Solutions in Enterprise</title>
<link href="https://hdl.handle.net/1721.1/157220" rel="alternate"/>
<author>
<name>Orderique, Piero</name>
</author>
<id>https://hdl.handle.net/1721.1/157220</id>
<updated>2024-10-10T03:58:01Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Natural Language Interface for Prescriptive AI Solutions in Enterprise
Orderique, Piero
Despite advancements in causal inference and prescriptive AI, their adoption in enterprise settings remains hindered, primarily by their complexity and lack of interpretability. This work at the MIT-IBM Watson AI Lab focuses on extending the proof-of-concept agent PrecAIse by designing a domain-adaptable conversational agent equipped with a suite of causal and prescriptive tools. The objective is to make advanced, novel causal inference and prescriptive tools widely accessible through natural language interactions. The presented Natural Language User Interface (NLUI) enables users with limited expertise in machine learning and data science to harness prescriptive analytics in their decision-making processes without requiring intensive compute. We present an agent capable of function calling, maintaining faithful, interactive, and dynamic conversations, and supporting new domains.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geo-UNet: A Geometrically Constrained Neural Framework for Clinical-Grade Lumen Segmentation in Intravascular Ultrasound</title>
<link href="https://hdl.handle.net/1721.1/157219" rel="alternate"/>
<author>
<name>Chen, Yiming</name>
</author>
<id>https://hdl.handle.net/1721.1/157219</id>
<updated>2024-10-10T04:09:05Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Geo-UNet: A Geometrically Constrained Neural Framework for Clinical-Grade Lumen Segmentation in Intravascular Ultrasound
Chen, Yiming
Precisely estimating lumen boundaries in intravascular ultrasound (IVUS) is needed for sizing interventional stents to treat deep vein thrombosis (DVT). Unfortunately, current segmentation networks like the UNet lack the precision required for clinical adoption in IVUS workflows. This arises due to the difficulty of automatically learning accurate lumen contour from limited training data while accounting for the radial geometry of IVUS imaging. We propose the Geo-UNet framework to address these issues via a design informed by the geometry of the lumen contour segmentation task, building anatomical constraints directly into the architecture. We first convert the input data and segmentation targets from Cartesian to polar coordinates. Starting from a convUNet feature extractor, we propose a two-task setup, one for conventional pixel-wise labeling and the other for single boundary lumen-contour localization. We directly combine the two predictions by passing the predicted lumen contour through a new activation (named CDFeLU) to filter out spurious pixel-wise predictions. Our unified loss function carefully balances area-based, distance-based, and contour-based penalties to provide near clinical-grade generalization in unseen patient data. We also introduce a lightweight, inference-time technique to enhance segmentation smoothness. The efficacy of our framework on a venous IVUS dataset is shown against state-of-the-art models. We will make the code repository for this project available soon after approval from industry collaborators.
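
The coordinate conversion at the front of the pipeline is straightforward to sketch; a minimal nearest-neighbour version, assuming a square frame with the transducer at the image center (the actual Geo-UNet preprocessing may sample and interpolate differently):

    import numpy as np

    def to_polar(img, n_theta=360, n_r=256):
        h, w = img.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
        radii = np.linspace(0.0, min(cy, cx), n_r)
        ys = cy + radii[None, :] * np.sin(thetas[:, None])
        xs = cx + radii[None, :] * np.cos(thetas[:, None])
        # one row per angle, one column per radius: (n_theta, n_r)
        return img[np.round(ys).astype(int), np.round(xs).astype(int)]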
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of Machine Learning-Based Methods for Narrowband Blind Adaptive Beamforming</title>
<link href="https://hdl.handle.net/1721.1/157218" rel="alternate"/>
<author>
<name>Shonkwiler, Lara</name>
</author>
<id>https://hdl.handle.net/1721.1/157218</id>
<updated>2024-10-10T03:35:06Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Comparison of Machine Learning-Based Methods for Narrowband Blind Adaptive Beamforming
Shonkwiler, Lara
There are many different approaches to beamforming and interferer cancellation. The earliest methods of beamforming assumed prior knowledge of the receive array geometry and of the incoming signal directions. This information is normally found via array calibration. Blind source separation methods do not require this information and therefore are more robust to array calibration errors. Traditional blind source separation methods generally leverage some intrinsic characteristic of the signal, such as constant envelope properties or second or higher order statistics. Traditional blind source separation methods such as CMA, SOBI, JADE, and FastICA tend to be highly effective at beamforming datasets with moderate to large sample supports, but they do not perform well when they only have access to a limited number of data samples. They also bear the disadvantage that the appropriate algorithm must be selected based on the properties of the expected signal. Machine learning-based methods are of interest because they show promise in low sample support regimes, and because they offer the possibility of a ‘one size fits all’ solution that can adaptively recognize and exploit different signal features. This thesis describes the performance of two machine learning-informed beamforming methods — Classification-Based Transfer Learning (CBTL) [1] and Denoising-Based Transfer Learning (DBTL). CBTL and DBTL are evaluated with respect to each other and with respect to traditional blind beamforming methods across a variety of signal detection environments, and are found to offer superior or equivalent performance in a majority of environments.
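
For context, the constant modulus algorithm (CMA) named above fits in a few lines of numpy; this is a textbook sketch, not the implementation evaluated in the thesis. X holds complex array snapshots of shape (n_antennas, n_samples), and the weights are nudged so the output envelope approaches unity.

    import numpy as np

    def cma(X, mu=1e-3, sweeps=50):
        w = np.zeros(X.shape[0], dtype=complex)
        w[0] = 1.0                              # simple initialization
        for _ in range(sweeps):
            for k in range(X.shape[1]):
                x = X[:, k]
                y = np.vdot(w, x)               # beamformer output w^H x
                err = (np.abs(y) ** 2 - 1.0) * y
                w = w - mu * err.conjugate() * x
        return w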
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Speech Motor Pattern in Minimally Verbal Adults with Autism Spectrum Disorder via Surface Electromyography</title>
<link href="https://hdl.handle.net/1721.1/157217" rel="alternate"/>
<author>
<name>Protyasha, Nishat Fahmida</name>
</author>
<id>https://hdl.handle.net/1721.1/157217</id>
<updated>2024-10-10T03:34:04Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Characterizing Speech Motor Pattern in Minimally Verbal Adults with Autism Spectrum Disorder via Surface Electromyography
Protyasha, Nishat Fahmida
Minimally verbal adults with Autism Spectrum Disorder (mvASD) experience significant speech production challenges linked to impaired motor skills. Despite the prevalence of these speech difficulties, the underlying motor mechanisms remain poorly understood. This thesis investigates the neuromuscular activity associated with speech motor movement in mvASD using surface electromyography (sEMG). By capturing and analyzing sEMG signals with 8 electrodes from key facial muscles during speech production tasks, this study provides insights into the distinct motor patterns exhibited by mvASD individuals compared to neurotypical controls. The sEMG data was collected while 25 participants, including 10 mvASD individuals and 15 neurotypical controls, performed a series of carefully designed speech tasks. Features such as Root Mean Square (RMS) values, Pearson correlation coefficients, and eigenvalues from auto- and cross-correlation matrices were extracted to measure muscle activation and coordination complexity. The results reveal that mvASD individuals exhibit higher RMS values and greater synchronization between sEMG channels, indicating stronger muscle activation and tighter coupling among facial muscles. Furthermore, the analysis of eigenvalues suggests lower complexity in motor coordination among mvASD participants, reflecting fewer degrees of freedom in muscle control. These findings were supported by classification models, which demonstrated that features from diadochokinetic tasks were more effective in distinguishing mvASD from neurotypical individuals.
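
Of the features listed, RMS is the simplest to state precisely; a minimal sketch for windowed multichannel sEMG (the window length is an arbitrary assumption):

    import numpy as np

    def windowed_rms(emg, win=256):
        """emg: (n_channels, n_samples) array; returns (n_channels, n_windows)."""
        n_ch, n_s = emg.shape
        n_win = n_s // win
        x = emg[:, :n_win * win].reshape(n_ch, n_win, win)
        return np.sqrt((x ** 2).mean(axis=-1))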
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Biometric and Biomechanical Sensing for Violin Performance Analysis</title>
<link href="https://hdl.handle.net/1721.1/157216" rel="alternate"/>
<author>
<name>Kydd, Aria</name>
</author>
<id>https://hdl.handle.net/1721.1/157216</id>
<updated>2024-10-10T03:41:18Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Biometric and Biomechanical Sensing for Violin Performance Analysis
Kydd, Aria
Expressive violin performance demands the coordination of multiple physical and physiological processes. Students, especially those engaged in infrequent private lessons, often struggle to manage these demands. Outside of lessons, they lack access to the resources and external feedback that technology has made readily available in other learning settings. In this study, we propose the Expressive Violin Performance Sensing (EVPS) system as a solution to this issue. The EVPS system uses low-cost and accessible electronic sensors to provide objective, quantitative insights into the physical and physiological aspects of a violinist’s performance. Results from experimental trials reveal that the EVPS system provides relatively reliable data on expressive violin performance. While the general measures of physicality did not reveal significant differences between players of distinct skill levels, physiological and specific physical measurements aligned well with predictions. The successful utilization of low-cost sensors in the EVPS system highlights their potential for use in future performance analysis studies, challenging the precedent of relying on expensive, medical-grade systems.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensitivity Analysis of Self-Loosening Behavior for Mesoscale Bolt Assemblies Under Cyclic Lateral Loading</title>
<link href="https://hdl.handle.net/1721.1/157214" rel="alternate"/>
<author>
<name>Martinez, Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/157214</id>
<updated>2024-10-10T04:13:16Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Sensitivity Analysis of Self-Loosening Behavior for Mesoscale Bolt Assemblies Under Cyclic Lateral Loading
Martinez, Alejandro
This study aims to enhance the understanding of self-loosening in mesoscale bolt assemblies, specifically those with characteristic dimensions ranging from 100 to 3,000 micrometers. These bolts pose unique design challenges due to the small difference between their nominal dimensions and manufacturing tolerances. This work discusses the design of new instrumentation to test mesoscale multi-bolt assemblies under various loading conditions, an area previously focused only on larger bolts. A case study was conducted on a mesoscale multi-bolt system that was experiencing self-loosening failures. This system was tested to determine its susceptibility to the self-loosening failure mode. An experimental study was conducted to identify the sensitivities of the system to geometric and loading environment parameters. A set of hypotheses was proposed to facilitate new learnings about the system’s sensitivities to four different parameters. The findings from the experimental study provide valuable insights into how different geometric configurations and types of loading conditions contribute to the performance of mesoscale multi-bolted systems. Through these investigative efforts, the study successfully identified the existence of a critical displacement threshold for self-loosening in mesoscale multi-bolted systems that is sensitive to factors such as clamp length, amplitude of input displacement load, and screw position.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Development of an Accelerated Material Synthesis Platform for Automated Materials Research</title>
<link href="https://hdl.handle.net/1721.1/157213" rel="alternate"/>
<author>
<name>Aissi, Eunice I.</name>
</author>
<id>https://hdl.handle.net/1721.1/157213</id>
<updated>2024-10-10T03:50:38Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Design and Development of an Accelerated Material Synthesis Platform for Automated Materials Research
Aissi, Eunice I.
Materials development is the foundation for innovation in many industries and fields; however, this process is traditionally slow and resource-intensive. Most often, new materials are developed and characterized on the time scale of years, which can limit the pace of scientific and industrial innovation. I address the material synthesis and characterization bottleneck by presenting a framework that I believe is suitable for smaller labs: self-built, low-cost automation. The design philosophy is to de-risk the lab automation process by keeping costs low, failing fast, and leveraging common resources in electronic systems and additive manufacturing. I present an improved version of a low-cost but high-throughput inkjet material printer developed by Siemenn et al. and adapted to operation in the glovebox, hood, and benchtop environments. The tool is capable of depositing gradients of droplets with unique compositions at a rate of up to 1000 materials per minute, is self-built, and costs around $500. I also present a computer-vision-enabled high-throughput material characterization algorithm for stability quantification through color degradation. The synthesis and characterization methods are validated on a methylammonium lead iodide (MAPbI3) and formamidinium lead iodide (FAPbI3) perovskite material system. X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), and hyperspectral imaging measurements show equivalence between high-throughput synthesis and more traditional spin-coating methods. Results obtained through the high-throughput stability characterization method are aligned with stability trends reported in the literature and have an accuracy of 96.9% when compared to ground-truth degradation as measured by a domain expert.
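
As a sketch of the degradation-based stability metric (a simplified stand-in for the calibrated computer-vision pipeline described above): track each droplet's mean color over time and accumulate the frame-to-frame color change, with larger totals indicating lower stability.

    import numpy as np

    def degradation_index(mean_colors):
        """mean_colors: (n_timesteps, 3) per-frame mean RGB of one droplet."""
        steps = np.diff(mean_colors.astype(float), axis=0)
        return np.linalg.norm(steps, axis=1).sum()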
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>FrED Manufacturing - A Study in Affordable Manufacturing to Scale using Desktop Sized Fiber Extrusion Device</title>
<link href="https://hdl.handle.net/1721.1/157212" rel="alternate"/>
<author>
<name>Rosko, Rachael S.</name>
</author>
<id>https://hdl.handle.net/1721.1/157212</id>
<updated>2024-10-10T03:04:21Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">FrED Manufacturing - A Study in Affordable Manufacturing to Scale using Desktop Sized Fiber Extrusion Device
Rosko, Rachael S.
FrED (Fiber Extrusion Device) Factory is a manufacturing facility at MIT that educates its students on fundamental and advanced manufacturing principles. The factory produces multiple FrED devices, which are "desktop fiber extrusion systems that mimic continuous fiber draw process for hands-on learning and/or laboratory experience on data acquisition, control system, and smart manufacturing. It allows learners to perform experiments, vary manufacturing parameters and control system, collect data, and perform analysis." [1] This year’s thesis work builds on the progress from 2023, which aimed to produce a low-cost variant of earlier versions of the FrED. In 2024, the aim for the lab was to implement design refinements, design for manufacturing, design the assembly line, design packaging, develop the supply chain using Tulip, develop educational content, perform user testing, and execute pilot runs. This thesis focuses on design refinements related to the graphical user interface (GUI), the inclusion of threading to improve program speed, and the characterization of performance related to diameter control, as well as advancements in educational content development, user testing, production-level assembly, and pilot runs. The results of this thesis include significant improvements to the FrED device, such as a user-controlled GUI and closed-loop control. Furthermore, key characteristics of the device were quantified, such as the frame rate of the USB camera and motor stability, which aided in understanding how diameter control and modulation can be implemented in future work. At the time of submission, there were complications still not understood about the FrED that limited its potential as an end-user product, including the reliability of the diameter reading from the USB camera, the physics of the hot-glue preform, and motor speed assumptions that did not hold under closed-loop testing (driving the spool speed to 0 to make the diameter larger prevents the camera from reading any further diameter measurements). In terms of pilot runs, user testing, and educational content development, the results were promising: 78.3% of the 23 user-testing respondents at Venture Cafe said they were interested in receiving a FrED and getting access to more learning content, and users made suggestions for future work and implementation. Educational content was developed for mass flow and data acquisition; however, a formal pilot-run session where this content could be tested for feedback was not performed.
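The closed-loop diameter control is only summarized above; a minimal sketch of the idea (gains, names, and units invented) is a PI loop with a floor on spool speed, since, as noted, a stalled spool blinds the camera:

```python
# Sketch: PI control of fiber diameter via spool speed. Faster spooling
# draws the fiber thinner, so a too-thick reading raises the speed.
class DiameterPI:
    def __init__(self, kp=0.8, ki=0.05, target_um=500.0, min_rpm=5.0):
        self.kp, self.ki, self.target = kp, ki, target_um
        self.min_rpm = min_rpm      # floor keeps the camera reading valid
        self.integral = 0.0

    def update(self, measured_um, base_rpm, dt):
        error = measured_um - self.target          # positive: fiber too thick
        self.integral += error * dt
        rpm = base_rpm + self.kp * error + self.ki * self.integral
        return max(rpm, self.min_rpm)              # never stall the spool
```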
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Carbon Capture Efficiency in Natural Gas Combined Cycle Power Plants: Analyzing the Effects of Variable Load Operations</title>
<link href="https://hdl.handle.net/1721.1/157211" rel="alternate"/>
<author>
<name>Knight, Caleb M.</name>
</author>
<id>https://hdl.handle.net/1721.1/157211</id>
<updated>2024-10-10T03:45:22Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Carbon Capture Efficiency in Natural Gas Combined Cycle Power Plants: Analyzing the Effects of Variable Load Operations
Knight, Caleb M.
Natural gas power generation retrofitted with carbon capture technology is poised to play a crucial role in ensuring energy reliability amidst the transition to variable renewable energy resources. While natural gas generation is used primarily for baseload power today, it is expected to transition toward intermittent operation, serving as a load-following resource during periods of low renewable energy availability. It will be critical to understand how start-up, shutdown, and load-following behavior may impact system performance and influence future grid design. &#13;
&#13;
This thesis performs a comprehensive literature review to establish context on various techniques of carbon capture technology. Post-combustion carbon capture, specifically absorption-based technology, remains the preferred candidate for retrofitting natural gas plants due to its technical maturity, scalability, relatively high capture efficiencies, and ease of retrofitting. The literature highlights that absorption-based carbon capture units exhibit degraded performance during non-steady-state operating conditions. Specifically, cold start-ups result in lower capture efficiencies and higher heat rates, although hot start-ups incur significantly less performance reduction. &#13;
&#13;
The literature review findings are integrated into GenX, a grid optimization tool, to evaluate natural gas combined cycle power plants equipped with carbon capture technology. The modified optimization models are run using the ISO New England grid system, and results suggest that incorporating advanced start-up penalties for natural gas plants reduces operational flexibility in an emissions-constrained environment. As capture efficiencies decrease and heat rates increase during start-ups, utilizing natural gas plants becomes more expensive due to the additional emissions and reduced thermal efficiency. Comparing models with different levels of performance degradation during start-up suggests that installing less gas capacity could be optimal, with those units operating at higher capacity factors to mitigate start-up penalties. Under modest emissions constraints, natural gas units may be operated continuously even during periods of renewable energy surplus. Harsher start-up penalties applied to natural gas plants likely increase the incremental value of alternative energy technologies, although natural gas retains a critical role in the energy mix.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cheaper Than A Funeral: Considering Ibogaine’s Psychedelic Journey and Therapeutic Potential</title>
<link href="https://hdl.handle.net/1721.1/157210" rel="alternate"/>
<author>
<name>Daly, Noah</name>
</author>
<id>https://hdl.handle.net/1721.1/157210</id>
<updated>2024-11-14T17:03:35Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Cheaper Than A Funeral: Considering Ibogaine’s Psychedelic Journey and Therapeutic Potential
Daly, Noah
The past decade has seen a surge of interest in psychedelic compounds as therapeutic medicine. Ibogaine, an indole alkaloid extracted exclusively from an endangered family of shrubs native to the Central African nations of Gabon and Cameroon, is a psychedelic currently being studied for its unique therapeutic potential; it is also considered the most extreme of the psychedelic drugs currently known to researchers. For the past fifty years, it has been used to treat severe substance use disorders, particularly those involving highly addictive opioids and stimulants. In the past ten years, American special operations forces veterans have begun to take ibogaine to treat traumatic brain injuries (TBI). Anecdotal evidence has suggested that the permanent, downstream symptoms TBI patients experience after these injuries are effectively managed after a single ibogaine treatment. Advocacy from the special operations veterans community prompted Stanford University researchers to embark on the first-ever U.S.-based clinical trial of ibogaine to treat TBI. The study, published in January 2024, added to decades of evidence of ibogaine’s clinical potential. Yet questions remain about whether ibogaine’s cardiac toxicity can be managed effectively in human patients, as well as about the true therapeutic utility of the prolonged period of dreamlike consciousness ibogaine produces. This thesis examines the cases of three patients, all United States military veterans, undergoing ibogaine therapy, and considers how the biological impacts of ibogaine, as well as their psychedelic experiences, may have saved their lives.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Thinking Approach to Hispanic Engineer’s Involvement in Corporate Diversity Networks</title>
<link href="https://hdl.handle.net/1721.1/157208" rel="alternate"/>
<author>
<name>Chambe, Enoch</name>
</author>
<id>https://hdl.handle.net/1721.1/157208</id>
<updated>2024-10-10T03:51:55Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">A Systems Thinking Approach to Hispanic Engineer’s Involvement in Corporate Diversity Networks
Chambe, Enoch
Affinity networks, also known as Employee Resource Groups (ERGs), are increasingly essential in today’s corporate world, playing a crucial role in fostering diversity, equity, and inclusion within organizations. These groups provide a platform for employees from underrepresented or marginalized communities to connect, share experiences, and find support. ERGs geared toward Hispanic employees are promoted not only as a means to connect with others and gain a sense of belonging but also as avenues toward successful professional development and growth for underrepresented employees. This research explores the perspectives of a group of experienced engineers from various technical backgrounds and industries to understand whether there is a correlation between generational status among Hispanic Americans and the overall perceived benefits of participating in ERGs. The study provides a detailed literature review of relevant existing research on this subject, followed by semi-structured interviews with ten participants and a thematic analysis that organized the data into five themes: diversity considerations for school and job selections, employee perspectives on ERGs, sense of belonging and generational differences, the meaning of inclusiveness, and continued participation. Finally, a research conclusion and a series of recommendations are provided.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Women Nobel Laureates in STEM (2000-2023): Life Stories, Challenges, and How They Achieved Impact for Success</title>
<link href="https://hdl.handle.net/1721.1/157207" rel="alternate"/>
<author>
<name>Wu, Kedi</name>
</author>
<id>https://hdl.handle.net/1721.1/157207</id>
<updated>2024-10-10T03:08:27Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Women Nobel Laureates in STEM (2000-2023): Life Stories, Challenges, and How They Achieved Impact for Success
Wu, Kedi
Science, Technology, Engineering, and Math (STEM) are critical growth engines that develop the economy and society and improve our lives overall. However, women are underrepresented in STEM, which means that half of the world's brain power is left untapped. We know that, in general, women face barriers and challenges that men do not, such as gender bias and stereotypes. However, we know less about the obstacles and challenges women face specifically in STEM, and even less about how to overcome those barriers. This research aims to identify the challenges faced by women in STEM and to gain a practical understanding of what women can do to evolve as leaders. As STEM is extremely broad, this thesis focuses on the 11 female Nobel laureates who won the prize after 2000 in the three STEM-related Nobel categories: physics, chemistry, and medicine or physiology.&#13;
&#13;
First, a comprehensive literature review was conducted to understand existing findings on the barriers faced by women in STEM and the enablers that can increase the likelihood of women's success in STEM. Next, data were collected about the 11 women STEM Nobel laureates, including their biographies, life stories, newspaper reports, and interview transcripts. Thematic analysis was then used to analyze the collected data, from which four themes are identified and presented: 1) Overcome Barriers and Challenges; 2) Qualities of a Good Scientist; 3) Supportive Systems; 4) Impactful, Humanity, Innovative. Finally, the findings are summarized in relation to the research objectives to provide insights for women who want to pursue a STEM career.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanisms and Implementation of Thermo-Optical Annealing in Silica Fiber Sensors for Radiation-Induced Attenuation Mitigation</title>
<link href="https://hdl.handle.net/1721.1/157206" rel="alternate"/>
<author>
<name>Legoupil, Aurelien Y. M.</name>
</author>
<id>https://hdl.handle.net/1721.1/157206</id>
<updated>2024-10-10T03:37:01Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Mechanisms and Implementation of Thermo-Optical Annealing in Silica Fiber Sensors for Radiation-Induced Attenuation Mitigation
Legoupil, Aurelien Y. M.
In the context of quench detection systems for fusion superconducting magnets, temperature sensors based on optical fibers provide an effective solution for rapid, distributed measurement, with low sensitivity to electromagnetic interference. At the cryogenic temperatures and high radiation doses associated with this application, however, optical fibers undergo radiation-induced attenuation (RIA): light-absorbing point defects form within the silica glass structure, reducing the longevity and effectiveness of these sensors. In this work, we investigate the underlying microscopic defects and mechanisms of RIA and assess strategies for mitigation, namely, annealing via heat treatment (thermal annealing) and annealing via light propagation through the fiber (optical annealing, or “photobleaching”). We design a white light absorption spectroscopy setup with in-situ irradiation and optical annealing, working at liquid nitrogen temperature and different post-irradiation warm-up rates. For the pure silica core and F-doped cladding fibers studied, the RIA spectrum obtained is decomposed into known radiation-induced defect absorption bands, highlighting the key role of self-trapped holes in RIA at telecommunication wavelengths. Furthermore, absorption spectroscopy experiments are performed to show that thermal annealing at liquid nitrogen temperature is negligible, validating the transferability of the experimental results obtained at 77 K to 20 K applications. The decomposition of RIA into different defect contributions is supported by cold post-irradiation electron paramagnetic resonance (EPR) spectroscopy of fiber preform fragments, which reveals the presence of two types of paramagnetic centers: self-trapped holes and E'_gamma centers. The post-irradiation transient grating spectroscopy (TGS) technique is adapted to glass samples with continuous cooling at liquid nitrogen temperature and in-situ optical annealing. With this technique, we could observe the changes in thermal and acoustic properties resulting from the evolution of defect populations, with the potential to complement other experimental techniques to better understand RIA build-up and annealing kinetics. To improve the modeling of thermo-optical annealing, we propose future experiments including isothermal annealing tests and a larger exploration of optical annealing parameters. Our RIA build-up and annealing tests can help companies aiming to operate optical fibers under irradiation at cryogenic temperatures optimize their heat treatments to restore fiber transmission and to prevent RIA during operation.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Energy and Area Estimation Plugin for Accelerator Architecture Simulation</title>
<link href="https://hdl.handle.net/1721.1/157205" rel="alternate"/>
<author>
<name>Wu, Wendy</name>
</author>
<id>https://hdl.handle.net/1721.1/157205</id>
<updated>2024-10-10T03:56:36Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">An Energy and Area Estimation Plugin for Accelerator Architecture Simulation
Wu, Wendy
Development of domain-specific hardware accelerators has been an important focus of high-performance computing research in recent years, enabling significant gains in a variety of practical applications. Of particular interest is accelerator design for applications involving sparse data. Such accelerators inherently tend toward a diverse array of architecture designs and often rely on custom simulators for evaluation. In addition to raw performance, energy consumption and chip area are both important considerations for evaluating accelerators. Accelergy is a tool that provides a good general framework for fine-grained energy and area estimation. However, output from simulation tools may not be compatible with Accelergy’s expected input format, which is the case for the custom simulator Accelsim. To address this gap, this work presents a streamlined plugin for processing Accelsim simulator output into Accelergy input, for the purpose of generating accurate and explainable energy consumption and area models for accelerator architectures. We demonstrate the plugin’s flexibility by performing energy and area estimates for two state-of-the-art hardware accelerators, ISOSceles and Trapezoid. Overall, this plugin is easy to use, self-contained, and supports a wide variety of configurable functionalities, making it an excellent general tool for running Accelergy on Accelsim output.
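As a rough sketch of the plugin's core task (field names on the simulator side are invented, and the exact Accelergy schema may vary across versions), simulator event counts map into an Accelergy-style action-counts YAML:

```python
# Sketch: convert simulator event counts into Accelergy-style action counts.
import yaml  # pip install pyyaml

def to_action_counts(sim_stats):
    """sim_stats: dict like {'buffer_reads': 1024, 'mac_ops': 4096}."""
    local = [
        {'name': 'buffer', 'action_counts': [
            {'name': 'read', 'counts': sim_stats['buffer_reads']}]},
        {'name': 'mac', 'action_counts': [
            {'name': 'compute', 'counts': sim_stats['mac_ops']}]},
    ]
    return yaml.safe_dump({'action_counts': {'version': 0.3, 'local': local}})

print(to_action_counts({'buffer_reads': 1024, 'mac_ops': 4096}))
```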
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recovery of Herschel-Bulkley Fluid Parameters from Video via Differentiable Simulations</title>
<link href="https://hdl.handle.net/1721.1/157204" rel="alternate"/>
<author>
<name>Eastman, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/157204</id>
<updated>2024-10-10T03:04:35Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Recovery of Herschel-Bulkley Fluid Parameters from&#13;
Video via Differentiable Simulations
Eastman, John M.
Recreating the physical behavior of fluids from real-world footage remains a significant challenge, particularly for non-Newtonian fluids. This work introduces a novel method that combines neural radiance fields (NeRF), which map 3D scene coordinates to color and density using deep neural networks, with the material point method (MPM), a simulation technique that represents materials as moving points capable of large deformation. Our approach aims to accurately recover physical parameters and achieve high-fidelity 3D reconstructions from single-view videos of fluids, even those with complex rheological behaviors like shear thinning and thickening. In this study, we apply our method to a Herschel-Bulkley fluid, namely ketchup, under two different real-world conditions: a 50 mm column collapse and squeezing from a bottle. By leveraging the differentiable nature of NeRF and the fluid simulation capabilities of MPM, our approach extracts parameters from real-world footage after initially training on approximate geometry derived from virtual models. The actual video footage is then used to estimate initial velocities and retrieve constitutive parameters, including modulus, yield stress, and viscosity. The iterative optimization process, which integrates continuous feedback between the NeRF-MPM simulation and the video data, enables us to extract constitutive parameters from real footage and perform predictive simulations that closely reflect the behavior observed in the training videos. Key results include the retrieved constitutive parameters and reconstructed videos that reproduce the fluid behavior observed in the training video. The results demonstrate that our method can reconstruct the fluid’s flow behavior from limited perspectives, accurately enough to visually reproduce the flow, showcasing its flexibility and robustness. This work not only validates the approach through a series of experiments but also highlights the potential for differentiable rendering and simulation techniques to advance our understanding and simulation of complex material dynamics, particularly in cases where direct measurements are challenging or impossible.
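For reference, the Herschel-Bulkley flow curve underlying the recovered parameters is stress = yield stress + consistency * (shear rate)**n; a plain least-squares fit to synthetic flow-curve data (placeholder, ketchup-like values) shows the three parameters the differentiable pipeline targets:

```python
# Sketch: direct fit of the Herschel-Bulkley law to synthetic rheometry data.
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(rate, tau_y, K, n):
    return tau_y + K * np.power(rate, n)   # stress as a function of shear rate

rates = np.logspace(-1, 2, 20)                         # shear rates, 1/s
stresses = herschel_bulkley(rates, 15.0, 20.0, 0.27)   # placeholder values
params, _ = curve_fit(herschel_bulkley, rates, stresses, p0=[10.0, 10.0, 0.5])
print(params)  # recovers approximately [15.0, 20.0, 0.27]
```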
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Adaptive Parsing to Integrate Dialogue Scripts in Game Development</title>
<link href="https://hdl.handle.net/1721.1/157203" rel="alternate"/>
<author>
<name>Taylor, Temi</name>
</author>
<id>https://hdl.handle.net/1721.1/157203</id>
<updated>2024-10-10T03:32:48Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Using Adaptive Parsing to Integrate Dialogue Scripts in Game&#13;
Development
Taylor, Temi
For people without programming experience, integrating their work into a game's main project is a common bottleneck in video game development. For dialogue writing in particular, existing approaches for moving text into the codebase are either highly tedious or excessively heavyweight for faster-paced projects. Given that writers often initially produce loosely formatted scripts, this thesis describes Game-DAP, an adaptive parsing system that accounts for variation in individual dialogue writing styles. Examinations of pre-existing systems and a survey of developers form the basis for a syntactic model of the information commonly encapsulated by dialogue scripts. This model informs the design of the parsing process used by Game-DAP, which aims to give writers as much flexibility as possible with those assumptions as a baseline. User testing results informed the evaluation of the system, focusing on its accuracy, flexibility, and accessibility from the perspective of various authors. Although this analysis revealed several classes of inputs that Game-DAP struggles to process with full correctness, the more successful cases and instances of positive feedback suggest that a refined approach to this kind of domain-specific parsing could provide great value in the creative writing process of game dialogue.
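Game-DAP's actual grammar is not reproduced here; a toy classifier for loosely formatted dialogue-script lines (patterns invented for illustration) conveys the kind of per-line decisions such a parser makes:

```python
# Sketch: classify dialogue-script lines into speaker turns, stage
# directions, and narration using loose, style-tolerant patterns.
import re

SPEAKER = re.compile(r"^\s*([A-Z][A-Z0-9_ ]{0,30}):\s*(.*)$")
DIRECTION = re.compile(r"^\s*[\[(](.+?)[\])]\s*$")

def parse_line(line):
    m = DIRECTION.match(line)
    if m:
        return ("direction", m.group(1))
    m = SPEAKER.match(line)
    if m:
        return ("say", m.group(1).strip(), m.group(2))
    return ("narration", line.strip())

print(parse_line("GUARD: Halt! Who goes there?"))
print(parse_line("[The gate creaks open]"))
```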
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Optoelectronic Properties of Twisted and Intercalated Niobium Oxide Dihalides</title>
<link href="https://hdl.handle.net/1721.1/157202" rel="alternate"/>
<author>
<name>Luo, Ashley</name>
</author>
<id>https://hdl.handle.net/1721.1/157202</id>
<updated>2024-10-10T03:54:46Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Exploring Optoelectronic Properties of Twisted and&#13;
Intercalated Niobium Oxide Dihalides
Luo, Ashley
2D materials, or layers of one-atom-thick crystalline solids, offer a flexible solution for a variety of applications that require certain characteristics. As a result of modifications in the physical and chemical design of 2D materials, such as stacking, twisting, and ion intercalation, properties such as electrical conductivity, spin diffusion length, thermal conductivity, and mechanical strength gain more degrees of freedom than in their bulk counterparts. Currently, small optical systems comprise passive devices that are rigid in their light-pathing design and require modulators to control light post-fabrication. These systems are confined by the material used to fabricate the device and its associated effective indices, which are determined pre-fabrication by the ultimately desired optical effect. However, 2D materials can exhibit tunable band structures that yield the optimal optical response, even post-fabrication. This thesis discusses the properties of mechanically and chemically manipulated niobium oxydichloride (NbOCl₂) and niobium oxydiiodide (NbOI₂) ultrathin structures that have the potential to be integrated into flexible optical systems.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Elastic Resistive Force Theory &amp; Applications to Uprooting</title>
<link href="https://hdl.handle.net/1721.1/157201" rel="alternate"/>
<author>
<name>Yilmaz, Lale</name>
</author>
<id>https://hdl.handle.net/1721.1/157201</id>
<updated>2024-10-10T03:04:03Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Development of Elastic Resistive Force Theory &amp; Applications to Uprooting
Yilmaz, Lale
Granular intrusion processes such as sand locomotion, uprooting, and digging are widespread. While these phenomena can be accurately modeled via discrete element methods and continuum models, this accuracy comes at a great computational cost, especially for large systems. Granular Resistive Force Theory (RFT) is a reduced-order, rate-independent model that has been shown to successfully capture the motion of rigid intruders in granular media at a reduced computational cost. Because RFT calculates the force experienced by a body from its direction of velocity, it has difficulty handling near-stagnant scenarios, which occur frequently in the uprooting of plants. To overcome this limitation, we introduce elastic RFT (eRFT), which is based on a rate-independent plasticity flow-rule-like criterion, and pair it with deformable intruders. We focus on modeling uprooting processes, which inherently involve flexible intruders and are often dynamically controlled. This allows us to address both previously mentioned shortcomings of RFT (stagnancy and flexible intruders) at once. By combining eRFT with a nonlinear beam theory to represent slender, inextensible roots, we create a fast computational tool. Using MATLAB, we simulate various uprooting scenarios to better understand the anchoring mechanisms of different root geometries, and we validate the eRFT results by comparing them to experimental data. To implement eRFT in ABAQUS, we make use of an existing user subroutine, which allows the study of a broader range of intruder materials and shapes. While the subroutine has its limitations, initial comparisons to computational and experimental results are encouraging.
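As a schematic of RFT's core scaling (resistive stress growing linearly with depth, modulated by segment orientation), a depth-integrated force sum over a discretized root might look like the following; the coefficient and orientation factor are placeholders, not the calibrated eRFT of the thesis:

```python
# Sketch: depth-integrated granular resistance on a discretized root.
import numpy as np

def uproot_resistance(depths, tilts, width, seg_len=0.01, alpha=2.0e5):
    """depths: segment midpoint depths (m); tilts: angles from vertical (rad);
    width: root width (m); alpha: resistance coefficient (N/m^3, an
    order-of-magnitude value from the RFT literature)."""
    orient = 0.5 + 0.5 * np.abs(np.sin(tilts))  # broadside segments resist more
    area = width * seg_len                      # bearing area per segment
    return float(np.sum(alpha * depths * orient * area))

print(uproot_resistance(np.linspace(0.01, 0.10, 10), np.zeros(10), width=0.005))
```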
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of the US Capitol Attack on political views in Argentina, Brazil, and Chile</title>
<link href="https://hdl.handle.net/1721.1/157200" rel="alternate"/>
<author>
<name>Garcia III, George Reuben</name>
</author>
<id>https://hdl.handle.net/1721.1/157200</id>
<updated>2024-10-10T03:49:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Effects of the US Capitol Attack on political views in Argentina, Brazil, and Chile
Garcia III, George Reuben
Is it possible for major political events, such as the U.S. Capitol insurrection on Jan. 6, 2021, to influence political attitudes in other countries? Such events may act as framing devices that lead individuals to think somewhat differently about democracy and populism, primarily by reminding them of domestic shortcomings. Previous literature has found international attitude effects from major events like terrorism or environmental disasters. In this study, I take advantage of the fact that the insurrection took place in the middle of a set of surveys administered to bureaucrats in Argentina, Brazil, and Chile. The events of Jan. 6 thus act as a type of exogenous shock, allowing for an interrupted time series analysis. I find that satisfaction with democracy generally declined across all three countries, but only in Chile did support for democracy and elections fall and populist attitudes rise.
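A minimal interrupted-time-series regression of the kind described, with a level shift and a slope change at the event date, can be sketched as follows (data and variable names are hypothetical):

```python
# Sketch: interrupted time series via OLS with a post-event level shift
# and trend change (synthetic data standing in for the survey outcome).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
t = np.arange(100)                       # survey day index
post = (t >= 50).astype(float)           # 1 after the exogenous shock
y = 5.0 + 0.01 * t - 0.8 * post + rng.normal(0, 0.5, 100)

X = sm.add_constant(np.column_stack([t, post, (t - 50) * post]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # intercept, pre-trend, level shift, post-trend change
```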
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tailoring the angular and spectral reflectance characteristics of color-dynamic films by modifying their photonic texture and topcoat roughness</title>
<link href="https://hdl.handle.net/1721.1/157199" rel="alternate"/>
<author>
<name>Blair, Andrew D.</name>
</author>
<id>https://hdl.handle.net/1721.1/157199</id>
<updated>2024-10-10T03:09:38Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Tailoring the angular and spectral reflectance characteristics of color-dynamic films by modifying their photonic texture and topcoat roughness
Blair, Andrew D.
Controlling nano- and microscale morphology is essential for tailoring the appearance of structurally colored stretchy films. An effective approach for controlling the optical properties of such color-dynamic photonic films, which are manufactured holographically, is demonstrated using two simple control handles: the texture of the photonic structure and the surface roughness of a transmissive topcoat. Texture of the photonic structure affects the spectral signature and angular distribution of reflected light. Surface roughness of the topcoat affects the angular distribution of incident and reflected light. Fourier optics concepts are harnessed for modeling and predicting the optical characteristics of the materials as a function of their photonic texture and topcoat roughness. The model is verified with data obtained by imaging the angular scattering distribution and spectroscopic analysis of four representative combinations of photonic texture and surface coat roughness. The findings presented in this thesis validate the hypothesis that controlling texture of the photonic film and roughness of its topcoat allows for tailoring the visual appearance of structurally colored materials. This approach provides access to a rich design space of different appearances, including strong iridescence, color constancy with collimated light sources at small angles of incidence, pure and muted colors, and specular and highly diffuse reflections.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanics of Three-Dimensional Micro-Architected Interpenetrating Phase Composites</title>
<link href="https://hdl.handle.net/1721.1/157198" rel="alternate"/>
<author>
<name>Chen, Andrew Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/157198</id>
<updated>2024-10-10T03:03:12Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Mechanics of Three-Dimensional Micro-Architected Interpenetrating Phase Composites
Chen, Andrew Y.
The design of modern composite materials, as used in a wide range of engineering applications, is largely derived from a traditional framework based on laminates. While resulting in desirable strength and stiffness properties, the laminate-based structure leads to a high degree of anisotropy and unique failure modalities like interlaminar failure, limiting the performance of these composites under complex loading conditions. Meanwhile, recent work in the field of architected materials has yielded a thorough understanding of geometry-dependent material behavior, enabling the development of highly robust architectures with tunable (an)isotropy. However, such advances have focused primarily on describing the response of lightweight architected geometries composed mostly of air. The effect of adding a load-bearing matrix is not well understood. Here we investigate the effect of geometry and constituent material properties on the mechanics of 3D-architected interpenetrating phase composite (IPC) materials, i.e., two-phase materials consisting of an architected structure surrounded by a matrix. Using computational homogenization, we first predict how resultant coupled stress states in the composite change with the material properties of each individual phase and contextualize the results within traditional stiffness scaling laws. We then demonstrate two robust fabrication pathways for realizing polymer- and carbon-based centimeter-scale architected IPCs with micro-scale features. Using these prototypes, we study the mechanical behavior of the fabricated composites under uniaxial compression, with particular emphasis on the non-linear and failure regimes. We show that, independent of the material system, the presence of a load-bearing matrix distributes the stress in the composite, contributing to a high-strength, globally stretching-dominated failure behavior, regardless of nodal connectivity. Moreover, the development of a 3D, highly tortuous pathway for stress delays or prevents catastrophic failure of the traditionally brittle architecture phase, resulting in energy dissipation performance of the composite that exceeds the sum of its individual constituents. Finally, we demonstrate that the composite stress state can be architected using geometric design of the IPC and introduce an example of tunable mechanical response in an architected composite inspired by traditional auxetic metamaterials. Altogether, this work broadens our established understanding of the link between architecture and mechanical performance by considering the framework of interpenetrating phase composites, creating the foundation for a new class of strong, resilient, and programmable materials with architected stress states.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>State and Dynamics Estimation in an Outdoor Multi-Drone Slung Load System</title>
<link href="https://hdl.handle.net/1721.1/157197" rel="alternate"/>
<author>
<name>Merton, Harvey</name>
</author>
<id>https://hdl.handle.net/1721.1/157197</id>
<updated>2024-10-10T03:01:16Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">State and Dynamics Estimation in an Outdoor Multi-Drone Slung Load System
Merton, Harvey
Over the past decade, aerial drones have been used to address problems in areas such as sensing and measurement, inspection, delivery, security, and defense. Adding a load attached to one or more drones by a flexible cable can significantly enhance the capabilities of these platforms. This work aims to develop a multi-drone platform, built on open-source tools such as PX4 and ROS2, that can be used to lift a general slung load in an outdoor environment. Simulators of varying fidelity, including a pseudo-photorealistic Gazebo simulator, are developed alongside a functional real-world platform for testing load pose estimation methods. A novel cable-based testing apparatus that enables drone translation is used to facilitate stability testing of a quasi-static formation control method for lifting a slung load. This work aims to be the first to use visual feedback to estimate a load’s pose in a multi-drone slung load system operating without external motion capture devices. In simulation, perspective-n-point-based visual estimation achieves position errors of 0.1 m and geodesic-distance attitude errors around 0°; real-world testing shows errors of 0.2 m and 5°, respectively. Applying extended Kalman filter and unscented Kalman filter formulations, simulated position estimates average around an error of 0 m, while the error noise magnitude is only 6% of the cable length, at 0.06 m. Achieving accurate load pose estimates without an inertial measurement unit mounted to the load requires a good cable dynamics model. This work concludes by presenting a novel model for the effect of cables in a drone-slung-load system; a method based on universal differential equations shows promising early results.
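As a simplified illustration of the estimation step (the thesis uses EKF/UKF formulations together with a cable model; this is a plain linear Kalman filter with invented noise values):

```python
# Sketch: constant-velocity Kalman filter smoothing noisy vision fixes
# of the load position (illustrative, not the thesis formulation).
import numpy as np

dt = 0.02                                        # 50 Hz vision updates
F = np.block([[np.eye(3), dt * np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])    # constant-velocity dynamics
H = np.hstack([np.eye(3), np.zeros((3, 3))])     # we observe position only
Q = 1e-4 * np.eye(6)                             # process noise
R = (0.1 ** 2) * np.eye(3)                       # ~0.1 m vision noise

def step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain
    x = x + K @ (z - H @ x)                      # update with vision fix z
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = step(np.zeros(6), np.eye(6), np.array([2.0, 3.0, 1.0]))
```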
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Hurdles to Highways: Overcoming Barriers to Robotics Adoption in Supply Chains</title>
<link href="https://hdl.handle.net/1721.1/157194" rel="alternate"/>
<author>
<name>Hegarty, Bartholemew</name>
</author>
<id>https://hdl.handle.net/1721.1/157194</id>
<updated>2024-10-10T03:34:51Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">From Hurdles to Highways: Overcoming Barriers to Robotics Adoption in Supply Chains
Hegarty, Bartholemew
Macroeconomic events are putting unprecedented pressure on the warehouse industry, among them labor shortages, increased operating costs, and demands for greater customization and higher throughput from these facilities. Focused on these challenges and strategic issues for warehouse applications, this thesis investigates the obstacles to implementing robotic automation in supply chains, viewed through the lens of three common integration methods: the traditional purchase, the lease, and the emerging robotics-as-a-service (RaaS) model. With these methods in scope, the study builds a multi-criteria decision-making (MCDM) framework based on the analytic hierarchy process (AHP) combined with the technique for order of preference by similarity to the ideal solution (TOPSIS). From this framework, the research identifies key decision criteria and their impact on selecting the most suitable integration strategy for automation.&#13;
&#13;
Through a literature review, the study identified the essential criteria for the project design decision, including infrastructure requirements, system capabilities, usability, provider reputation, project duration, and the total cost of ownership. We then gained insight from industry professionals familiar with automation integration through a focused field study, surfacing practical issues and general opinions on the criteria and how well they correspond to practitioners' integration plans. The results highlight notable trade-offs among the decision criteria, emphasizing the need for a more tailored strategy to make automation adoption more efficient.&#13;
&#13;
This thesis provides an effective decision support system to guide the choice of appropriate automation solutions. It helps clarify how decision makers weight different criteria when implementing robotic automation. The research findings offer practical guidance for practitioners navigating the challenging warehouse automation environment, encouraging better-informed and more efficient decision-making.
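As a compact illustration of the TOPSIS step in such a framework (alternatives, criteria, scores, and weights below are invented):

```python
# Sketch: TOPSIS ranking of three integration methods on three criteria.
import numpy as np

# rows: purchase, lease, RaaS; columns: cost, capability, flexibility
M = np.array([[7.0, 9.0, 3.0],
              [5.0, 7.0, 6.0],
              [4.0, 6.0, 9.0]])
w = np.array([0.5, 0.3, 0.2])              # e.g., AHP-derived weights
benefit = np.array([False, True, True])    # cost: lower is better

V = w * M / np.linalg.norm(M, axis=0)      # weighted, vector-normalized
ideal = np.where(benefit, V.max(0), V.min(0))
worst = np.where(benefit, V.min(0), V.max(0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - worst, axis=1)
print(d_neg / (d_pos + d_neg))             # closeness: higher ranks better
```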
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Bayesian Inference of Reaction Networks via Guiding</title>
<link href="https://hdl.handle.net/1721.1/157193" rel="alternate"/>
<author>
<name>Arya, Gaurav</name>
</author>
<id>https://hdl.handle.net/1721.1/157193</id>
<updated>2024-10-10T04:00:21Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Automatic Bayesian Inference of Reaction Networks via Guiding
Arya, Gaurav
Jump process models based on chemical reaction networks are ubiquitous, especially in systems biology modeling. However, performing inference on the latent variables and parameters of such models is challenging, particularly when the observations of the system state are noisy and incomplete. This thesis presents CatalystFitting, a system for inferring the latent variables and parameters of stochastic reaction network models given observational data. CatalystFitting provides primitives for performing changes of measure on jump processes. Building on top of these primitives, CatalystFitting further provides a library of strategies for guiding a jump process to match an observation set. These strategies exploit the form of the underlying symbolic reaction network to automatically produce guides optimized to the particular reaction network structure of interest to the modeler, accelerating otherwise costly Bayesian inference procedures. We present inference results on a bistable switch system and a repressilator system.
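For context, the jump processes in question are commonly simulated with Gillespie's stochastic simulation algorithm; a textbook sketch (not CatalystFitting's API) is:

```python
# Sketch: Gillespie simulation of a chemical reaction network.
import numpy as np

def gillespie(x0, stoich, rates, t_end, seed=1):
    """x0: initial counts; stoich: (n_rxn, n_species) state updates;
    rates(x): propensity vector for state x."""
    rng = np.random.default_rng(seed)
    t, x, path = 0.0, np.array(x0, dtype=float), [(0.0, tuple(x0))]
    while True:
        a = rates(x)
        a0 = a.sum()
        if a0 == 0.0:
            break                              # no reaction can fire
        t += rng.exponential(1.0 / a0)         # time to next jump
        if t > t_end:
            break
        x = x + stoich[rng.choice(len(a), p=a / a0)]
        path.append((t, tuple(int(v) for v in x)))
    return path

# birth-death example: 0 -> X at rate 2.0; X -> 0 at rate 0.1 per molecule
path = gillespie([0], np.array([[1], [-1]]),
                 lambda x: np.array([2.0, 0.1 * x[0]]), t_end=50.0)
```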
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Learning Multimodal Extraction of Reaction Data</title>
<link href="https://hdl.handle.net/1721.1/157191" rel="alternate"/>
<author>
<name>Wang, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/157191</id>
<updated>2024-10-10T03:32:41Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Deep Learning Multimodal Extraction of Reaction Data
Wang, Alex
Automated extraction of structured information from chemistry literature is vital for maintaining up-to-date databases for use in data-driven chemistry. However, comprehensive extractions require reasoning across multiple modalities and the flexibility to generalize across different styles of articles. Our work on OpenChemIE presents a multimodal system that reasons across text, tables, and figures to parse reaction data. In particular, our system is able to infer structures in substrate scope diagrams as well as align reactions with their metadata defined elsewhere. In addition, we explore the chemistry information extraction potential of Vision Language Models (VLM), which allow powerful large language models to leverage visual understanding. Our findings indicate that VLMs still require additional work in order to meet the performance of our bespoke models.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building a Scalable Electrification Infrastructure in Logistics</title>
<link href="https://hdl.handle.net/1721.1/157190" rel="alternate"/>
<author>
<name>Alam, Muhammad Ashhad</name>
</author>
<id>https://hdl.handle.net/1721.1/157190</id>
<updated>2024-10-10T03:46:41Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Building a Scalable Electrification Infrastructure in&#13;
Logistics
Alam, Muhammad Ashhad
The transportation sector in the US contributes about a third of all greenhouse gas emissions, about a quarter of which stems from road freight. A major driver of this environmental footprint is a heavy reliance on trucking, the least fuel-efficient mode of transportation. A key pathway toward freight decarbonization therefore involves shifting from internal combustion engines (ICE) to electric powertrains in truck fleets. This work develops analytics-based solutions to support and assess the electrification of long-haul logistics operations, applying the methods to PepsiCo’s operations in Texas.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Verifying Correctness of the Number Theoretic Transform and Fast Number Theoretic Transform in F⋆</title>
<link href="https://hdl.handle.net/1721.1/157189" rel="alternate"/>
<author>
<name>Ono, Rick R.</name>
</author>
<id>https://hdl.handle.net/1721.1/157189</id>
<updated>2024-10-10T03:15:11Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Verifying Correctness of the Number Theoretic Transform and Fast Number Theoretic Transform in F⋆
Ono, Rick R.
As engineers develop increasingly sophisticated optimizations of cryptographic algorithms, the often simple mathematical specifications become obscured in the optimized implementations, from which a class of correctness bugs arises. Because cryptographic algorithms often secure sensitive information, their correctness, and in turn their security, is a top priority. The Number Theoretic Transform (NTT) is an algorithm that enables efficient polynomial multiplication and has recently gained importance in post-quantum cryptography. This thesis presents a proof of correctness of the NTT in F⋆, a proof-oriented programming language that extracts to OCaml, and shows that we can use the NTT to perform polynomial multiplications. We provide an implementation of the Cooley-Tukey fast NTT algorithm and a proof that it matches the original NTT specification. This thesis also presents a representation of polynomials in the F⋆ subset Low*, which extracts to performant C code.
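For readers unfamiliar with the algorithm, a reference recursive Cooley-Tukey NTT over a toy prime field, mirroring the structure verified in F⋆, can be written in a few lines:

```python
# Sketch: recursive Cooley-Tukey NTT over Z_p (toy parameters).
def ntt(a, p, w):
    """len(a) must be a power of two; w a primitive len(a)-th root of unity mod p."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], p, w * w % p)
    odd = ntt(a[1::2], p, w * w % p)
    out, tw = [0] * n, 1
    for k in range(n // 2):
        t = tw * odd[k] % p
        out[k] = (even[k] + t) % p             # butterfly
        out[k + n // 2] = (even[k] - t) % p
        tw = tw * w % p
    return out

# p = 17, n = 4, w = 4 (4 has order 4 mod 17), evaluating 1 + 2x + 3x^2 + 4x^3
print(ntt([1, 2, 3, 4], 17, 4))  # [10, 7, 15, 6]
```

Multiplying two transformed polynomials pointwise and inverting the transform then yields their cyclic product, which is the property such a correctness proof builds on.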
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Autonomous UAV Navigation using Millimeter Wave Radar</title>
<link href="https://hdl.handle.net/1721.1/157188" rel="alternate"/>
<author>
<name>Herrera, Joshua I.</name>
</author>
<id>https://hdl.handle.net/1721.1/157188</id>
<updated>2024-10-10T03:03:23Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Autonomous UAV Navigation using Millimeter Wave&#13;
Radar
Herrera, Joshua I.
We present the design, implementation, and evaluation of MilliNavigator, an autonomous navigation system for drones capable of mapping, path-planning, self-localizing, and navigating in indoor environments by leveraging strategically placed millimeter wave anchors. Autonomous drones are an increasingly relevant tool for completing and automating hard-to-reach tasks. State-of-the-art navigation systems rely primarily on cameras and GPS for environmental perception and self-localization. These solutions can impose restrictions on existing systems, limiting their navigable environments to well-lit, outdoor, unobstructed paths. This thesis presents MilliNavigator, the first system to use millimeter wave radar and anchor-aware path planning to achieve high-accuracy, 6DOF, online localization. By generating a localization precision score map from known anchor deployments, the system jointly optimizes travel distance and localization performance. We implemented and evaluated MilliNavigator on a drone built with commercial, off-the-shelf parts. We ran over 165 successful missions across 7 different tag deployments. Our system achieved 7.9 cm overall median error and a 90th-percentile error of less than 21 cm.
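The geometric core of anchor-based localization is multilateration; a generic least-squares sketch (invented anchor positions, not MilliNavigator's code) is:

```python
# Sketch: least-squares position fix from ranges to known anchors.
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0, 2.5], [6.0, 0.0, 2.5],
                    [6.0, 8.0, 2.5], [0.0, 8.0, 2.5]])   # known positions (m)
true_pos = np.array([2.0, 3.0, 1.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)      # measured distances

res = least_squares(lambda p: np.linalg.norm(anchors - p, axis=1) - ranges,
                    x0=np.array([3.0, 4.0, 1.5]))
print(res.x)  # converges to approximately [2.0, 3.0, 1.0]
```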
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Practical Exocompilation for Performance Engineers in User-Schedulable Languages</title>
<link href="https://hdl.handle.net/1721.1/157187" rel="alternate"/>
<author>
<name>Qian, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/157187</id>
<updated>2024-10-10T03:26:27Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Practical Exocompilation for Performance Engineers in&#13;
User-Schedulable Languages
Qian, Kevin
High-performance computing libraries provide efficient implementations of common computational kernels. Traditionally, such libraries are written in C or assembly. User-schedulable languages give performance engineers a productive way to optimize these kernels through well-designed interfaces that provide users control over performance-relevant decisions and automate unnecessary concerns. Often, this is a trade-off: too much control with too little automation is tedious to program, and too much automation with too little control hinders obtaining peak performance. The principle of exocompilation advocates for one extreme: giving performance engineers maximal control over code execution so they can maximize performance. Its current implementation in existing systems, however, is impractical to use. This thesis broadly explores ways to make exocompilation a practical solution for performance engineers. We show that providing more control does not necessitate sacrificing automation, as long as the language is designed so that users can build their own automation. We explore the design features needed to enable such a system, demonstrate the types of automation users can build in it, and suggest ways to further extend the amount of control user-schedulable languages expose to the user.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>satdatagen: a Python Library for Satellite Sensor Task Scheduler Support</title>
<link href="https://hdl.handle.net/1721.1/157185" rel="alternate"/>
<author>
<name>Golden, Adina H.</name>
</author>
<id>https://hdl.handle.net/1721.1/157185</id>
<updated>2024-10-10T03:02:44Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">satdatagen: a Python Library for Satellite Sensor Task&#13;
Scheduler Support
Golden, Adina H.
The number of objects in Earth’s orbit is increasing rapidly, raising the urgency of intensified observations of satellites and other resident space objects (RSOs) to manage space traffic and prevent collisions. Current methods for RSO detection and tracking rely on ground-based and space-based observatories with optical or radar sensors, but these telescopes require complex scheduling to achieve surveillance of all objects. Previous works have implemented scheduling algorithms and machine learning models that optimize the assignment of observation tasks to sensors. However, these prior methodologies rely on different datasets, making it hard to compare across methods. This paper presents satdatagen: a software package that generates datasets intended to serve as baseline inputs to satellite sensor task schedulers. The datasets contain information about every satellite that passes in view of the sensor, such as its altitude angle and its brightness. Additionally, actual cloud cover data is included for optical telescopes that need to take visibility into account while scheduling observations. satdatagen is simple to use and does not require extensive outside knowledge from developers of scheduling tools.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Motion Phantom Development for MRI</title>
<link href="https://hdl.handle.net/1721.1/157184" rel="alternate"/>
<author>
<name>Liu, Kerlina</name>
</author>
<id>https://hdl.handle.net/1721.1/157184</id>
<updated>2024-10-10T03:48:25Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Motion Phantom Development for MRI
Liu, Kerlina
The development of magnetic resonance imaging (MRI) has enabled health care professionals to non-invasively visualize subjects' soft tissue for medical diagnosis. Since its conception, artifacts due to patient movement have been an issue, and an assortment of tools and methods has been developed to help mitigate the effect of motion on MRI; such mitigation methods, however, are generally applicable only on a case-by-case basis, depending on the specific type of motion. As such, additional research is required to develop novel methods and a standardized way of testing, validating, and ultimately comparing mitigation strategies.&#13;
&#13;
This work provides the design of a motion stage, as well as build instructions, for the Martinos head phantom, which moves in four degrees of freedom (linear translation in the plane parallel to the floor, a head-shaking "no" motion, and a head-nodding "yes" motion) independently of one another, with limited success. Only the translation along the z-axis (into and out of the bore) worked as expected, while the translation perpendicular to it (along the x-axis) did not. The total range of motion the head phantom was capable of in the head-shaking "no" motion was approximately 19 degrees, though the required torque is on the higher end (on the order of 0.06 N*m) and the position of the rotational actuator needs some reexamination. The head-nodding "yes" mechanism is more promising, allowing a tilt of 1 degree downwards and 2 degrees upwards, but requires actuators capable of exerting 6 N of force or more.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Streamoscope: A Low-Cost, Open-Source, USB-3-Capable Streaming Data Acquisition System for Low-Field MRI</title>
<link href="https://hdl.handle.net/1721.1/157183" rel="alternate"/>
<author>
<name>Feld, Joseph W.</name>
</author>
<id>https://hdl.handle.net/1721.1/157183</id>
<updated>2024-10-10T03:45:14Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Streamoscope: A Low-Cost, Open-Source, USB-3-Capable Streaming Data Acquisition System for Low-Field MRI
Feld, Joseph W.
Magnetic Resonance Imaging (MRI) is a powerful, safe imaging technique based on using magnetism to provide contrast between soft tissues. Portable, low-field MRI is a growing area that has already demonstrated value in both educational and clinical domains. Low-field MRI systems need to acquire data with sample rates in the tens of megahertz, which can make the data acquisition system the bulk of the overall cost of low-cost systems. This work presents the Streamoscope: an open-source data acquisition system designed for low-field MRI that streams two 14-bit resolution channels at 60 megasamples per second over USB-3 into Python. It is approximately $300 in parts, about a quarter of the price of the cheapest data acquisition system on the market that would work in our case study. The Streamoscope can stream full-sample-rate raw MRI data into a computer to be processed in Python, enabling real time imaging. The system has been validated by generating 2D images of a phantom on a system with an 8 MHz Larmor frequency.
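A back-of-envelope check shows why USB 3 suffices for the stated stream (assuming 14-bit samples packed into 16-bit words; the actual wire format may differ):

```python
# Sketch: data-rate sanity check for two 14-bit channels at 60 MS/s.
channels, rate_sps, bits_per_sample = 2, 60e6, 16   # padded to 16 bits
payload_bps = channels * rate_sps * bits_per_sample
print(payload_bps / 1e9)   # 1.92 Gb/s of sample data
print(payload_bps / 5e9)   # about 38% of USB 3's 5 Gb/s raw signaling rate
```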
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Health Divide: Achieving Equitable Healthcare Access in Kenya through Artificial Intelligence</title>
<link href="https://hdl.handle.net/1721.1/157182" rel="alternate"/>
<author>
<name>Nyakiongora, Geoffrey Mosoti</name>
</author>
<id>https://hdl.handle.net/1721.1/157182</id>
<updated>2024-10-10T03:00:52Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Bridging the Health Divide: Achieving Equitable Healthcare Access in Kenya through Artificial Intelligence
Nyakiongora, Geoffrey Mosoti
This research explores the innovative application of Artificial Intelligence (AI), particularly Generative Pre-trained Transformer (GPT) models, in designing culturally sensitive hospitals for rural Kenya. The research addresses the critical need for improved healthcare infrastructure in underserved areas, focusing on the potential of AI to create efficient, adaptable, and contextually appropriate hospital designs. The study employs a mixed-methods approach, combining qualitative analysis of cultural practices and healthcare needs with quantitative data on environmental factors and health statistics. A GPT model is developed and fine-tuned on a comprehensive dataset of Kenyan cultural information, healthcare data, and architectural knowledge. This AI model is then used to generate hospital design concepts that are evaluated against newly developed cultural sensitivity metrics. Key findings demonstrate the potential of AI to significantly reduce design time, improve space utilization, and enhance cultural appropriateness in hospital designs. The thesis also highlights the importance of human-AI collaboration, with local experts and community representatives playing crucial roles in refining and implementing AI-generated concepts. Challenges identified include data quality and availability in rural settings, the need for ongoing model refinement, and the importance of establishing ethical guidelines for AI use in healthcare design. The thesis concludes with a set of recommendations for implementing AI-driven, culturally sensitive hospital design processes in rural Kenya, including the development of specialized AI models and the establishment of collaborative design methodologies. These findings have significant implications for improving healthcare infrastructure in resource-constrained settings and offer a model for culturally sensitive, AI-driven architectural design in developing contexts globally.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Shape of Kubler: George A. Kubler in Peru, 1948-49</title>
<link href="https://hdl.handle.net/1721.1/157181" rel="alternate"/>
<author>
<name>Schweig, Johann</name>
</author>
<id>https://hdl.handle.net/1721.1/157181</id>
<updated>2024-10-10T03:10:50Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Shape of Kubler: George A. Kubler in Peru, 1948-49
Schweig, Johann
Yale art history professor George Kubler’s seminal 1962 publication The Shape of Time is, in his own words, representative of a “crossroads between the history and anthropology of art.” This work does not stand alone but is part of a larger corpus of study in which Kubler turned to disciplines, methods, and tools outside of what is traditionally considered art historical, including anthropology, architectural representation, and biology, in order to generate new readings and understandings of the history of South and Central American art. This thesis examines a year of Kubler’s life, 1948-49, spent in Peru conducting archival research and field work on culture change with the Institute for Social Anthropology at the Smithsonian Institution and teaching a seminar on the use of archival sources in ethnology at Universidad Nacional Mayor de San Marcos in Lima; during this time, Kubler also engaged in the construction of an archive of his own. Drawing from correspondence and other records of the period, a series of lost episodes resurface, providing a reconstruction of various strata of 1940s Peruvian society: an increasingly cosmopolitan Lima stands in stark contrast to the underdeveloped, feudal Andean world, evidencing its colonial underpinnings. I contend that witnessing the coexistence of various temporalities within a single geographic territory had a significant impact on Kubler’s later theories on spatialized historical time.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beans to Bytes: Grey-Box Nonlinear System Identification Using Hybrid Physics-Neural Network Models</title>
<link href="https://hdl.handle.net/1721.1/157179" rel="alternate"/>
<author>
<name>Pronk, Morgen</name>
</author>
<id>https://hdl.handle.net/1721.1/157179</id>
<updated>2024-10-10T03:51:04Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Beans to Bytes: Grey-Box Nonlinear System&#13;
Identification Using Hybrid Physics-Neural Network&#13;
Models
Pronk, Morgen
The advancement of neural networks in the last several years has yielded some astonishing results. However, their applicability to system identification and the modelling of dynamical systems still has a great amount of room for exploration. This thesis reviews different neural network architectures and their application to complex nonlinear dynamic system identification. In particular, it uses the intricate process of coffee roasting as a case study to explore and demonstrate these techniques. Coffee roasting is a complex process that requires precise control to achieve the desired coffee quality. The ability to develop models that represent a system, i.e. system identification, is of great value to industry. Coffee roasting poses several challenges for system identification, from complex chemical reactions occurring inside the bean to temperature trajectories that depend on several states that cannot be explicitly measured, such as moisture content or reaction rate, making it an ideal candidate for exploring the application and limitations of neural networks. The primary contributions of this study are a proposed "grey-box" model that augments previously established physics-based models, as well as an illustration of the limits of LSTM and Deep NARX models using "one-step" forward prediction techniques. Although the study focuses explicitly on coffee roasting, the conclusions drawn are applicable to other similarly complex industrial and manufacturing processes.
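To make the grey-box idea concrete, the following minimal sketch (assuming PyTorch, and Newtonian cooling as the known physics term, a common first-order approximation) augments a physical model with a small neural network that learns the residual dynamics; the constants, network shape, and synthetic data are illustrative, not the thesis's model:

    # Grey-box bean-temperature model: physics term + learned residual (PyTorch).
    import torch
    import torch.nn as nn

    class GreyBoxRoaster(nn.Module):
        def __init__(self):
            super().__init__()
            # Trainable heat-transfer coefficient for the physics part.
            self.k = nn.Parameter(torch.tensor(0.05))
            # Small NN capturing unmodeled effects (moisture, reactions, ...).
            self.residual = nn.Sequential(
                nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))

        def forward(self, T_bean, T_drum, dt=1.0):
            physics = self.k * (T_drum - T_bean)           # Newtonian cooling
            x = torch.stack([T_bean, T_drum], dim=-1)
            correction = self.residual(x).squeeze(-1)      # learned residual
            return T_bean + dt * (physics + correction)    # one-step prediction

    model = GreyBoxRoaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # One-step training: predict T[t+1] from measured (T_bean[t], T_drum[t]).
    T_bean = torch.linspace(20.0, 200.0, 100)   # synthetic roast trajectory
    T_drum = torch.full((100,), 220.0)
    pred = model(T_bean[:-1], T_drum[:-1])
    loss = nn.functional.mse_loss(pred, T_bean[1:])
    loss.backward()
    opt.step()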
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Agent Reinforcement Learning for Autonomous Robotics</title>
<link href="https://hdl.handle.net/1721.1/157178" rel="alternate"/>
<author>
<name>Vincent, Caroline R.</name>
</author>
<id>https://hdl.handle.net/1721.1/157178</id>
<updated>2024-10-10T03:42:24Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Multi-Agent Reinforcement Learning for Autonomous Robotics
Vincent, Caroline R.
Technological advancements in autonomous robotics, including autonomous vehicles, have created new opportunities for innovative solutions to many everyday challenges. The impact of integrating robotic agents into real-world applications may be significantly enhanced by leveraging advancements in multi-agent autonomous systems. However, the coordination required in multi-agent systems demands complex motion planning to deconflict actions and prevent collisions of vehicles moving at increasingly high speeds. This thesis explores the application of multi-agent reinforcement learning (MARL) to autonomous robotics by teaching a central controller to navigate multiple agents across various environments without collisions. The simulated scenarios range from simple, obstacle-free environments to complex environments with obstacles configured to form narrow passageways or represent other complexities in dense urban environments. The findings demonstrate the potential of MARL to achieve high accuracy in navigating these different environments, highlighting the method's flexibility and adaptability across diverse settings and the resulting implications for applying MARL to real-world scenarios.
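As a toy illustration of the centralized-controller setup, the sketch below trains a single tabular Q-learner over the joint state of two agents on a small one-dimensional track, with a reward that penalizes collisions; the track size, reward values, and learning constants are arbitrary assumptions for illustration, far simpler than the thesis's environments:

    # Toy centralized MARL: one Q-table over the joint state of two agents.
    import itertools, random

    N = 5                                      # track cells 0..4
    GOALS = (4, 0)                             # agent 0 heads right, agent 1 left
    MOVES = [-1, 0, 1]
    ACTIONS = list(itertools.product(MOVES, MOVES))   # 9 joint actions
    Q = {}

    def step(state, action):
        nxt = tuple(min(N - 1, max(0, s + a)) for s, a in zip(state, action))
        reward = -1.0                          # time penalty
        if nxt[0] == nxt[1]:
            reward -= 5.0                      # collision penalty
        if nxt == GOALS:
            reward += 10.0
        return nxt, reward

    for episode in range(2000):
        state = (0, N - 1)
        for t in range(20):
            if random.random() > 0.9:          # 10% exploration
                a_idx = random.randrange(len(ACTIONS))
            else:
                a_idx = max(range(len(ACTIONS)),
                            key=lambda i: Q.get((state, i), 0.0))
            nxt, r = step(state, ACTIONS[a_idx])
            best_next = max(Q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            old = Q.get((state, a_idx), 0.0)
            Q[(state, a_idx)] = old + 0.1 * (r + 0.95 * best_next - old)
            state = nxt
            if state == GOALS:
                break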
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Study on Deploying Large Language Models as Agents</title>
<link href="https://hdl.handle.net/1721.1/157177" rel="alternate"/>
<author>
<name>Cao, Jiannan</name>
</author>
<id>https://hdl.handle.net/1721.1/157177</id>
<updated>2024-10-10T03:04:37Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">A Study on Deploying Large Language Models as Agents
Cao, Jiannan
This thesis investigates the deployment and utilization of Large Language Models (LLMs) as agents, exploring their potential in automating workflows and enhancing user interactions. The study begins with an in-depth analysis of language models, tracing their evolution from pure statistical models to advanced neural network architectures like Transformers and their bidirectional variants. It then delves into the operational framework of LLM agents, detailing user interactions, environmental considerations, memory management, task planning, and tool use. The study addresses critical limitations in LLM inputs, such as the context window, and introduces Retrieval-Augmented Generation (RAG) as a solution to extend the model’s capability. Key APIs provided by OpenAI for deploying GPT models are discussed, highlighting their functionalities and applications. Finally, the practical application of LLMs in creating Robotic Process Automation (RPA) workflows is demonstrated through a divide-and-conquer methodology, showcasing the efficiency, scalability, flexibility, and accuracy of this approach. This comprehensive study underscores the transformative impact of LLMs in automating complex processes and enhancing user experiences through intelligent agent deployment.
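A minimal sketch of the RAG pattern mentioned above: documents are embedded, the top-k most similar ones are retrieved by cosine similarity, and the retrieved text is prepended to the prompt. The embedding function here is a stand-in so the sketch runs without external services; a real deployment would use a learned embedding model rather than this toy hashing scheme:

    # Toy Retrieval-Augmented Generation: retrieve top-k documents by cosine
    # similarity and prepend them to the model prompt.
    import numpy as np

    def embed(text, dim=64):
        v = np.zeros(dim)
        for token in text.lower().split():
            v[hash(token) % dim] += 1.0        # hashing trick, toy only
        return v / (np.linalg.norm(v) + 1e-9)

    docs = [
        "The context window limits how many tokens the model can attend to.",
        "RPA workflows automate repetitive interactions with software tools.",
        "Retrieval augments the prompt with relevant external knowledge.",
    ]
    doc_vecs = np.stack([embed(d) for d in docs])

    def retrieve(query, k=2):
        sims = doc_vecs @ embed(query)         # cosine similarity (unit vectors)
        top = np.argsort(sims)[::-1][:k]       # indices of the k best matches
        return [docs[i] for i in top]

    query = "Why do long inputs fail, and how does retrieval help?"
    prompt = "Context:\n" + "\n".join(retrieve(query)) + "\n\nQuestion: " + query
    print(prompt)   # this augmented prompt would be sent to the LLM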
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cross-Shelf Exchange Driven by Dense Flow Down a Canyon</title>
<link href="https://hdl.handle.net/1721.1/157175" rel="alternate"/>
<author>
<name>Mier, Christian M.</name>
</author>
<id>https://hdl.handle.net/1721.1/157175</id>
<updated>2024-10-10T03:52:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Cross-Shelf Exchange Driven by Dense Flow Down a Canyon
Mier, Christian M.
Laboratory experiments investigated the dynamics controlling the cross-shelf exchange in a prograde sloping canyon induced by dense shelf water descending into the canyon. This thesis is motivated by the dispersal of dense water generated by polynyas on the Arctic and Antarctic continental shelves. Laboratory results corroborate prior numerical results suggesting that canyons are hotspots of cross-shelf exchange. When the dense water descends a canyon, it induces an onshore return flow of offshore water into the canyon. This return flow is initially driven by the dense water eddies descending the canyon and acting like a bucket brigade. At later times, another mechanism may also be at play, in which large dense cyclonic (anticlockwise) eddies on the northern continental shelf pull more dense water out of the canyon, producing a region of low pressure near the canyon head that induces an increase in ambient flow into the canyon. The Burger number (Rossby radius of deformation/canyon width) and the dense water source location with respect to the canyon head affect the offshore ambient water velocity up the canyon. Additionally, as the offshore water reaches the canyon head, the offshore water volume flux becomes larger than the dense water volume flux, possibly due to the low pressure region described above. Understanding these dynamics in the Antarctic region is of global significance for two main reasons: 1. The offshore flowing dense water forms Antarctic Bottom Water and thus affects the global meridional circulation; 2. The onshore heat transport induced by the return flow drives glacial ice melt and therefore contributes to sea level rise.
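For reference, the Burger number quoted above is the ratio of the Rossby radius of deformation L_R to the canyon width W; under a common reduced-gravity scaling (the experiments may use a different convention),

    Bu = \frac{L_R}{W}, \qquad L_R = \frac{\sqrt{g' H}}{f},

where g' is the reduced gravity of the dense layer, H its thickness, and f the Coriolis parameter.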
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding tumor cell plasticity in spatial transcriptomics with graph attention networks and walk-based pseudotime analysis</title>
<link href="https://hdl.handle.net/1721.1/157173" rel="alternate"/>
<author>
<name>Zamora, Izabella</name>
</author>
<id>https://hdl.handle.net/1721.1/157173</id>
<updated>2024-10-10T03:52:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Understanding tumor cell plasticity in spatial transcriptomics with graph attention networks and walk-based pseudotime analysis
Zamora, Izabella
Tumor cell plasticity in cancer is a key driver in tumor progression, heterogeneity, metastasis, and treatment resistance. Tumor cells change states from the conventionally easier to treat epithelial state to the more resistant mesenchymal state. Understanding the transition dynamics of these states and the extrinsic factors influencing them is crucial for improving therapeutic strategies and patient outcomes. Utilizing spatial transcriptomics, the extrinsic driving factors of plasticity can be probed. We introduce PlastiNet, which uses a graph attention-based network to create a spatially aware embedding. The utility of our approach is validated in model systems, specifically in the brain and colon, where it successfully identifies biologically relevant neighborhoods and maps differentiation pathways. When applied to pancreatic ductal adenocarcinoma (PDAC), PlastiNet reveals distinct, conserved neighborhoods within the tissue, including diverse immune and cancer clusters. By estimating a differentiation path from epithelial to mesenchymal-like cells, we can identify intermediate states despite a limited set of tumor marker genes. This cellular differentiation path shows enrichment and depletion of certain cell types within local neighborhoods, aligning with known correlations, and by leveraging inferred ligand-receptor interactions, we can pinpoint potential drivers of plasticity to test in vitro. PlastiNet effectively generates hypotheses directly from patient-derived spatial transcriptomics samples, offering insights into the cellular mechanisms driving tumor plasticity.
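To illustrate the graph-attention mechanism at the core of PlastiNet-style models, the sketch below computes the standard GAT attention coefficients over a cell's spatial neighbors (the Velickovic et al. formulation); the feature sizes and random inputs are placeholders, not the thesis's actual architecture:

    # Single graph-attention head over a cell's spatial neighborhood (NumPy).
    # e_ij = LeakyReLU(a . [W h_i, W h_j]); alpha = softmax over neighbors.
    import numpy as np

    rng = np.random.default_rng(0)
    F_in, F_out, n_nbrs = 8, 4, 5
    W = rng.normal(size=(F_out, F_in))          # shared linear transform
    a = rng.normal(size=2 * F_out)              # attention vector

    h_i = rng.normal(size=F_in)                 # center cell expression features
    h_nbrs = rng.normal(size=(n_nbrs, F_in))    # spatial neighbors

    def leaky_relu(x, slope=0.2):
        return np.where(x > 0, x, slope * x)

    Wh_i = W @ h_i
    scores = np.array([
        leaky_relu(a @ np.concatenate([Wh_i, W @ h_j])) for h_j in h_nbrs])
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                 # attention over neighbors
    h_out = (alpha[:, None] * (h_nbrs @ W.T)).sum(axis=0)  # aggregated embedding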
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Origins of the East Greenland Coastal Current on the Northeast Greenland Shelf: a Comparison of Two Reanalysis Products</title>
<link href="https://hdl.handle.net/1721.1/157172" rel="alternate"/>
<author>
<name>Vianco, Sara L.</name>
</author>
<id>https://hdl.handle.net/1721.1/157172</id>
<updated>2024-10-10T03:06:01Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Origins of the East Greenland Coastal Current on the Northeast Greenland Shelf: a Comparison of Two Reanalysis Products
Vianco, Sara L.
The East Greenland Coastal Current (EGCC) carries some of the freshest outflow from the Arctic southward along the East Greenland Shelf and into the Nordic Seas and subpolar North Atlantic. How this fresh water initially flows onto the Northeast Greenland Shelf (NEGS) and feeds the EGCC is not well known due in part to the lack of observations in the region. In this thesis, I use two ocean reanalyses, the Regional Arctic Ocean/sea-ice Reanalysis (RARE) and Global Ocean Physics Reanalysis (GLORYS) to explore the structure and dynamics of the ocean circulation on the NEGS. To validate the use of these products in the region, I compare the reanalysis products to the Fram Strait Arctic Outflow Observatory for the period of 2003-2019. In the mean, RARE is too warm and salty compared to the moorings, while the properties in GLORYS track more closely to the observations. However, the observed velocity field is better represented in RARE than GLORYS. From there, I analyze the cross-shelfbreak flow from 74°N to 81.5°N in the two reanalysis products, and conclude that transport onto the NEGS of waters fresher than 34 salinity is driven by an Ekman circulation that arises from along-shelfbreak winds and a widening shelf south of 81.5°N. The enhanced transport of fresh water also shifts the isohalines across the shelfbreak, directing a geostrophic flow onshelf between 81°N and 79°N. The convergence of fresh water on the NEGS initiates the EGCC as an identifiable and distinct feature around 80°N in RARE, uniting the EGCC along the southwest coast of Greenland and its northern counterpart, the Polar Surface Water (PSW) Jet. In GLORYS, the EGCC is not present throughout the domain, though there is a weak net southward flow on the NEGS. The EGCC in RARE is primarily buoyancy-driven, though the along-coast winds likely play a major role in maintaining the density front that supports the EGCC. Results from this thesis have implications for the transport and fate of Arctic and Greenland-sourced fresh water, and stratification in the high latitude North Atlantic and Nordic Seas.
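For context, the wind-driven cross-shelfbreak flow invoked above follows the standard Ekman balance: a wind stress tau directed along the shelfbreak drives a depth-integrated transport at right angles to it,

    M_{Ek} = \frac{\tau}{\rho_0 f},

where rho_0 is a reference density and f the Coriolis parameter; the sign of the along-shelfbreak stress sets whether that transport is directed onshelf.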
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On-Stack Replacement Across User-Kernel Boundaries</title>
<link href="https://hdl.handle.net/1721.1/157170" rel="alternate"/>
<author>
<name>Mohr, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/157170</id>
<updated>2024-10-10T03:27:41Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">On-Stack Replacement Across User-Kernel Boundaries
Mohr, Katherine
In large, distributed computations with small amounts of work done at each node, networking latencies quickly add up, especially in comparison to the time taken to execute small tasks. As such, lowering network latencies is crucial to getting good performance. Previous research has shown that often the largest contributors to network latencies are data copies between kernel and application buffers. Conventional wisdom argues that to solve this problem, one should move the networking stack out of the kernel and into the user space or networking hardware. Instead, we build upon an alternative approach, known as LakePlacid. LakePlacid mitigates the kernel-user boundary overhead issue by moving the most important application logic out of the user space and into the kernel. This thesis proposes and implements a key improvement to LakePlacid. Because only part of the application logic is migrated to the kernel, some packets necessarily must be resolved in the standard user space application. The system discussed in this thesis allows packets which cannot be handled in the kernel to seamlessly continue in user space via on-stack replacement, thus preventing side effects from being executed erroneously. This system for on-stack replacement is very general, allowing execution to switch between code versions at any conditional, and it is novel in its ability to switch stacks across the user-kernel boundary. With this change, LakePlacid is able to better maintain the semantics of user applications, making it more feasible in practice.
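As a loose, language-level analogy for the on-stack replacement mechanism described above (the real system transfers native stack frames across the user-kernel boundary, and none of the names below are LakePlacid's actual interfaces), a fast-path handler can bail out at a guard and hand its live state to a slow path that resumes the computation without re-executing side effects:

    # Toy on-stack replacement: the fast path bails out at a guard, handing
    # its live state to a slow path that resumes mid-computation.
    class Bailout(Exception):
        def __init__(self, state):
            self.state = state

    def fast_path(packet):                      # stands in for the in-kernel code
        total = 0
        for i, byte in enumerate(packet):
            if byte == 0xFF:                    # case the kernel cannot handle
                raise Bailout({"i": i, "total": total})
            total += byte
        return total

    def slow_path(packet, state):               # stands in for user-space code
        total = state["total"]                  # resume mid-loop, no re-execution
        for byte in packet[state["i"]:]:
            total += byte % 0xFF                # full application semantics
        return total

    def handle(packet):
        try:
            return fast_path(packet)
        except Bailout as b:
            return slow_path(packet, b.state)   # "switch stacks" at the guard

    print(handle(bytes([1, 2, 0xFF, 4])))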
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Adaptive Layer Freezing through Hyperparameter Optimization for Enhanced Fine-Tuning Performance of Language Models</title>
<link href="https://hdl.handle.net/1721.1/157169" rel="alternate"/>
<author>
<name>Figueroa, Reinaldo</name>
</author>
<id>https://hdl.handle.net/1721.1/157169</id>
<updated>2024-10-10T03:48:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Evaluating Adaptive Layer Freezing through Hyperparameter Optimization for Enhanced Fine-Tuning Performance of Language Models
Figueroa, Reinaldo
Language models are initially trained on large datasets, enabling them to extract patterns and establish rich contextual connections. When dealing with data scarcity, transfer learning has become the go-to method for applying these models to specialized downstream tasks via fine-tuning. However, fine-tuning on small datasets can lead to overfitting and a lack of generalization. Generalization is crucial when deploying models that perform sensitive tasks in a real-world environment, as it dictates how well they perform on unseen data. Conversely, overfitting is highly likely to occur when training on small datasets. This thesis proposes and evaluates a new method for fine-tuning language models by adaptively choosing specific learning rates for each transformer layer that provide higher performance on in-domain low-volume datasets. Additionally, we explore which layers inside the models usually hold more contextual information from pre-training that might be valuable to keep ‘frozen’ when fine-tuning on small datasets. This analysis provides insights into fine-tuning approaches during initial experiments when data is limited. Our results demonstrate limited performance gains on certain models while achieving more significant gains on others when fine-tuning using our proposed method. Additionally, our work provides valuable insight into the per-layer importance of language models by showing that certain layers have a stronger direct correlation with overall model accuracy.
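A minimal sketch of the per-layer treatment described above, using PyTorch parameter groups: early transformer layers are frozen while later layers receive progressively larger learning rates. The base model, layer split, and rate schedule are illustrative assumptions, not the thesis's tuned configuration:

    # Per-layer learning rates with early layers frozen (PyTorch param groups).
    import torch
    from transformers import AutoModelForSequenceClassification

    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    groups, n_frozen, base_lr = [], 4, 2e-5
    for idx, layer in enumerate(model.bert.encoder.layer):   # 12 layers in bert-base
        if idx >= n_frozen:
            # deeper layers adapt faster; shallower ones move gently
            groups.append({"params": layer.parameters(),
                           "lr": base_lr * (idx + 1) / 12})
        else:
            for p in layer.parameters():
                p.requires_grad = False          # keep pre-trained context 'frozen'

    groups.append({"params": model.classifier.parameters(), "lr": base_lr})
    optimizer = torch.optim.AdamW(groups)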
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Efficacy of Different Analysis Algorithms for Summarizing Online Deliberations</title>
<link href="https://hdl.handle.net/1721.1/157168" rel="alternate"/>
<author>
<name>Venkat, Naveen</name>
</author>
<id>https://hdl.handle.net/1721.1/157168</id>
<updated>2024-10-10T03:39:43Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Efficacy of Different Analysis Algorithms for Summarizing Online Deliberations
Venkat, Naveen
For the past decade, online deliberation platforms like Polis have expanded the reach of deliberative democracy, which calls for political decisions to be based on the results of fair and balanced discussions among citizens, by enabling larger deliberations. However, because these discussions often generate a volume of comments that is infeasible for policymakers to review thoroughly, these platforms often include analysis algorithms that distill the conversation into a small set of comments, which policymakers can use as the basis of citizen input into decision-making. While Polis currently provides a clustering-analysis summary of the discussion, two newer aggregation algorithms, inspired by computational social choice theory and abstract argumentation theory, have recently been proposed. These algorithms seek to provide more representative (i.e. portraying all perspectives) and consistent (i.e. comments within a perspective do not oppose each other) summaries of the discussion, respectively. Still, though these newer algorithms may have theoretical advantages over Polis’s current methods, they have yet to be evaluated in a real-world application. Through a randomized controlled trial of all three approaches using a nationally representative sample, we compare their practical effectiveness, as measured by participants’ subjective experiences regarding how well these summaries represent their concerns. We find that the computational social choice-inspired algorithm consistently outperforms Polis’s current methods in this regard, though future theoretical work is still needed to fully adapt this approach to a real-world setting.
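As a rough sketch of the social-choice flavor of summary selection, the toy below greedily picks comments to maximize coverage of participants' approvals, a simple proxy for representativeness; the algorithms evaluated in the thesis are more sophisticated, and this example is only meant to convey the aggregation idea:

    # Greedy approval-coverage selection of k representative comments.
    # approvals[p] is the set of comment ids participant p agrees with.
    approvals = {
        "p1": {0, 2}, "p2": {0, 3}, "p3": {1}, "p4": {1, 3}, "p5": {2},
    }

    def summarize(approvals, k=2):
        chosen, covered = [], set()
        for _ in range(k):
            # pick the comment covering the most not-yet-represented people
            best = max(
                {c for s in approvals.values() for c in s},
                key=lambda c: sum(1 for p, s in approvals.items()
                                  if p not in covered and c in s))
            chosen.append(best)
            covered.update(p for p, s in approvals.items() if best in s)
        return chosen

    print(summarize(approvals))   # greedy pick; [0, 1] covers 4 of 5 participants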
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adversarial Prompt Transformation for Systematic Jailbreaks of LLMs</title>
<link href="https://hdl.handle.net/1721.1/157167" rel="alternate"/>
<author>
<name>Awoufack, Kevin E.</name>
</author>
<id>https://hdl.handle.net/1721.1/157167</id>
<updated>2024-10-10T03:16:48Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Adversarial Prompt Transformation for Systematic&#13;
Jailbreaks of LLMs
Awoufack, Kevin E.
The rapid integration of Large Language Models (LLMs) like OpenAI’s GPT series into diverse sectors has significantly enhanced digital interactions but also introduced new security challenges, notably the risk of "jailbreaking," where inputs cause models to deviate from their operational guidelines. This vulnerability poses risks such as misinformation spread and privacy breaches, highlighting the need for robust security measures. Traditional red-teaming methods, involving manually crafted prompts to test model vulnerabilities, are labor-intensive and lack scalability. This thesis proposes a novel automated approach using Reinforcement Learning from Human Feedback (RLHF) to transform unsuccessful adversarial prompts into successful jailbreaks. The method learns a policy, informed by existing jailbreak prompts, that teaches the generator LLM what makes an adversarial prompt successful. This was implemented using Proximal Policy Optimization (PPO) and tested with both a classifier and a judge reward model, attaining at best a 16% attack success rate on a target model. The approach can be applied to any prompt at the word level and further analyzed with respect to toxicity characteristics. This work contributes to advancing LLM security measures, ensuring their safer deployment across various applications.
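A heavily simplified stand-in for the training loop: the sketch below uses plain REINFORCE (rather than the thesis's PPO) to learn a distribution over discrete prompt-transformation actions against a stubbed reward model; the action set and reward values are invented for illustration only:

    # REINFORCE over discrete prompt transformations with a stub reward model.
    # A toy stand-in for the PPO + reward-model setup, not the actual method.
    import numpy as np

    ACTIONS = ["roleplay_framing", "translate", "obfuscate", "split_request"]
    logits = np.zeros(len(ACTIONS))
    rng = np.random.default_rng(0)

    def reward_model(action):
        # Stub: pretend the judge scores roleplay framing highest.
        return {"roleplay_framing": 1.0, "translate": 0.3,
                "obfuscate": 0.2, "split_request": 0.5}[action]

    lr, baseline = 0.1, 0.0
    for step in range(500):
        probs = np.exp(logits - logits.max())
        probs = probs / probs.sum()
        a = rng.choice(len(ACTIONS), p=probs)
        r = reward_model(ACTIONS[a])
        baseline = 0.9 * baseline + 0.1 * r     # running baseline for variance
        grad = -probs
        grad[a] += 1.0                          # d log pi / d logits
        logits += lr * (r - baseline) * grad

    print(ACTIONS[int(np.argmax(logits))])      # learned best transformation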
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Computational Tool for Simplifying Engineering Tradeoff Analysis for the Design of Cost-Optimized, Time-Variant, Electrodialysis Reversal Desalination Systems</title>
<link href="https://hdl.handle.net/1721.1/157166" rel="alternate"/>
<author>
<name>Costello, Jeffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/157166</id>
<updated>2024-10-10T03:34:23Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Development of a Computational Tool for Simplifying Engineering Tradeoff Analysis for the Design of Cost-Optimized, Time-Variant, Electrodialysis Reversal Desalination Systems
Costello, Jeffrey
This study presents an analytical tool for characterizing a wide swath of the design space for time-variant electrodialysis reversal brackish water desalination (TEDR) while avoiding the computation time often required by mechanistic models of electrodialysis reversal (EDR) and time-variant processes. In place of explicit computation, this paper proposes simplifying assumptions to simulate the desalination power and production rate of a TEDR process, enabling rapid year-long simulation and system optimization. The output of the model is compared to experimental data from a pilot TEDR system and found to show good agreement in desalination power and production rate. Disagreement between the modeled and experimental pressure losses suggests additional losses in the experiment, which may be accounted for in future work. Two case studies, one for potable water in the American Southwest and another for irrigation water in the Middle East and North Africa (MENA) region, compare the results from 54 optimized systems. The results illustrate the complexity of system design and selection, elucidating tradeoffs between different models of electrodialysis (EDR) stacks, operating modes, and system configurations. The output of this model will enable system designers to confidently design and implement cost-effective TEDR systems to combat rising global freshwater scarcity.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Process Substitution on Manufacturing Costs: A Comparative Analysis of Sheet Metal Forming versus Extruded Steel Cutting</title>
<link href="https://hdl.handle.net/1721.1/157164" rel="alternate"/>
<author>
<name>Talal, Omar</name>
</author>
<id>https://hdl.handle.net/1721.1/157164</id>
<updated>2024-10-10T03:59:00Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Impact of Process Substitution on Manufacturing Costs: A&#13;
Comparative Analysis of Sheet Metal Forming versus Extruded Steel Cutting
Talal, Omar
Sheet metal manufacturers continuously seek methods to enhance automation and reduce costs. This thesis explores process substitution and design standardization through a parameter-driven cost model and case studies applying Design for Manufacturability &amp; Assembly (DFMA) principles. Specifically, it evaluates substituting conventional sheet metal components with extruded steel profiles and replacing manual press brake operations with automated tube laser cutting. The findings show that tube laser adoption across a broad range of channels can reduce costs by 49% to 79%, with a payback period of under two years, even in scenarios with fluctuating raw material prices. The study proposes strategies for maximizing tube laser utilization through product mix analysis, redesign for compatibility, and designing with tube laser as the primary method. A developed automation tool using clustering aids profile identification, though the study highlights the need for improved data management around C-channel dimensions to enhance process standardization. The investigation confirms that extruded steel can be a cost-effective alternative to large-scale channel products, providing solutions for industry transition through direct replacement, compatibility-focused redesign, or design guidelines optimized for extruded steel.
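The payback claim reduces to simple arithmetic; with hypothetical inputs (the abstract does not publish these figures), an investment recovered by annual savings gives:

    # Hypothetical payback-period arithmetic; all input figures are invented.
    capex = 800_000                 # tube laser purchase + integration ($)
    annual_cost_before = 1_000_000  # current channel production cost ($/yr)
    savings_rate = 0.49             # low end of the reported 49-79% range
    annual_savings = annual_cost_before * savings_rate
    payback_years = capex / annual_savings
    print(f"payback = {payback_years:.1f} years")   # 1.6 years at these inputs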
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Koopman Operator Theory to Legged Locomotion</title>
<link href="https://hdl.handle.net/1721.1/157163" rel="alternate"/>
<author>
<name>Terrones, Jasmine G.</name>
</author>
<id>https://hdl.handle.net/1721.1/157163</id>
<updated>2024-10-10T03:45:25Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Application of Koopman Operator Theory to Legged Locomotion
Terrones, Jasmine G.
Nonlinearities from complicated robot systems and harsh contact dynamics have long impeded the effectiveness of optimal control strategies for legged robots. In this work, we present a linearized simple walking model using Koopman Operator Theory, and its usage in Linear Model Predictive Control (L-MPC). Various walking and contact models were evaluated, but ultimately the rimless wheel was selected due to its inherent stability and low dimensionality, and a nonlinear viscoelastic model was used to accurately capture floor contact and impact dynamics. Koopman models were developed using both Radial Basis Functions (RBFs) and neural network-generated observables for the passive rimless wheel. A novel actuation method with linear actuators, combined with the Control Coherent Koopman methodology, resulted in accurate linear models that effectively enabled L-MPC to control the wheel on flat ground. This model outperformed those created using the more traditional Dynamic Mode Decomposition with Control method. This work demonstrates the power of Koopman linearization to produce a unified set of linear dynamical equations that encompass various contact and non-contact configurations and demonstrates the effectiveness of the Control Coherent Koopman methodology in generating an accurate input matrix across these different contact modes.
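For readers unfamiliar with the pipeline, the sketch below shows a standard EDMD-style construction of a Koopman model: states are lifted through RBF observables and the linear operator is obtained by least squares. The toy dynamics, RBF centers, and dimensions are placeholders, not the rimless-wheel model itself:

    # Koopman approximation via EDMD: lift states with RBFs, fit K by least
    # squares so that psi(x_next) is approximately K psi(x).
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.uniform(-2, 2, size=400)            # sampled states
    Y = np.sin(X) * 0.9                         # placeholder nonlinear dynamics

    centers = np.linspace(-2, 2, 10)

    def lift(x):
        # RBF observables plus the state itself and a constant
        rbf = np.exp(-((x[:, None] - centers[None, :]) ** 2))
        return np.hstack([np.ones((len(x), 1)), x[:, None], rbf])

    Psi_X, Psi_Y = lift(X), lift(Y)
    K = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)[0]   # linear Koopman operator

    # One-step prediction in lifted space, then read off the state coordinate.
    x0 = np.array([0.7])
    x1_pred = (lift(x0) @ K)[0, 1]
    print(x1_pred, np.sin(0.7) * 0.9)           # predicted vs. true next state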
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design, simulation, and testing of a low cost laser micromachining system for flexible and rapid tissue-on-chip fabrication.</title>
<link href="https://hdl.handle.net/1721.1/157161" rel="alternate"/>
<author>
<name>Nin, Jorge A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157161</id>
<updated>2024-10-10T03:09:24Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Design, simulation, and testing of a low cost laser&#13;
micromachining system for flexible and rapid tissue-on-chip&#13;
fabrication.
Nin, Jorge A.
This study introduces a novel approach to tissue-on-chip device fabrication using low-cost picosecond laser ablation, addressing critical limitations in current manufacturing methods such as soft lithography, particularly in terms of material compatibility, feature resolution, and scalability. We developed a comprehensive finite element method (FEM) model for the laser ablation process, incorporating key physical phenomena including laser-material interactions, heat transfer, and material removal dynamics. This model, validated against experimental results, accurately predicts ablation depths within 20% of measured values across a range of laser parameters. Our experimental setup, utilizing a cost-effective 10 kHz picosecond laser system, demonstrates superior capabilities in creating high-aspect-ratio microchannels exceeding 20:1, surpassing traditional manufacturing techniques. We achieve precise control over channel dimensions, with widths ranging from 20 to 500 micrometers and depths up to 1 mm, while maintaining sub-micron surface roughness (Ra &lt; 0.8 &#120583;m). The system’s versatility is showcased through the fabrication of complex structures such as Tesla valves and high-resolution text features, with a minimum feature size of 20 &#120583;m. We present practical techniques for component selection and process parameter optimization using our simulation, reducing expensive and time-consuming experimentation. This work establishes low-cost picosecond laser ablation as a viable and advantageous method for tissue-on-chip manufacturing. With fabrication times of 6-8 minutes for small features and less than an hour for a full chip, our method represents a significant advancement in rapid prototyping capabilities. These findings demonstrate that laser ablation is a powerful technique for manufacturing tissue-on-chip devices, offering high resolution, flexibility, and scalability. This approach has the potential to overcome the limitations of traditional methods, enabling the next generation of sophisticated, physiologically relevant in vitro models for biomedical research and drug development. The successful development and validation of the FEM model, coupled with practical demonstrations, provide a solid foundation for further advancements in laser-based fabrication of tissue-on-chip devices, potentially accelerating drug discovery processes and enabling more accessible production of personalized medicine platforms.
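The core of such an ablation model is the heat equation with a laser source term; the finite-difference sketch below shows a 1-D explicit version with a Gaussian surface deposition. Material constants and pulse parameters are generic placeholders, and a real model adds temperature-dependent properties, phase change, and material removal:

    # 1-D explicit heat conduction with a Gaussian laser source (NumPy).
    # dT/dt = alpha d2T/dx2 + S/(rho c); all constants are illustrative.
    import numpy as np

    nx, dx = 200, 1e-6                 # 200 nodes over 200 microns
    alpha = 1.4e-7                     # thermal diffusivity (m^2/s), generic
    rho_c = 1.6e6                      # volumetric heat capacity (J/m^3/K)
    dt = 0.25 * dx**2 / alpha          # safely inside the explicit stability limit

    T = np.full(nx, 300.0)             # initial temperature (K)
    x = np.arange(nx) * dx
    source = 1e15 * np.exp(-(x / 5e-6) ** 2)   # absorbed laser power density

    for step in range(2000):
        lap = np.zeros(nx)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        S = source if step == 0 else 0.0   # ps pulse: deposit in first step only
        T = T + dt * (alpha * lap + S / rho_c)
        T[0], T[-1] = T[1], T[-2]          # insulated boundaries

    print(f"peak temperature after pulse: {T.max():.0f} K")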
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Design Study Using Simulation Techniques in Roll Form Production</title>
<link href="https://hdl.handle.net/1721.1/157160" rel="alternate"/>
<author>
<name>Lee, Joo Won</name>
</author>
<id>https://hdl.handle.net/1721.1/157160</id>
<updated>2024-10-10T03:43:04Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">A Design Study Using Simulation Techniques in Roll Form&#13;
Production
Lee, Joo Won
Sheet metal roll forming is a continuous bending process where metal strips are fed through a sequence of rolls to achieve a specific cross-sectional profile. This method is vital in the automotive industry for producing high-strength, lightweight components with precision, consistency, and cost-efficiency. This project focuses on optimizing Novelis’s aluminum roll forming process using Computer-Aided Engineering (CAE) techniques, including UBECO Profil, AutoCAD, and Finite Element Analysis (FEA) tools such as Ansys and LS-Dyna. Initial simulations on a square tube profile were key in identifying critical stations, leading to performance improvements through targeted adjustments. Stress and strain analyses revealed how operational factors, such as roll adjustments, affect the section shapes and angles, facilitating the refinement of roll forming station settings. With a Design of Experiment (DOE) framework, the study identified key variables to enhance simulation output accuracy and optimize roll forming settings. The team successfully built a digital twin of the new roll forming line, which accurately predicted the final product's geometry and provided precise recommendations for machine settings to achieve the desired shape. Novelis can apply these insights to enhance their software, thereby potentially increasing production efficiency. This approach not only supports current operations but also lays the foundation for future research and development advancements.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Affordable Fiber Extrusion Device for Educational Purposes: Design Improvements, Controls Development, and Manufacturing Scale-up</title>
<link href="https://hdl.handle.net/1721.1/157159" rel="alternate"/>
<author>
<name>Zhang, Yiqian</name>
</author>
<id>https://hdl.handle.net/1721.1/157159</id>
<updated>2024-10-10T04:12:29Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Affordable Fiber Extrusion Device for Educational&#13;
Purposes: Design Improvements, Controls Development,&#13;
and Manufacturing Scale-up
Zhang, Yiqian
The Fiber Extrusion Device (FrED) is an affordable desktop tool intended for engineering education. It mimics the fiber draw process, allowing students to study topics such as data acquisition, control systems, computer vision, data analytics, and smart manufacturing. As an educational tool, the goal of the device is to replicate the practical laboratory experience in remote learning scenarios. FrED has gone through multiple iterations, yet several outstanding issues remain. Building on the 2023 team’s progress, the 2024 project objectives include refining the design, developing controls, scaling up manufacturing, designing the assembly line, managing inventory, creating educational content, and conducting user testing and pilot runs. This thesis specifically details the author’s contributions to enhancing mechanical designs, advancing control systems, increasing production capacity, and planning educational materials. Mechanical components in the frame, the cooling system, and the diameter measurement system were redesigned to improve stiffness and stability. Local PID controllers were implemented for the DC motor and heater, effectively closing the feedback loop for fiber diameter control. The production target of manufacturing 35 FrED units was successfully achieved within the planned timeframe, with the packaging design optimized for efficient shipping. Additionally, an assembly manual, a graphical user interface, and control activities were developed as part of the educational content. Three user testing sessions were conducted to gather feedback.
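A minimal sketch of the local PID loop described above, closing the loop on a toy first-order plant (standing in for the heater or the motor-driven spool); gains and plant constants are illustrative, not the FrED tuning:

    # Local PID controller on a toy first-order plant (e.g., heater temperature).
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_err = 0.0, 0.0

        def update(self, setpoint, measured):
            err = setpoint - measured
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    dt, temp, ambient = 0.1, 25.0, 25.0
    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
    for step in range(600):                            # simulate one minute
        power = max(0.0, pid.update(200.0, temp))      # heater can only heat
        temp += dt * (0.05 * power - 0.01 * (temp - ambient))
    print(f"temperature after 60 s: {temp:.1f} C")     # approaches the set point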
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technology Performance Curves to Inform Government and Private Investment</title>
<link href="https://hdl.handle.net/1721.1/157158" rel="alternate"/>
<author>
<name>Roberts, Matthew R.</name>
</author>
<id>https://hdl.handle.net/1721.1/157158</id>
<updated>2024-10-10T03:23:34Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Technology Performance Curves to Inform Government and Private Investment
Roberts, Matthew R.
Forecasts of technological progress are used to inform decisions in the public and private sectors that shape the modern technology landscape on a global scale. Technology performance curves are the quantitative, model-based representations of technological change employed in industrial, economic, and integrated assessment models to inform decision-making processes. Technology performance curves have evolved from their origins in the 1920s modeling of airframe manufacturing labor cost to consider mechanisms of technological progress, including learning-by-doing, learning-by-searching, economies of scale, and exogenous improvement. Examining changes to the performance and prevalence of technologies can provide insight that is relevant for product strategy and market forecasts. This knowledge can also help estimate the potential impact of government market policy and funding for research and development. This thesis seeks to consolidate the available literature on the various models of technology performance curves into a conceptual framework that can be used to understand the features and limitations of models, and their potential use cases.
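The classic learning-by-doing form referenced above (Wright's law, one common parameterization among the mechanisms listed) relates unit cost C to cumulative production x, so that each doubling of cumulative output lowers cost by a constant fraction:

    C(x) = C_0 \, x^{-b}, \qquad \text{learning rate} = 1 - 2^{-b}.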
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Government Policies in Middle Eastern Countries on Digital Platform Startups</title>
<link href="https://hdl.handle.net/1721.1/157157" rel="alternate"/>
<author>
<name>Ali Osman, Mohamed Mamdouh</name>
</author>
<id>https://hdl.handle.net/1721.1/157157</id>
<updated>2024-10-10T03:39:37Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Impact of Government Policies in Middle Eastern&#13;
Countries on Digital Platform Startups
Ali Osman, Mohamed Mamdouh
In the last decade, the financial sector has changed significantly. The introduction of new technologies and mobile applications transformed the entire industry, leading to the rise of financial technology (fintech) startups. Fintech startups offer a wide range of products and services, such as digital payments, Buy Now, Pay Later (BNPL), crowdfunding, peer-to-peer lending, etc. Middle East and North African (MENA) countries have seen significant growth in the number of fintech startups and the total investment value in these companies. For example, in Egypt, Fawry is the biggest payment service provider; it covers nearly 25% of Egyptian customers and has more than 3 million daily operations. Also, some fintech companies in MENA became unicorns, such as Tabby of Saudi Arabia and MNT-Halan of Egypt. The increased penetration of fintech in MENA countries has consistently raised concerns about data security, consumer protection, and financial stability. This poses a central question for financial sector authorities and regulators: how can they increase the number of these companies to support financial inclusion and the growth of financial sectors while, at the same time, alleviating the dangers and concerns that fintech companies present? This thesis provides a comprehensive analysis of the growth of fintech startups in the MENA region, focusing on four countries: Egypt, Saudi Arabia, the UAE, and Jordan. The study then investigates the fintech regulations in these countries, aiming to understand how recent regulations have impacted the growth of fintech startups through qualitative insights and case studies from the four countries. The study reveals the following. First, Jordan's fintech regulations are still in their early stages: despite having some fintech regulations, significant regulations such as data protection and cybersecurity laws are still missing, which might cause investors and entrepreneurs not to launch or expand their fintech businesses in Jordan. Second, in Egypt, the fintech regulations align with investors' and entrepreneurs' expectations; however, economic conditions, namely the budget deficit and currency fluctuations, might hinder the growth of the fintech sector. Third, for Saudi Arabia and the UAE, the fintech ecosystem and regulations encouraged entrepreneurs to start and grow their businesses and customers to increase their adoption of fintech products and services. The development of regulations, laws, and guidelines in both countries contributed to the growth of the fintech sector while, at the same time, safeguarding customers.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Sandbar Effects on Nearshore Waves and Morphological Change using SWAN</title>
<link href="https://hdl.handle.net/1721.1/157153" rel="alternate"/>
<author>
<name>Murman, Charles E.</name>
</author>
<id>https://hdl.handle.net/1721.1/157153</id>
<updated>2024-10-10T03:28:00Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Modeling Sandbar Effects on Nearshore Waves and Morphological Change using SWAN
Murman, Charles E.
Numerical model simulations (Delft3D SWAN) are used to examine the impact of small alongshore variations in the bathymetry of an outer sandbar (in about 5-m water depth) on the nearshore wave field as the shallow (&lt; 3 m) bathymetry changes from nearly alongshore uniform to strongly spatially variable, in order to understand wave-driven morphologic evolution. Waves were observed at Duck, NC with an array of 14 pressure gages between 1- and 3-m water depth spread over 250 meters alongshore. Bathymetry was measured between the dune toe and about 8-m water depth on September 26 and October 2, 2013. The bathymetry evolved from roughly alongshore uniform on September 26 to strongly alongshore variable on October 2. Between these dates incident significant wave heights ranged from 0.5 meters to 2.3 meters, with incident angles from 20 degrees north to 5 degrees south of shore normal. Simulations were run with observed bathymetry for both the outer bar and inner shallow bathymetry, with smoothed outer bar and observed shallow bathymetry, and with digital elevation model bathymetry to determine the effects of outer bar and shallow bathymetry on wave evolution.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Illusion of Wetness: Cold Dry Stimuli in Sensory Perception</title>
<link href="https://hdl.handle.net/1721.1/157152" rel="alternate"/>
<author>
<name>Ozor-Ilo, Ozioma</name>
</author>
<id>https://hdl.handle.net/1721.1/157152</id>
<updated>2024-10-10T04:00:03Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Investigating the Illusion of Wetness: Cold Dry Stimuli in Sensory Perception
Ozor-Ilo, Ozioma
Humans lack specialized receptors for perceiving wetness, and so it is a compound sensation based on changes in skin temperature and contact pressure that are sensed by thermoreceptors and mechanoreceptors in the skin. In addition to perceiving the wetness of damp fabrics in contact with the skin or the presence of sweat on the skin, humans can perceive wetness in the absence of any moisture, a phenomenon known as illusory wetness. The illusion has been shown to arise when the skin is in contact with a surface and is cooled. This thesis is focused on understanding the variables that contribute to illusory wetness by first determining the difference threshold for perceiving the rate of skin cooling and relating this to perceived wetness. The results from the first two experiments showed that the difference threshold averaged 0.9 °C/s - 1.06 °C/s at a reference value of 0.5 °C/s. For perceiving wetness, the threshold averaged 1.08 °C/s - 1.41 °C/s. The latter finding indicates that the rate at which the skin cools must exceed some threshold value before it is perceived as being wet. A third experiment explored the role of temperature and surface material in the perception of illusory wetness. The results showed that temperature was the more critical variable, with ratings of perceived wetness increasing as the temperature decreased further below the baseline skin temperature. These experiments have demonstrated the effect that rates of cooling have on perceiving illusory wetness and have contributed to a better understanding of the role of surface material and temperature on perceiving wetness during static contact. These findings are relevant to simulating wetness in prosthetic devices and virtual reality environments.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Process Replacement on Sheet Metal Product Design: The Use of Steel Extrusions Versus Formed Sheet Metal</title>
<link href="https://hdl.handle.net/1721.1/157151" rel="alternate"/>
<author>
<name>Yuan, Chenyu</name>
</author>
<id>https://hdl.handle.net/1721.1/157151</id>
<updated>2024-10-10T03:36:31Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Impact of Process Replacement on Sheet Metal&#13;
Product Design: The Use of Steel Extrusions Versus&#13;
Formed Sheet Metal
Yuan, Chenyu
The sheet metal manufacturing industry, with its rich history and legacy, continues to seek innovative methods to enhance automation and reduce costs in an increasingly competitive market. Design for Manufacturability &amp; Assembly (DFMA) has emerged as a strategy to simplify product designs, thereby improving manufacturing efficiency and reducing production costs. This research suggests the use of extruded steel profiles as an alternative to traditional sheet metal components that pose challenges for automation, particularly heavy gauge narrow channels. Additionally, it advocates for replacing manual press brake operations with advanced automated tube laser technology. The proposed shift not only simplifies the manufacturing process but also aligns with the broader goal of global cost reduction and process standardization, which are essential for enhancing New Product Introduction (NPI) efficiencies. The findings demonstrate that maximizing the application of tube laser technology across a diverse range of channels and products can lead to significant cost savings, ranging from 49% to 79%, with a payback period of less than two years. Even under fluctuating raw material prices, the tube laser method remains economically advantageous. Moreover, redesigning products to enhance compatibility with tube laser technology has been shown to increase the automation compatibility of an example product to 100%, underscoring the importance of incorporating DFMA principles from the early stages of product design.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Roll Form Bending Processes through Experimentation and Informed Predictive Analysis: A Strategic Approach to Optimize Tooling</title>
<link href="https://hdl.handle.net/1721.1/157150" rel="alternate"/>
<author>
<name>Kompella, Sarvagnya</name>
</author>
<id>https://hdl.handle.net/1721.1/157150</id>
<updated>2024-10-10T03:45:54Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Enhancing Roll Form Bending Processes through Experimentation and Informed Predictive Analysis: A Strategic Approach to Optimize Tooling
Kompella, Sarvagnya
Sheet metal roll forming is a continuous bending process where metal strips pass through a series of rolls to achieve a specific cross-sectional profile. This technique is crucial in the automotive industry for producing high-strength, lightweight components with precision, consistency, and cost-effectiveness. This project aims to optimize Novelis’s aluminum roll forming process by employing Computer-Aided Engineering (CAE) tools, including UBECO Profil, AutoCAD, and Finite Element Analysis (FEA) software such as LS-DYNA. Initial simulations of a square tube profile identified key stations and led to performance enhancements through targeted adjustments. Stress and strain analyses demonstrated how operational factors, such as roll settings, influence section shapes and angles, facilitating the fine-tuning of roll forming station parameters. Using a Design of Experiments (DOE) framework, the study pinpointed critical factors to improve simulation accuracy and optimize roll forming settings. The results indicated that optimized stand height settings significantly improved the accuracy of the desired angles. These insights can be integrated within Novelis’ production line to boost production efficiency and roll performance. This research not only supports current operations, but also provides a foundation for future advancements in roll forming technology.
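As a small illustration of the DOE framework mentioned above, a full-factorial design over a few process factors can be enumerated as below; the factor names and levels are invented placeholders, not the Novelis parameters:

    # Full-factorial DOE enumeration over illustrative roll-forming factors.
    import itertools

    levels = {
        "stand_height_mm": [-0.5, 0.0, 0.5],
        "roll_gap_mm": [1.8, 2.0],
        "line_speed_mpm": [20, 30],
    }
    runs = [dict(zip(levels, combo))
            for combo in itertools.product(*levels.values())]
    print(len(runs), "simulation runs")   # 3 x 2 x 2 = 12 runs
    for run in runs[:3]:
        print(run)                        # each dict is one FEA/Profil setup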
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Refining Hardware of Desktop Fiber Extrusion Devices for Affordable Manufacturing and Novel Fiber Prototyping</title>
<link href="https://hdl.handle.net/1721.1/157149" rel="alternate"/>
<author>
<name>Glasser, Kaili</name>
</author>
<id>https://hdl.handle.net/1721.1/157149</id>
<updated>2024-10-10T03:01:45Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Refining Hardware of Desktop Fiber Extrusion Devices&#13;
for Affordable Manufacturing and Novel Fiber Prototyping
Glasser, Kaili
The Fiber Extrusion Device (FrED) is a hands-on desktop tool designed to facilitate the teaching of manufacturing engineering concepts through remote laboratory experiences. FrED simulates the continuous fiber draw process used in various industries, including fiber optics, synthetic textiles, medical devices, aerospace, and construction. This device translates industrial-scale fiber draw towers into a compact version, allowing users to experiment with different parameters to understand their effects on manufacturing processes. Over the past three years, successive groups of MEng students have refined FrED’s design with the goal of creating a robust, functional, and affordable device for in-house manufacturing at the MIT FrED Factory. While the 2023 model achieved significant cost reduction, it required further hardware and electronics refinement for stable and repeatable performance. This thesis encompasses two main objectives: enhancing the hardware design and assembly processes for the final 2024 educational FrED model, and developing an alternative design for an advanced FrED version suitable for academic lab settings to rapidly prototype synthetic fibers. The first objective was met by improving the two most dynamic sub-assemblies—the gearbox and extrusion system—to ensure smooth and consistent operation. Additionally, the tolerances of mating parts and the locations and geometries of hardware inserts within manufactured parts were verified and adjusted according to manufacturing standards. Multiple jigs were also designed and fabricated to facilitate the assembly process of the gearbox and extrusion sub-assemblies, and two new parts were created to enhance user operation of FrED. For the second objective, an enhanced version of FrED capable of handling a wider range of preform materials was developed by upgrading the extrusion sub-assembly to operate at temperatures over three times higher than the educational version. This feature had been previously attempted with older, more expensive versions of FrED but had not been pursued with the recent, more affordable iteration. The new high-temperature FrED successfully drew fibers from PLA, a biodegradable thermoplastic, using 3D printed preforms with distinctive geometries, demonstrating its potential for providing an affordable solution for rapid synthetic fiber prototyping in academic labs.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Internet Celebrity City: Social Media and Urban Space in China</title>
<link href="https://hdl.handle.net/1721.1/157148" rel="alternate"/>
<author>
<name>Chen, Yufei</name>
</author>
<id>https://hdl.handle.net/1721.1/157148</id>
<updated>2024-10-10T03:13:39Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Exploring the Internet Celebrity City: Social Media and Urban Space in China
Chen, Yufei
“Internet celebrity space” offers a fresh perspective for studying urban spaces in the mobile Internet era as a new visual consumption space. The term "Internet celebrity," or wanghong in Chinese, is used in modern Chinese media to refer to celebrities and the specific cultural and consumption trends linked to them. This concept has surfaced alongside the growth of e-commerce platforms, with the recognition that wanghong often engage in promoting products, services, or lifestyles to their followers. Internet celebrity spaces, or wanghong spaces, can elevate the popularity of certain areas and influence local neighborhoods, communities, and economies. Internet celebrity urbanism involves broadening this trend from specific locations to larger scales, encompassing entire districts or extending this status to the scale of the whole city. This thesis explores the impact of internet celebrity spaces in China. It is divided into three parts. First, it demonstrates the phenomenon and its background, investigating the way internet celebrity spaces are represented in social media. Then, it reviews the latest research, analyzing research perspectives and methods in order to anchor the author’s research questions in appropriate approaches. Lastly, the influence of internet celebrity spaces is discussed through a case study in Shanghai, observing their influence on street activity. With the analysis and conclusions, suggestions for future development are given.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Target Design and Optimizations for Spent Fuel Transmutation</title>
<link href="https://hdl.handle.net/1721.1/157147" rel="alternate"/>
<author>
<name>Tukharyan, Grigor</name>
</author>
<id>https://hdl.handle.net/1721.1/157147</id>
<updated>2024-10-10T03:04:56Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Target Design and Optimizations for Spent Fuel Transmutation
Tukharyan, Grigor
There are six long-lived fission products (LLFPs) identified in nuclear spent fuel, which account for at least 99% of the long-term radiotoxicity once actinide recycling is completed. This thesis examines the feasibility of using proton beams to transmute LLFPs into shorter-lived or stable isotopes. While long-term storage for high-level waste would still be necessary, transmuting the LLFPs can reduce the volume of waste material that needs to be stored. The objectives of this research are to explore the design of a proton transmutation facility, as well as to determine the optimal LLFP target-blanket material configuration for maximizing the transmutation efficiency. This thesis analyzes the use of intermediate energy beams of 18-70 MeV from commercial cyclotrons for transmutation. This thesis also analyzes the use of 1000 MeV proton beams to generate a substantial number of secondary neutrons through spallation interactions with target materials. The secondary neutrons produced from the spallation process are utilized by the LLFP materials, while surrounding blanket materials are selected to enhance the transmutation efficiency. PHITS, a Monte Carlo transport code, is employed to computationally model the interactions between LLFP materials and the proton beam. In this thesis, PHITS is used to estimate the flux-energy spectrum and the number of atoms irradiated in the LLFP target during beam interaction. This data is then post-processed using a 0-dimensional analysis in FISPACT to estimate the transmutation rate for each LLFP. PHITS is also used to find the depletion rate of the LLFPs for the 18-70 MeV beam case and for spallation-induced transmutations in the 1000 MeV case. Geant4, a Monte Carlo transport toolkit, is used to calculate the production rate of particles attributed to the spallation process. Analysis of the performance of commercial cyclotrons with energies of 18-70 MeV indicates that transmutation rates increase with higher proton beam energy. A cyclotron with a beam current of 10 mA and beam energy of 70 MeV running continuously can transmute 15.401 ± 0.069 g/year of Tc-99. However, Tc-99 is produced at a rate of approximately 8.54 kg/year in a 1 GW reactor, suggesting that a single commercial cyclotron beam is currently not viable for transmutation purposes. A proposed tank design with a lead/Tc-99 target that is surrounded by LLFP pins and heavy water is considered for the spallation study. Although using Tc-99 as a target directly transmutes 0.893 ± 0.002 kg/year from transmutation attributed to spallation, using lead as a target instead approximately doubles the transmutation rates in the LLFP regions for almost all of the LLFP isotopes. In both cases, the depletion rate of the LLFPs is greatly increased compared to using a commercial cyclotron of 70 MeV. A proton spallation source with a beam current of 10 mA and beam energy of 1000 MeV, using a Tc-99 target, achieves a transmutation rate of approximately 10.9 kg/year of Tc-99 in the LLFP pins through secondary neutrons produced by the spallation process. In contrast, using a lead target achieves a higher transmutation rate of around 20.0 kg/year of Tc-99 in the LLFP pins. This work was supported by the DOE ARPA-E Project under award number DE-AR0001578.
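For scale, the beam-current figures above translate directly into protons per second, which can be compared with the number of Tc-99 atoms produced per year in a 1 GW reactor; the sketch below does only this bookkeeping (elementary charge, Avogadro's number) and assumes nothing about cross sections or neutron yields:

    # Scale check: protons per second from beam current vs. Tc-99 atoms
    # produced per year in a 1 GW reactor (bookkeeping only).
    E_CHARGE = 1.602e-19          # C per proton
    AVOGADRO = 6.022e23
    YEAR_S = 3.156e7

    beam_current = 10e-3          # 10 mA
    protons_per_s = beam_current / E_CHARGE            # ~6.2e16 protons/s

    tc99_kg_per_year = 8.54       # production quoted for a 1 GW reactor
    tc99_atoms_per_s = tc99_kg_per_year * 1000 / 99 * AVOGADRO / YEAR_S

    print(f"protons/s:     {protons_per_s:.2e}")       # ~6.2e16
    print(f"Tc-99 atoms/s: {tc99_atoms_per_s:.2e}")    # ~1.6e18, far larger

The gap of roughly two orders of magnitude is consistent with the abstract's conclusion that a single cyclotron beam cannot keep pace with reactor production, while a 1000 MeV spallation source, yielding many secondary neutrons per proton, can.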
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Singular Value Decomposition Through Least Squares</title>
<link href="https://hdl.handle.net/1721.1/157145" rel="alternate"/>
<author>
<name>Zhao, Freddie</name>
</author>
<id>https://hdl.handle.net/1721.1/157145</id>
<updated>2024-10-10T03:04:26Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Distributed Singular Value Decomposition Through&#13;
Least Squares
Zhao, Freddie
Singular value decomposition (SVD) is an essential matrix factorization technique that decomposes a matrix into singular values and corresponding singular vectors that form orthonormal bases. SVD has wide-ranging applications from principal component analysis (PCA) to matrix completion and approximation. Methods for computing the SVD of a matrix are extensive and involve optimization algorithms with some theoretical guarantees, though many of these techniques are not scalable in nature. We show the efficacy of a distributed stochastic gradient descent algorithm by implementing parallelized alternating least squares, prove theoretical guarantees for its convergence, and present empirical results, which together allow for the development of a simple framework for solving SVD in a correct, scalable, and easily optimizable manner.
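A compact sketch of the alternating-least-squares idea behind this approach: factor M as U V-transpose by alternately solving least-squares problems, then orthonormalize the factors to recover singular values and vectors. This serial toy omits the distribution and convergence machinery; the dimensions and data are placeholders:

    # Rank-k SVD via alternating least squares, then orthonormalization (NumPy).
    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.normal(size=(60, 40))
    k = 5

    U = rng.normal(size=(60, k))
    for sweep in range(50):
        # Fix U, solve min ||M - U V^T||_F for V
        V = np.linalg.lstsq(U, M, rcond=None)[0].T
        # Fix V, solve for U
        U = np.linalg.lstsq(V, M.T, rcond=None)[0].T

    # Orthonormalize: SVD of the small k-by-k core recovers singular triplets.
    Qu, Ru = np.linalg.qr(U)
    Qv, Rv = np.linalg.qr(V)
    Us, s, Vst = np.linalg.svd(Ru @ Rv.T)
    print(s[:k])                                       # top singular values via ALS
    print(np.linalg.svd(M, compute_uv=False)[:k])      # reference values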
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Survey Techniques to Examine Morphological Evolution of Coastal Regions</title>
<link href="https://hdl.handle.net/1721.1/157143" rel="alternate"/>
<author>
<name>Ammons, Seth N.</name>
</author>
<id>https://hdl.handle.net/1721.1/157143</id>
<updated>2024-10-10T03:01:26Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Survey Techniques to Examine Morphological Evolution of Coastal Regions
Ammons, Seth N.
Beaches are dynamic, changing with tides, winds, and waves. Here, a beach was mapped daily for 3 weeks from the dune to the low-tide water line on the Outer Banks of North Carolina at the US Army Corps of Engineers Field Research Facility in Duck. The 22,500 m² area of interest was surveyed daily by a walker carrying a GPS-equipped backpack and occasionally with a lidar-equipped drone. Surveys of the northern region of interest were also collected with a stationary terrestrial lidar mounted on the dune. The observed morphological events include the destruction and formation of a cusp field, during which there was 1.4 m of erosion and accretion associated with bays and horns, and the formation over 7 days of a ~1-m high ridge and runnel system. The GPS-equipped backpack apparatus was used as ground truth for estimates made with the lidar systems. Along both cross- and alongshore transects, the lidar elevations were within approximately 0.05 m of those estimated by the backpack surveys, with RMS errors less than 0.11 m.
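The error statistics quoted above reduce to a bias and a root-mean-square difference between co-located elevation estimates; a minimal sketch with invented transect samples (not the study's data):
import numpy as np
z_gps = np.array([1.82, 1.75, 1.60, 1.41, 1.18, 0.95])    # backpack GPS elevations, m
z_lidar = np.array([1.85, 1.71, 1.64, 1.38, 1.22, 0.90])  # lidar elevations, m
diff = z_lidar - z_gps
print(f"bias = {diff.mean():.3f} m, RMSE = {np.sqrt(np.mean(diff**2)):.3f} m")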
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Engineering for Carbon Capture and Storage</title>
<link href="https://hdl.handle.net/1721.1/157140" rel="alternate"/>
<author>
<name>Zhang, Tiantian</name>
</author>
<id>https://hdl.handle.net/1721.1/157140</id>
<updated>2024-10-10T03:32:46Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Systems Engineering for Carbon Capture and Storage
Zhang, Tiantian
Carbon Capture and Storage (CCS) is a crucial technology in the mission to achieve net-zero carbon emissions by midcentury. By capturing and storing CO2 from large industrial sources and power plants, CCS mitigates the impact of existing industrial activities while maintaining energy security and economic stability. The study underscores the necessity of a systematic approach to CCS system design and development to meet stakeholder requirements. It highlights the versatility of CCS in addressing emissions across various sectors, its ability to be retrofitted to existing infrastructure, and its potential for immediate emissions reduction compared to the longer timelines required for integrating renewable energy sources.&#13;
This study analyzes CCS systems holistically, identifying primary components and alternative options for capture, transport, storage, and utilization. It reveals that the transport type significantly impacts system utility, with pipelines being the most effective. The analysis also indicates that CCS systems capturing CO2 from power plants, ammonia, and chemical production facilities and utilizing onshore pipelines and saline aquifers offer high utility and low cost. The Gulf Coast and Permian &amp; Midcontinent regions show better performance due to existing infrastructure and storage capacity. The study emphasizes the benefits of staged CCS development for broader deployment, technology maturation, and cost recovery. Sensitivity analyses suggest that future technology advances could further improve CCS system performance and economic viability.
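The utility comparison described above can be pictured as a weighted-sum tradespace score; the attributes, weights, and scores in this Python sketch are invented for illustration, not the study's calibrated values:
weights = {"capture_cost": 0.35, "transport": 0.25, "storage": 0.25, "readiness": 0.15}
architectures = {
    "power plant, onshore pipeline, saline aquifer": {"capture_cost": 0.6, "transport": 0.9, "storage": 0.9, "readiness": 0.8},
    "cement plant, trucking, depleted field": {"capture_cost": 0.4, "transport": 0.3, "storage": 0.6, "readiness": 0.7},
}
for name, scores in architectures.items():
    utility = sum(weights[a] * scores[a] for a in weights)   # weighted-sum utility
    print(f"{name}: {utility:.2f}")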
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Redefining Urban Landscapes: A Methodological Approach to Transforming Underused Parking Spaces with Dynamic Urban Functions</title>
<link href="https://hdl.handle.net/1721.1/157139" rel="alternate"/>
<author>
<name>Fan, Jie</name>
</author>
<id>https://hdl.handle.net/1721.1/157139</id>
<updated>2024-10-10T03:41:52Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Redefining Urban Landscapes: A Methodological Approach to Transforming Underused Parking Spaces with Dynamic Urban Functions
Fan, Jie
This study presents an approach to identifying underutilized urban spaces, focusing on parking areas, and explores potential reutilization strategies in Greater Boston. In the milieu of the information age, global urbanization, and technological development, the abundance of urban data offers a new way to ground urban proposals. The city, as a multifaceted artifact, is examined through the lens of advanced data-driven techniques, particularly deep learning. Using a computer vision model, underused surface parking lots are automatically detected from historical satellite imagery, highlighting a misalignment between the current infrastructure and actual urban needs. The study then draws on a range of urban factors to analyze parking patterns. In connection with the multimodal transportation system, redundant surface parking presents opportunities for reuse. Considering high rents and the housing situation, these spaces could be transformed into housing units or even mixed-use districts to alleviate the housing crisis in Greater Boston.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Capture to Storage: Understanding the Viability and Challenges of Carbon Capture and Sequestration Initiatives</title>
<link href="https://hdl.handle.net/1721.1/157138" rel="alternate"/>
<author>
<name>James, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/157138</id>
<updated>2024-10-10T04:09:11Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">From Capture to Storage: Understanding the Viability&#13;
and Challenges of Carbon Capture and Sequestration&#13;
Initiatives
James, Lauren
This thesis explores the implementation of Carbon Capture and Sequestration (CCS) technologies, focusing on the stages of capture, transportation, and sequestration. Utilizing a system dynamics model, the research evaluates CCS's effectiveness and economic viability across various scenarios, including those outlined by the International Energy Agency (IEA). The baseline model suggests that even under favorable assumptions, CCS permanently sequesters only a small fraction of total global emissions.&#13;
&#13;
The economic analysis reveals a slight decrease in total costs, attributed to the learning curve, though this is offset by increasing costs as more complex projects are undertaken. The model also highlights the energy penalty associated with the high energy requirements of capture. Additionally, the alignment of capacities across capture, transportation, and sequestration phases is important because discrepancies can lead to inefficiencies and bottlenecks.&#13;
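A minimal stock-and-flow sketch of this kind of dynamics, pairing a capacity stock with an experience-curve cost; every parameter below is an illustrative assumption, not the calibrated model:
import numpy as np
dt, T = 0.25, 30.0                  # timestep and horizon, years
capacity = 0.04                     # GtCO2/yr captured today (assumed)
build_rate = 0.01                   # new capacity added per year (assumed)
cum = 0.0                           # stock: cumulative CO2 sequestered
for _ in range(int(T / dt)):
    cum += capacity * dt            # sequestration flow accumulates in the stock
    capacity += build_rate * dt     # construction flow grows capacity
c0, learning = 80.0, 0.9            # $/tCO2 and 10% drop per experience doubling (assumed)
cost = c0 * (max(cum, 0.04) / 0.04) ** np.log2(learning)
print(f"cumulative ≈ {cum:.1f} GtCO2, unit cost ≈ ${cost:.0f}/tCO2")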
&#13;
This research acknowledges limitations, including the use of aggregated data and assumptions across many parameters. These limitations emphasize the need for further research to refine these estimates and enhance the model's accuracy. Despite these challenges, the model serves as a beneficial tool for testing policy interventions and assessing the potential of CCS as a component of global climate strategy.&#13;
&#13;
Overall, the findings highlight the complexities and challenges of deploying CCS technologies at scale, emphasizing the importance of coordinated policy, technological innovation, and infrastructure development. This research provides a foundation for future studies and policy discussions to better understand CCS's role in achieving climate goals.&#13;
&#13;
Disclosure: The following content is the author’s, and responsibility is taken for all content. Noting this, it was generated by the author with the assistance of an AI-based system to augment the effort.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technoeconomic Analysis of Geothermal District Heating in the Boston, MA area.</title>
<link href="https://hdl.handle.net/1721.1/157136" rel="alternate"/>
<author>
<name>Estep, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/157136</id>
<updated>2024-10-10T03:27:22Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Technoeconomic Analysis of Geothermal District Heating&#13;
in the Boston, MA area.
Estep, Joseph
This study conducts a comprehensive technoeconomic analysis of geothermal district heating (GDH) in the Boston, MA area, with a specific focus on the MIT campus. The research begins by reviewing the evolution of district energy systems, highlighting various use cases, technologies, and policy developments. It then defines the system problem and establishes a framework for implementing a geothermal district heating system at MIT. The analysis examines the economic viability and decarbonization potential of the GDH system, identifying various system architectures and phased campus sector implementation scenarios. These scenarios are compared to a 'business as usual' reference case. The study reveals that the recommended implementation scenario, MG-E-N-W, not only offers the lowest cost but also achieves the lowest emissions. Over a 30-year period, this scenario delivers savings of more than $700 million in net present value (NPV) and more than 2 million MTCO2e in emissions compared to the reference case, making it the most economically and environmentally favorable option for MIT's campus energy system transformation.
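For scale, a 30-year NPV of this order falls out of a simple discounted-savings calculation; the annual saving and discount rate below are assumptions for illustration, not the scenario's actual cash flows:
rate, years = 0.05, 30
annual_saving = 45e6                 # $/yr vs. business-as-usual (assumed)
npv = sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))
print(f"NPV of savings ≈ ${npv / 1e6:.0f} million")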
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Location, Location, Substation? How Battery Energy Storage Systems (BESS) Can Create Value in Unexpected Places</title>
<link href="https://hdl.handle.net/1721.1/157127" rel="alternate"/>
<author>
<name>Schutt, Neal</name>
</author>
<id>https://hdl.handle.net/1721.1/157127</id>
<updated>2024-10-03T03:53:09Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Location, Location, Substation? How Battery Energy Storage Systems (BESS) Can Create Value in Unexpected Places
Schutt, Neal
The transition to renewable energy is a critical step in reducing global carbon emissions, yet it introduces new challenges for the aging electrical grid, particularly in urban areas. Battery Energy Storage Systems (BESS) are emerging as key infrastructure in this transition, capable of enhancing grid resiliency, managing peak loads, and facilitating the integration of renewable energy sources. Federal and state incentives and a recent sharp decline in the cost of battery cells have made BESS development economically viable. This thesis explores the potential of BESS to create public and economic value in underutilized urban spaces through the exploration of a hypothetical redevelopment proposal for the Alewife MBTA Complex in Cambridge, Massachusetts.&#13;
&#13;
The Alewife MBTA Complex presents significant challenges for redevelopment due to the high cost of demolishing the decaying existing structure. However, its proximity to a major substation and the increasing local demand for electricity make it an ideal candidate for a BESS project. This thesis demonstrates how integrating energy storage into the redevelopment of the site can enable an otherwise financially infeasible project.&#13;
&#13;
The paper provides an overview of the BESS development process, detailing each phase from creating a business strategy to disposition. It offers insights into the common challenges encountered, and how these might be navigated to optimize project outcomes. By breaking down the development timeline and key decision points, this thesis serves as a practical guide for real estate professionals to gain familiarity with Battery Energy Storage Systems. &#13;
&#13;
Through detailed financial modeling and analysis, including sensitivity testing, this research quantifies the expected financial performance of a BESS project at the Alewife site. The study concludes that BESS can unlock ‘found value’ in sites with little other economic potential. The findings suggest that incorporating BESS into real estate development projects can provide substantial public benefits, including enhanced grid resilience, lower energy costs, and increased property values, making it a strategic tool for urban planners and developers.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying the Severity of a Cybersecurity Incident for Incident Reporting</title>
<link href="https://hdl.handle.net/1721.1/157124" rel="alternate"/>
<author>
<name>Conard, Chelsea Foushee</name>
</author>
<id>https://hdl.handle.net/1721.1/157124</id>
<updated>2024-10-03T03:47:03Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Quantifying the Severity of a Cybersecurity Incident for Incident Reporting
Conard, Chelsea Foushee
In the field of cybersecurity, the lack of standardized data collection and incident reporting methods poses significant challenges for addressing and responding to incidents affecting critical infrastructure. Various initiatives aim to resolve this issue by mandating the collection of data on cyber incidents; however, there is often a lack of clear guidelines on how the collected data will be utilized effectively.&#13;
This paper introduces the Cyber Incident Severity Scale (CISS), a framework designed to guide the selection of relevant data for analysis and to communicate the severity of a cybersecurity incident. By drawing insights from established scales in other fields, such as natural disasters and public health, this research produces a single score for a reporting entity which can be aggregated to determine the overall severity of an incident. The ability to swiftly assess and score an incident is a critical tool to quantify incident severity and prioritize response, support policy development, and bolster the overall security of critical infrastructure.
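A generic weighted-scoring sketch of the aggregation idea; the CISS defines its own factors, weights, and aggregation rule, so everything below is a hypothetical stand-in:
WEIGHTS = {"scope": 0.3, "impact": 0.4, "recoverability": 0.3}   # hypothetical factors
def entity_score(scores):
    return sum(WEIGHTS[k] * v for k, v in scores.items())        # sub-scores on 0-10
reports = [
    {"scope": 6, "impact": 8, "recoverability": 5},   # reporting entity A
    {"scope": 3, "impact": 4, "recoverability": 2},   # reporting entity B
]
per_entity = [entity_score(r) for r in reports]
print(per_entity, max(per_entity))   # one possible incident-level aggregation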
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond the Ovaries: Renaming a common yet neglected hormonal condition could be the key to unlocking better care for patients</title>
<link href="https://hdl.handle.net/1721.1/157123" rel="alternate"/>
<author>
<name>Stewart, Lily</name>
</author>
<id>https://hdl.handle.net/1721.1/157123</id>
<updated>2024-11-14T17:04:22Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Beyond the Ovaries: Renaming a common yet neglected hormonal condition could be the key to unlocking better care for patients
Stewart, Lily
PCOS is a common hormonal condition found in 10 to 19 percent of people with ovaries. It frequently causes irregular periods and ovulation and is one of the most common forms of female infertility. However, the effects do not stop there. People with PCOS are at higher risk for a slew of health complications: insulin resistance, sleep apnea, depression, and anxiety. They are also more likely to develop metabolic syndrome—a combination of high cholesterol, high blood pressure, diabetes, and high waist-to-hip ratios. Together, many of these symptoms are risk factors for fatty liver disease or heart attacks and strokes. &#13;
&#13;
Despite the commonness and potential seriousness of the condition, many patients go undiagnosed, and those with diagnoses frequently go under-treated. The reasons for this are many. PCOS’s cause is unknown. It has no known cure. It looks different from patient to patient. Its research is underfunded. Physicians do not learn much about it in medical school. &#13;
&#13;
But one reason at the root of it all, some experts say, is how tightly this condition has been intertwined with reproduction and fertility. Over the past decade, researchers and physicians who specialize in the condition have been pushing for everyone to recognize PCOS for what it is: a full-body endocrine syndrome with wide-reaching effects on health and quality of life. And one way to combat these is to change something fundamental about the condition: its name.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trouble on the Range: When Does a National Park Become a Bison Zoo?</title>
<link href="https://hdl.handle.net/1721.1/157119" rel="alternate"/>
<author>
<name>Hartley, Sophia</name>
</author>
<id>https://hdl.handle.net/1721.1/157119</id>
<updated>2024-11-14T16:59:43Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Trouble on the Range: When Does a National Park Become a Bison Zoo?
Hartley, Sophia
Yellowstone National Park is often credited for bringing American bison back from the brink of extinction. In 1902, there were merely 25 individual bison in the park, but now, Yellowstone’s herd fluctuates between 3,000 and 5,500 animals. Over the past century, the national park’s conservation effort pushed bison into the public spotlight. The animal has become a symbol of the great American west, and recently, bison were named the US national mammal.&#13;
&#13;
Many of Yellowstone National Park’s bison reside in the park’s northern range, a 380,000-acre network of valleys, mountains, and river basins. One of these valleys, Lamar, is a hotspot for bison viewing, but, unbeknownst to many casual tourists, the area has also long been the center of an intense scientific debate. &#13;
&#13;
Before thousands of bison covered the floor of Lamar Valley, a different hooved mammal stood in their place. Over the 19th and 20th centuries, hunting pressure, federal policy, and unnatural predator-prey relationships made Yellowstone’s northern range a haven for elk herds. As they proliferated in peace, elk chewed through the northern range’s preexisting ecosystems. Their appetites took a severe toll on native flora, which in turn, shrank habitats for other wildlife. Debates about park management and range science broke out between independent scientists and Yellowstone officials. The disagreements lasted for decades. But in the late 1990s, a whirlwind of decisions reduced (and maintained) elk herds to a more manageable level. Scientists thought that finally, the northern range’s native flora and fauna might have a chance to recover. &#13;
&#13;
For many years, it seemed like an ecological revival was beginning. But not everywhere. Regrowth in regions of the northern range where bison heavily grazed was lagging behind. A growing body of research suggests that bison are having a similar adverse effect on Yellowstone’s ecosystems as the historic overabundance of elk. In Lamar Valley, many riverbanks are still devoid of trees, beavers are few and far between, and non-native species are increasingly prevalent. &#13;
&#13;
Yellowstone officials disagree with this consensus. Instead, they point to research showing how bison positively impact the landscape. In 2023, the park released a bison management proposal that has only intensified the debate. The proposal dismissed a large body of research as insignificant, going on to suggest an increase to the size of the park’s bison herd. In addition to concern about ecological degradation, many independent researchers are perplexed as to why Yellowstone — the world’s first national park — is seemingly intent on diminishing or ignoring the significance of legitimate scientific research.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Astrophysical Simulations with GPUs: A Case Study of Radiative Transfer in arepo-rt</title>
<link href="https://hdl.handle.net/1721.1/157117" rel="alternate"/>
<author>
<name>Verbeek, Erkin Emiel</name>
</author>
<id>https://hdl.handle.net/1721.1/157117</id>
<updated>2024-10-03T03:02:52Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Accelerating Astrophysical Simulations with GPUs: A Case Study of Radiative Transfer in arepo-rt
Verbeek, Erkin Emiel
Radiative transfer (RT) is a crucial ingredient for self-consistent modelling of numerous astrophysical phenomena across cosmic history. However, on-the-fly integration into radiation-hydrodynamics (RHD) simulations is computationally demanding, particularly due to the stringent time-stepping conditions and increased dimensionality inherent in multifrequency collisionless Boltzmann physics. The recent emergence of exascale supercomputers, equipped with vast numbers of CPU cores and GPU accelerators, offers new opportunities for enhancing RHD simulations. We present the first steps towards optimizing the RHD solver AREPO-RT for such high-performance computing environments. We implement a novel node-to-node communication strategy that utilizes shared memory to substitute intranode communication with direct memory access. Furthermore, combining multiple internode messages into a single message substantially enhances network bandwidth utilization and performance for large-scale simulations on modern supercomputers. The single-message node-to-node approach also improves performance on smaller-scale machines with less optimized networks. Additionally, by transitioning all RT-related calculations to GPUs, we achieve a significant computational speedup of around 15x for standard benchmarks compared to the original CPU implementation. As a case study, we perform cosmological RHD simulations of the Epoch of Reionization, employing a similar setup as the THESAN project. In this context, RT becomes sub-dominant such that even without modifying the core AREPO codebase, there is an overall threefold improvement in efficiency. The advancements presented here have broad implications, potentially transforming the complexity and scalability of future simulations for a wide variety of astrophysical studies. This work serves as a blueprint for porting similar simulation codes based on unstructured resolution elements to GPU-centric architectures.
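The quoted figures are mutually consistent under Amdahl's law: if RT occupies a fraction f of the runtime and is sped up by a factor s, the overall speedup is 1/((1-f) + f/s). The 70% share below is an assumption chosen to illustrate how a 15x kernel speedup yields roughly the stated threefold gain:
def overall_speedup(f, s):
    # only the fraction f of the runtime benefits from the factor-s speedup
    return 1.0 / ((1.0 - f) + f / s)
print(f"{overall_speedup(0.70, 15.0):.1f}x")   # ≈ 2.9x overall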
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Kent Kiehl’s Search for the Criminal Brain America’s self-proclaimed “psychopath whisperer” says he can predict criminality in incarcerated people. Is the legal system buying it?</title>
<link href="https://hdl.handle.net/1721.1/157116" rel="alternate"/>
<author>
<name>Hopkins, Sarah Rebecca</name>
</author>
<id>https://hdl.handle.net/1721.1/157116</id>
<updated>2024-11-14T17:02:55Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Kent Kiehl’s Search for the Criminal Brain America’s self-proclaimed “psychopath whisperer” says he can predict criminality in incarcerated people. Is the legal system buying it?
Hopkins, Sarah Rebecca
Since the 19th century, researchers have attempted to uncover the biological roots of criminality. The process has been both scientifically dubious and ethically fraught. While biological theories of criminal behavior faded after World War II, they arose again in the 1990s and early 2000s, when new brain imaging techniques collided with a growing interest in understanding how biological drivers of crime, if they exist, could be analyzed to understand, and even predict, criminal behavior. This thesis examines the research and claims of a prominent neuropsychologist within that historical context. He claims to have conducted promising brain research on incarcerated people that could uncover biological markers of criminal behavior, or even predict future criminality. Yet methodological and ethical questions have been raised about his research. Is it scientifically valid to have a brain-based view of criminal behavior? Is it ethically valid to assume that criminal behavior can be decoded from the brains of people incarcerated in a system that disproportionately impacts people of color and those from low socio-economic backgrounds? His critics are doubtful.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nipah: The history, and future, of one of the world’s most lethal viruses</title>
<link href="https://hdl.handle.net/1721.1/157115" rel="alternate"/>
<author>
<name>Viveros, Alex Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/157115</id>
<updated>2024-11-14T17:02:05Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Nipah: The history, and future, of one of the world’s most lethal viruses
Viveros, Alex Gabriel
The Nipah virus kills around three quarters of people who contract it, making it one of the most lethal viruses known to infect humans. The virus first emerged in 1998, when hundreds of pig farmers in Malaysia fell ill with fevers and encephalitis, or brain inflammation. Nipah has caused smaller outbreaks in nearby Bangladesh nearly every year since then. The Malaysian farmers appeared to have been infected directly from their pigs, rather than from each other. For a time, there was no clear evidence that Nipah could spread from humans to other humans. That changed in April of 2004, when investigators responding to a Nipah outbreak in a remote district in Bangladesh discovered that the virus was spreading person to person. Pteropus fruit bats, which are native to South Asia, were identified as the natural reservoirs of the Nipah virus. Researchers have spent the last two decades studying the virus’ transmission in bats and how the virus spills over into humans. Institutions across the world have even recently started developing Nipah vaccines. Scientists believe the Nipah strains that currently circulate in humans are likely not transmissible enough to ignite a pandemic in people. That could change. Whether the virus one day evolves to spread better within humans, or hits a particularly susceptible place and thrives, officials worry about what could happen if Nipah ever affects larger populations. The Nipah virus is just one of many zoonotic pathogens that scientists are studying to understand how humanity can prepare for future deadly pathogens.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Contours of the Cloud: Dissecting the Real Estate Investment Decisions of Data Center Operators</title>
<link href="https://hdl.handle.net/1721.1/157114" rel="alternate"/>
<author>
<name>Fawcett, Robert Logan</name>
</author>
<id>https://hdl.handle.net/1721.1/157114</id>
<updated>2024-10-03T03:22:56Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Contours of the Cloud: Dissecting the Real Estate Investment Decisions of Data Center Operators
Fawcett, Robert Logan
This thesis investigates the real estate investment decisions of data center operators, with a focus on how key infrastructure characteristics influence data center development. Using a sequential econometric approach, the research applies both a logit and a hedonic model to evaluate the importance of various factors. The logit model explores the likelihood of data center development at the county level, highlighting geographical characteristics. The hedonic model examines the impact of specific site attributes, such as proximity to power infrastructure and fiber, on the scale of data center facilities in megawatts. The findings suggest that colocation data centers prioritize connectivity, electrical infrastructure, and urban proximity, while the location of hyperscale facilities is more variable and less predictable. This study enhances our understanding of how modern technological demands, particularly in the AI era, shape real estate strategies and offers insights into future trends in digital infrastructure investments.
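A minimal sketch of the logit stage on synthetic county data (covariates, coefficients, and sample size are invented; the thesis's estimates are its own):
import numpy as np
import statsmodels.api as sm
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.normal(size=n),    # proximity to power infrastructure (standardized)
    rng.normal(size=n),    # fiber density (standardized)
])
p = 1 / (1 + np.exp(-(-1.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1])))
y = rng.binomial(1, p)     # 1 = county saw data center development
fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(fit.params)          # recovers the data-generating coefficients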
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesis of Isotopically Labeled Fe and S-alkylated Iron-Sulfur Clusters</title>
<link href="https://hdl.handle.net/1721.1/157112" rel="alternate"/>
<author>
<name>Linn, Brittany</name>
</author>
<id>https://hdl.handle.net/1721.1/157112</id>
<updated>2024-10-03T03:04:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Synthesis of Isotopically Labeled Fe and S-alkylated Iron-Sulfur Clusters
Linn, Brittany
Radical S-adenosylmethionine (SAM) enzymes (RS enzymes) use a 3:1 site-differentiated [Fe₄S₄]⁺ cluster to reductively cleave the SAM cofactor and generate a 5’-deoxyadenosyl radical intermediate (5’-dAdo•) that regio- and stereospecifically abstracts an H-atom from the target substrate. It has been proposed that 5’-dAdo• binds to the unique Fe site before abstracting an H-atom from the substrate. However, due to the transient nature of captured reaction intermediates, their precise structures have yet to be fully elucidated and, therefore, their role in the mechanism of RS enzymes remains unclear. Our group has established reliable methods of synthesizing alkylated [Fe₄S₄] clusters that can serve as models of organometallic intermediates in RS enzyme catalysis. These clusters are competent for radical release and, upon oxidation, undergo an alkyl migration process to yield S-alkylated clusters. A cluster species containing a unique alkylated Fe site with a coordination number greater than four is likely generated in these processes, although a stable cluster of this type has yet to be isolated and crystallographically characterized. This work reports the synthesis of α-²H and α-¹³C isotopically labeled Fe- and S-ethyl ligated [Fe₄S₄] clusters to determine their electron-nuclear hyperfine parameters by ENDOR spectroscopy. These parameters will aid in the identification of alkylated [Fe₄S₄] cluster intermediates generated in biological studies. Additionally, in an attempt to synthesize an [Fe₄S₄]³⁺ cluster with a five-coordinate, Fe-alkylated site, a series of benzyl and phenyl ligated clusters were prepared and analyzed by NMR and EPR spectroscopies.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Drivers of Deforestation using Games on Spatial Networks</title>
<link href="https://hdl.handle.net/1721.1/157109" rel="alternate"/>
<author>
<name>Seby, Jean-Baptiste</name>
</author>
<id>https://hdl.handle.net/1721.1/157109</id>
<updated>2024-10-03T03:01:59Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Understanding Drivers of Deforestation using Games on Spatial Networks
Seby, Jean-Baptiste
As the impacts of climate change become more extensive and intense, effective mitigation and adaptation actions become urgent. Since deforestation is a key driver of CO₂ emissions and forests constitute a crucial carbon sink, mitigating deforestation is an essential policy lever for governments. However, much of tropical deforestation results from actions of private entities that use the cleared land for activities such as palm oil tree cultivation, timber plantation, and agriculture. Often, the incentives to engage in (often illegal) deforestation within a forest concession are coupled with these activities and are also shaped by the activities in neighboring concessions. In this thesis, we focus on the problem of modeling these strategic interactions using game theory. We analyze a class of games in which agents engage in coupled activities over a spatial network and study a policy intervention to limit illegal deforestation.&#13;
&#13;
Firstly, we conduct equilibrium analysis of a game in which each agent decides the production levels of her coupled activities in the presence of network effects. Practically, these network effects are induced by spatial arrangements of concessions and their ownership structures. We consider the general case where network effects are heterogeneous, i.e., network effects influencing palm oil tree cultivation and timber logging are described by different graphs. We provide a sufficient condition for existence and uniqueness of the Nash equilibrium. This result follows by leveraging a potential function of the game or via a general variational inequality. &#13;
&#13;
Secondly, we analyze how the spatial structure of concessions impacts the equilibrium outcome. In addition to the basic game in which each agent simultaneously engages in two activities, we consider a variation in which agents engage in one of the activities (but not both). We show that in both cases the equilibrium structure can be expressed as a linear combination of weighted Bonacich centrality vectors -- a node-centrality measure that depends on the total number of walks that depart from a node (concession). Our analysis provides new insights into the drivers of illegal logging in forest regions where palm oil cultivation and timber logging are coupled.&#13;
&#13;
Thirdly, we evaluate the impact of an “edge removal” intervention policy in which the boundary between two neighboring concessions is monitored or a buffer is created between them. We characterize the policy of a social planner who is interested in maximally reducing the illegal production of timber. Interestingly, we identify a regime shift (or phase transition) as the local network effect and level of coupling between activities vary. This result identifies conditions under which the social planner should incentivize specialization (enforce cultivation of palm oil trees or timber only) versus diversification (allow cultivation of both palm oil trees and timber).
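For concreteness, the Bonacich centrality mentioned above solves b = (I - alpha*A)^(-1) A 1; a toy computation on an arbitrary 4-concession graph (alpha must stay below one over the spectral radius of A):
import numpy as np
A = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])    # adjacency of a toy concession graph
alpha = 0.2                         # decay factor, assumed
n = A.shape[0]
b = np.linalg.solve(np.eye(n) - alpha * A, A @ np.ones(n))
print(b)   # equilibrium actions are linear combinations of such vectors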
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Do High Street Retail Rents Align with the Economy? An Analysis of Retail Real Estate Pricing Dynamics Based on Macroeconomic Trends</title>
<link href="https://hdl.handle.net/1721.1/157106" rel="alternate"/>
<author>
<name>Xu, Yujian</name>
</author>
<id>https://hdl.handle.net/1721.1/157106</id>
<updated>2024-10-03T03:19:23Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Do High Street Retail Rents Align with the Economy? An Analysis of Retail Real Estate Pricing Dynamics Based on Macroeconomic Trends
Xu, Yujian
This study closely examines the correlation between high street retail rents and key economic indicators, specifically Consumer Price Index (CPI) and Gross Domestic Product (GDP). Utilizing data on rent levels from prominent high streets globally, the analysis incorporates these macroeconomic indicators to discern patterns and relationships. Through methodologies such as multiple linear regression and Error Correction Model (ECM), the paper aims not only to analyze how high street retail rents align with CPI and GDP but also to explore the primary factors influencing these rents. In studying high street retail properties or considering the acquisition of such properties, this methodology can be used to determine whether a high street is susceptible to macroeconomic fluctuations. If not, it may be necessary to consider the uniqueness of the area or potential risks involved.
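A two-step Engle-Granger sketch of the ECM on synthetic cointegrated series (the data-generating process here is invented; the paper's estimates use actual rent, CPI, and GDP data):
import numpy as np
import statsmodels.api as sm
rng = np.random.default_rng(2)
T = 200
gdp = np.cumsum(rng.normal(size=T))                # an I(1) driver series
rent = 2.0 * gdp + rng.normal(scale=0.5, size=T)   # long-run relation plus noise
lr = sm.OLS(rent, sm.add_constant(gdp)).fit()      # step 1: long-run regression
ect = lr.resid                                     # error-correction term
d_rent, d_gdp = np.diff(rent), np.diff(gdp)
ecm = sm.OLS(d_rent, sm.add_constant(np.column_stack([d_gdp, ect[:-1]]))).fit()
print(ecm.params)   # a negative ECT coefficient signals adjustment to the long run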
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effect of Social Information on Reliance and Efficacy in AI-assisted Prediction</title>
<link href="https://hdl.handle.net/1721.1/157105" rel="alternate"/>
<author>
<name>Alsobay, Mohammed</name>
</author>
<id>https://hdl.handle.net/1721.1/157105</id>
<updated>2024-10-03T03:44:54Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Effect of Social Information on Reliance and Efficacy&#13;
in AI-assisted Prediction
Alsobay, Mohammed
This work addresses an under-explored aspect of people's utilization of algorithmic decision support systems: How do people perceive and use these systems under social influence? Through a pre-registered randomized human-subject experiment, I study the effect of two forms of social information (direct conversations and summarized peer decisions) on users' reliance and effectiveness in leveraging algorithmic advice across a series of decision-making tasks, and how the availability of local model explanations and performance feedback moderates this effect. I find that, on average, neither form of social information affects trust directly, yet they both moderate the extent to which feedback and model explanations influence trust in the algorithm. However, while social information can influence trust in the algorithm, I detect no effect on how effectively people utilize algorithmic advice. By describing this interplay between social information, algorithmic transparency, and user behavior, this work contributes to recent research on collective intelligence and sociotechnical approaches to human-AI interaction.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Curve of Inflation Expectations and Firms’ Investments</title>
<link href="https://hdl.handle.net/1721.1/157104" rel="alternate"/>
<author>
<name>Perinelli, Giuditta</name>
</author>
<id>https://hdl.handle.net/1721.1/157104</id>
<updated>2024-10-03T03:35:55Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Curve of Inflation Expectations and Firms’ Investments
Perinelli, Giuditta
Using rich survey data on Italian firms, this paper studies the formation mechanisms of inflation expectations at different forecasting horizons. Starting from empirical evidence embedded in firms’ inflation expectation curve, we obtain three main findings: (1) firms extrapolate for long forecasting horizons, (2) inflation forecasts overreact (underreact) at long (short) forecasting horizons, (3) long-term inflation expectations impact investment decisions. Specifically, we find that a 1% wedge between the 4-year and 1-year ahead expected inflation is associated with a 0.8% increase in the probability of investing. What motivates this result? After ruling out alternative channels of (1) an increase in expected demand, (2) a decrease in supply of input goods, and (3) an improvement in financing conditions, we claim that a decrease in the perceived cost of capital is the main driver.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Seoul Apartment Prices during Population Decline Era</title>
<link href="https://hdl.handle.net/1721.1/157103" rel="alternate"/>
<author>
<name>Cho, Moohyun</name>
</author>
<id>https://hdl.handle.net/1721.1/157103</id>
<updated>2024-10-03T03:50:21Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Analysis of Seoul Apartment Prices during Population Decline Era
Cho, Moohyun
Since the early 2020s, South Korea has faced population decline due to the world's lowest birth rate, yet apartment prices in the capital region, covering Seoul, the capital city of South Korea, and Gyeonggi-do, have ironically shown a consistent upward trend. This thesis explores the persistent rise in apartment prices despite the diminishing population of Seoul, providing insights into the economic and social factors driving this trend. Through an analysis of the characteristics of Seoul apartments, including the unique Jeonse system, and the impacts of population trends by region, this research demonstrates the broader implications of single-person household trends and an aging population. Furthermore, comparative case studies from Japan and France support the relationship between aging populations and housing markets. By applying various indices related to apartment prices, this study demonstrates the correlations between apartment prices and demographic changes, and explores potential future scenarios for the housing market in Seoul.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>No One Wants To Be A Parasitologist: The Shrinking Field of America's Least Favorite Animals</title>
<link href="https://hdl.handle.net/1721.1/157101" rel="alternate"/>
<author>
<name>Richter, Hannah</name>
</author>
<id>https://hdl.handle.net/1721.1/157101</id>
<updated>2024-11-14T18:24:40Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">No One Wants To Be A Parasitologist: The Shrinking Field of America's Least Favorite Animals
Richter, Hannah
Parasites have a bad rap. Most people think of them as scary, gross, or both, but they are also diverse creatures that have evolved in and on every animal and ecosystem on the planet. Parasitism is the most successful way of life for an animal — representing more than 40% of all species — and the wormy and crawly creatures it encompasses are vastly understudied. An increasing volume of research shows that parasites play important ecological functions, from keeping animal populations in check to stabilizing food chains to driving evolution and biodiversity. While parasites can cause horrible human suffering, especially in countries without reliable clean water or sanitation systems, only a fraction of parasites affect humans, with estimates as low as 0.1%. &#13;
&#13;
As climate change and habitat loss threaten animals, so too do they endanger the parasites that live on and inside them. At the same time parasite biodiversity faces shrinkage, the field of parasitology reckons with its own crisis: membership in the American Society of Parasitologists has declined by 76% in the past 50 years, and many of the world’s most important parasitologists are elderly or dead. To revitalize the field, parasitologists are charming younger generations with parasite Pokémon cards and stuffed animals and attempting to integrate parasites into global conservation programs. One main question is on parasitologists’ minds: How can they convince people to discover, catalog, and understand the world's parasite biodiversity before parasites, the field’s leaders, and their valuable knowledge die off?
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Looking at the Map, Together: Modeling Treatment Center Location Selection and its Effects on Access to Gene Therapy in Brazil</title>
<link href="https://hdl.handle.net/1721.1/157099" rel="alternate"/>
<author>
<name>Wertheimer, Sarah R.</name>
</author>
<id>https://hdl.handle.net/1721.1/157099</id>
<updated>2024-10-03T03:34:25Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Looking at the Map, Together: Modeling Treatment Center Location Selection and its Effects on Access to Gene Therapy in Brazil
Wertheimer, Sarah R.
Choosing how many and which treatment centers will offer a gene therapy to patients is a crucial decision that impacts how far the treatment has to be transported and how far patients have to travel to receive treatment. Many gene therapies are for patients with severe diseases that make it difficult to travel. On the other hand, cold chain requirements make shorter transportation preferable for gene therapies, and few centers have prior experience handling them. Using multi-criteria optimization modeling paired with local input, this thesis explores different approaches to the gene therapy treatment center location selection decision and how these approaches would affect patients’ geographic accessibility to treatment.&#13;
We focus on Brazil and a specific gene therapy product as our case study. We interview local pharmaceutical company employees to understand the stakeholders involved in this decision and the approaches being considered. We model how these approaches would affect patients’ geographic accessibility to treatment and discuss potential modifications to our model. Finally, by means of an interactive workshop, we explore the decision-making discussion between stakeholders in choosing which approach to follow.&#13;
We find that the approaches under consideration result in a wide range of geographic accessibility for patients. Early-stage decisions have impacts across stages, and even across therapies, due to a reluctance to select new locations. Patients in the northwest of Brazil would need stakeholders to consider candidate locations beyond government reference centers or those with gene therapy experience in order to have a treatment center nearby. Regarding facilitation, we find that quick, low-stakes modeling and joint discussion could allow stakeholders to consider approaches they might not otherwise consider.
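To make the location trade-off concrete, a greedy p-median sketch choosing centers to minimize total patient travel; distances, counts, and p are invented, and the thesis's multi-criteria model weighs far more than travel distance:
import numpy as np
rng = np.random.default_rng(3)
dist = rng.uniform(50, 2000, size=(40, 8))   # patients x candidate centers, km
p, chosen = 3, []
for _ in range(p):
    costs = [dist[:, chosen + [c]].min(axis=1).sum() if c not in chosen else np.inf
             for c in range(dist.shape[1])]
    chosen.append(int(np.argmin(costs)))     # add the center that helps most
print(chosen, round(dist[:, chosen].min(axis=1).sum()))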
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Wildfire Suppression: A branch-and-price-and-cut approach</title>
<link href="https://hdl.handle.net/1721.1/157098" rel="alternate"/>
<author>
<name>Wachspress, Jacob</name>
</author>
<id>https://hdl.handle.net/1721.1/157098</id>
<updated>2024-10-03T03:25:12Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Optimizing Wildfire Suppression: A branch-and-price-and-cut approach
Wachspress, Jacob
In periods of intense, synchronous wildfire activity, fire system managers must make rapid fire prioritization decisions over a dispersed geographic area with limited suppression resources. This thesis defines the Wildfire Suppression and Crew Assignment Problem, which optimizes resource allocation to triage fires based on damage risk, crew availability, and spatiotemporal dynamics. We formulate a two-sided set partitioning model on time-space-rest networks for crew assignments and time-state networks for fire damage, with linking constraints between both; this representation can encode a broad class of non-linear wildfire spread models and diverse suppression objectives. To solve it, we develop a two-sided column generation algorithm that generates fire suppression plans and crew routes iteratively. We embed it into a branch-and-price-and-cut algorithm to retrieve an optimal integer solution, using novel special-purpose cuts that augment generalized-upper-bound cover cuts and a novel branching rule that leverages dual information from the linking constraints. Extensive computational experiments show that the algorithm scales to practical problems that remain otherwise intractable. The optimization methodology can provide high-quality solutions by jointly optimizing wildfire triaging and crew assignments, resulting in enhanced wildfire suppression effectiveness.
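The column generation loop can be illustrated on the classic cutting-stock problem, a minimal stand-in for the pricing scheme the thesis runs (two-sidedly) over crew and fire networks; all instance data below are textbook-style, not wildfire data:
import numpy as np
from scipy.optimize import linprog
roll, sizes, demand = 100, np.array([45, 36, 31, 14]), np.array([97, 610, 395, 211])
cols = [np.eye(4, dtype=int)[i] * (roll // sizes[i]) for i in range(4)]   # starter columns
while True:
    A = np.array(cols).T
    res = linprog(np.ones(len(cols)), A_ub=-A, b_ub=-demand, method="highs")
    duals = -res.ineqlin.marginals           # prices of the demand constraints
    dp = np.zeros(roll + 1)                  # pricing: unbounded knapsack DP
    pat = np.zeros((roll + 1, 4), dtype=int)
    for c in range(1, roll + 1):
        for i in range(4):
            if c >= sizes[i] and dp[c - sizes[i]] + duals[i] > dp[c]:
                dp[c] = dp[c - sizes[i]] + duals[i]
                pat[c] = pat[c - sizes[i]]
                pat[c, i] += 1
    if 1.0 + 1e-9 >= dp[roll]:               # no column prices out: LP optimal
        break
    cols.append(pat[roll].copy())
print(len(cols), res.fun)                    # columns generated, LP bound on rolls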
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Analysis Using State Space Global Coherence of Brain Dynamics in a Young Child Under Sevoflurane General Anesthesia</title>
<link href="https://hdl.handle.net/1721.1/157097" rel="alternate"/>
<author>
<name>Gallo, Sebastian A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157097</id>
<updated>2024-10-03T03:04:29Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">An Analysis Using State Space Global Coherence of Brain Dynamics in a Young Child Under Sevoflurane General Anesthesia
Gallo, Sebastian A.
The dynamics of brain states under general anesthesia in infants are complex and exhibit significant developmental changes, particularly in the context of neurophysiological responses. Traditional EEG analysis has been valuable in tracking these changes, but there is a critical need for more precise, quantitative methods to assess neural synchrony and coherence in this vulnerable population. This thesis explores advanced state-space modeling techniques, specifically focusing on State Space Global Coherence (SSGC), to estimate global coherence (GC) during sevoflurane general anesthesia in an infant. Two different SSGC approaches were employed: one approach directly estimated GC from the data, while the other first estimated the covariance matrix and then used this matrix to compute GC. The SSGC approaches were first applied to a validation dataset that had been previously analyzed using SSGC for covariance estimation. This was done to ensure that my analysis was functioning correctly by validating it against a dataset with known outcomes before proceeding with exploratory analysis. Once this was confirmed, the next step involved applying this pipeline to EEG data from a 10-month-old infant—a dataset where SSGC had not been previously utilized. Following this, both the validation dataset and the infant dataset were used to compare the effectiveness of SSGC for covariance estimation versus direct GC estimation. The infant dataset, in particular, provided an opportunity to explore the utility of SSGC in a new context. Both datasets that the SSGC methods were applied to had a low signal-to-noise ratio. This revealed that direct GC estimation provided improved temporal resolution for GC and the ability to capture dynamic changes in coherence over time. In contrast, SSGC for covariance estimation produced results nearly identical to empirical GC, suggesting that it is more susceptible to noise. The resilience of direct GC estimation to noisy data highlights its potential as a robust tool for capturing the spatiotemporal dynamics of neural synchrony under anesthesia. This thesis emphasizes the importance of advanced modeling techniques in enhancing neurophysiological monitoring, with significant implications for improving pediatric anesthetic care and outcomes.
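Empirically, global coherence at a frequency is the largest eigenvalue of the cross-spectral matrix divided by its trace; the SSGC methods replace this windowed estimate with a state-space model. A sketch on synthetic multichannel data with one shared oscillatory mode:
import numpy as np
rng = np.random.default_rng(4)
n_ch, n_win = 8, 200
common = rng.standard_normal(n_win) + 1j * rng.standard_normal(n_win)
mixing = rng.standard_normal(n_ch)      # channel weights on the shared mode
noise = rng.standard_normal((n_ch, n_win)) + 1j * rng.standard_normal((n_ch, n_win))
X = np.outer(mixing, common) + 0.7 * noise   # Fourier coefficients per window
S = X @ X.conj().T / n_win                   # cross-spectral matrix estimate
eig = np.linalg.eigvalsh(S)                  # ascending, real for Hermitian S
print(f"global coherence ≈ {float(eig[-1] / eig.sum()):.2f}")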
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>State Estimation in Dynamical Robotic System with Non-Gaussian Noise</title>
<link href="https://hdl.handle.net/1721.1/157096" rel="alternate"/>
<author>
<name>Jin, David</name>
</author>
<id>https://hdl.handle.net/1721.1/157096</id>
<updated>2024-10-03T03:28:25Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">State Estimation in Dynamical Robotic System with Non-Gaussian Noise
Jin, David
State estimation is critical for robot operation. Most estimation algorithms assume that the robotic sensor measurements are contaminated by Gaussian noise. However, in practical applications, the noise is often non-Gaussian, heavy-tailed, or even multi-modal. In this thesis, we develop algorithms that perform state estimation in dynamical systems with arbitrary noise and prove their theoretical guarantees. We tackle two challenging state estimation problems: multi-model point cloud registration and state estimation in polynomial dynamical systems, both contaminated by non-Gaussian noise. In the multi-model 3D registration problem, we are given two point clouds depicting a set of objects at different poses (and possibly including points belonging to the background) and we want to simultaneously reconstruct how all objects moved between the two point clouds. We propose a simple approach based on Expectation-Maximization (EM) and establish theoretical conditions under which the EM approach recovers the ground truth. We evaluate the approach in simulated and real datasets ranging from table-top scenes to self-driving scenarios and demonstrate its effectiveness. For state estimation in polynomial systems corrupted by arbitrary noise, we develop a new filtering approach called the Generalized Moment Kalman Filter (GMKF). The GMKF formulates the prediction and update steps as polynomial optimization problems (POP) and solves them using moment relaxations, carrying over a possibly non-Gaussian belief. In the linear-Gaussian case, GMKF reduces to the standard Kalman Filter. We demonstrate that GMKF performs well under highly non-Gaussian noise and outperforms common alternatives, including the Extended and Unscented Kalman Filter, and their variants on matrix Lie groups. We also showcase applications to challenging landmark-based and lidar-based robot localization problems.
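For reference, the linear-Gaussian special case to which the GMKF is said to reduce is the standard Kalman filter; a minimal constant-velocity sketch with illustrative noise parameters:
import numpy as np
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # transition for (position, velocity)
H = np.array([[1.0, 0.0]])               # observe position only
Q, R = 0.01 * np.eye(2), np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
truth = np.zeros(2)
rng = np.random.default_rng(5)
for _ in range(50):
    truth = F @ truth + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ truth + rng.normal(scale=np.sqrt(R[0, 0]), size=1)
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    x = x + K @ (z - H @ x)                          # update
    P = (np.eye(2) - K @ H) @ P
print(x, truth)   # filtered state tracks the simulated truth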
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Phight for Phage: Understanding Bacteriophage Therapy in Aquaculture and Human Health</title>
<link href="https://hdl.handle.net/1721.1/157095" rel="alternate"/>
<author>
<name>Cornman, Eva</name>
</author>
<id>https://hdl.handle.net/1721.1/157095</id>
<updated>2024-11-14T17:01:20Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">The Phight for Phage: Understanding Bacteriophage Therapy in Aquaculture and Human Health
Cornman, Eva
In the wake of the antibiotic resistance crisis, alternative options to prevent and treat bacterial infections are desperately needed. Researchers across the world are turning to the most abundant biological particle on our planet: bacteriophage. Often called phage, these microscopic viruses infect bacteria, and their high specificity and incredible abundance may make them viable treatment options. Scientists have known about phage for over a century, but renewed interest over the past few decades has spurred a wide variety of research into the biology and applications of these viruses. The benefits, and some of the challenges, of phage therapy for both aquaculture and human health are discussed here.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing a Tiled Singular Value Decomposition: A Framework for Tiled Linear Algebra in Julia</title>
<link href="https://hdl.handle.net/1721.1/157092" rel="alternate"/>
<author>
<name>Ringoot, Evelyne</name>
</author>
<id>https://hdl.handle.net/1721.1/157092</id>
<updated>2024-10-03T03:07:39Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Implementing a Tiled Singular Value Decomposition: A Framework for Tiled Linear Algebra in Julia
Ringoot, Evelyne
High-performance computing (HPC) is essential for scientific research, enabling complex simulations and analyses across various fields. However, the specialized knowledge required to utilize HPC effectively can be a barrier for many scientists. This work introduces a hardware-agnostic, large-scale tiled linear algebra framework in Julia designed to enhance accessibility and usability without compromising performance. By providing a flexible abstraction layer, the framework simplifies the development and testing of new algorithms across diverse computing architectures. The Julia language’s multiple dispatch and type inference facilitate the development of type-agnostic, hardware-agnostic, and multi-use frameworks by allowing composability. Utilizing a tiled approach, the implemented framework improves data locality, parallelism, and scalability, making it well-suited for modern heterogeneous environments. Its practical benefits are demonstrated through the implementation of a tiled QR-based singular value decomposition (SVD), showing how it streamlines the development process and accelerates scientific discovery. The developed framework is used to implement an in-GPU tiled SVD and an out-of-core GPU-accelerated SVD. Furthermore, its extensibility is demonstrated by implementing a tiled QR algorithm. This work aims to democratize HPC resources by bridging the gap between advanced computational capabilities and user accessibility, empowering a broader range of scientists to fully leverage modern computing technologies.
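For a tall matrix, the QR-based SVD idea reduces to computing tile-local QR factors, combining them, and taking a small SVD; a NumPy sketch of that reduction (the thesis's framework does this tiled, hardware-agnostically, and out-of-core in Julia):
import numpy as np
rng = np.random.default_rng(6)
A = rng.standard_normal((4000, 64))
tiles = np.array_split(A, 8, axis=0)                # row tiles, e.g. one per device
Rs = [np.linalg.qr(t, mode="r") for t in tiles]     # local QR factors
R = np.linalg.qr(np.vstack(Rs), mode="r")           # one reduction level
s = np.linalg.svd(R, compute_uv=False)              # singular values of A
print(np.allclose(s, np.linalg.svd(A, compute_uv=False)))   # True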
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Opinion Dynamics to Collective Action: How Identity-Based Tolerance Leads to Political Extremism</title>
<link href="https://hdl.handle.net/1721.1/157090" rel="alternate"/>
<author>
<name>Liang, Chen E.</name>
</author>
<id>https://hdl.handle.net/1721.1/157090</id>
<updated>2024-10-03T03:41:38Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">From Opinion Dynamics to Collective Action: How Identity-Based Tolerance Leads to Political Extremism
Liang, Chen E.
Current sociological theories attribute the recent surge in political extremism to mechanisms of opinion “homophily” (i.e., like-minded individuals interact more while dissimilar ones might distance) and “assimilation” (i.e., interactions homogenize opinions), which collectively suggest a social world dominated by extreme views. Yet, this view contradicts empirical evidence showing that extremists still represent a minority and individual opinions remain largely stable. We resolve this apparent paradox by illustrating how extreme collective action can arise from a moderate majority that retains moderate opinions yet responds positively to recruitment by extremists. We break down this task into three steps. First, we theoretically distinguish between opinion homophily and identity homophily (i.e., individuals who share the same identity interact more). Second, we develop an agent-based model to manipulate the strength of identity homophily relative to opinion homophily, while excluding the effect of assimilation (i.e., holding opinions constant). Our model reveals that strong identity-based tolerance can create a “radicalized” structure, which allows extremists and moderates, who disagree in opinion but share an identity, to maintain stable relationships in emergent clusters. Further, the structure concentrates extremists at the center of the clusters, enabling them to form a critical mass that enlists a broader population. Finally, beyond confirming our expectations, we uncover unexpected model behaviors by exploring how the “radicalized” structure can transition between three other distinct structures the model generates. We show that homogeneous groups, often seen as indicators of polarization, could paradoxically be key to reducing organized extremism when dominated by moderates who can effectively mobilize collective action while marginalizing extremists.
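A toy sketch of the identity-homophily mechanism with opinions held fixed (no assimilation); the tolerance rule and parameters are invented, not the thesis's model specification:
import numpy as np
rng = np.random.default_rng(7)
n, w_id = 100, 0.8                      # agents; weight on shared identity
opinion = rng.uniform(-1, 1, n)         # opinions stay constant throughout
identity = rng.integers(0, 2, n)
adj = np.zeros((n, n), dtype=bool)
for _ in range(20000):                  # random tie (re)evaluations
    i, j = rng.integers(0, n, size=2)
    tol = w_id * (identity[i] == identity[j]) + (1 - w_id) * (1 - abs(opinion[i] - opinion[j]) / 2)
    adj[i, j] = adj[j, i] = tol > rng.random()
extremist = np.abs(opinion) > 0.8
print(f"extremist-moderate tie density: {adj[extremist][:, ~extremist].mean():.2f}")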
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Social Science: Language Models as Scientist and Subjects</title>
<link href="https://hdl.handle.net/1721.1/157089" rel="alternate"/>
<author>
<name>Manning, Benjamin S.</name>
</author>
<id>https://hdl.handle.net/1721.1/157089</id>
<updated>2024-10-03T03:42:42Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Automated Social Science: Language Models as Scientist and Subjects
Manning, Benjamin S.
We present an approach for automatically generating and testing, in silico, social scientific hypotheses. This automation is made possible by recent advances in large language models (LLMs), but the key feature of the approach is the use of structural causal models. Structural causal models provide a language for stating hypotheses, a blueprint for constructing LLM-based agents, an experimental design, and a plan for data analysis. The fitted structural causal model becomes an object available for prediction or for planning follow-on experiments. We demonstrate the approach with several scenarios: a negotiation, a bail hearing, a job interview, and an auction. In each case, causal relationships are both proposed and tested by the system, which finds evidence for some and not others. We provide evidence that the insights from these simulations of social interactions are not available to the LLM through direct elicitation alone. When given its proposed structural causal model for each scenario, the LLM is good at predicting the signs of estimated effects, but it cannot reliably predict their magnitudes. In the auction experiment, the in silico simulation results closely match the predictions of auction theory, but elicited predictions of the clearing prices from the LLM are inaccurate. However, the LLM's predictions improve dramatically if the model can condition on the fitted structural causal model. In short, the LLM knows more than it can (immediately) tell.
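As a hedged illustration of how a structural causal model can double as hypothesis statement and experimental design, consider the following sketch; the scenario variables are invented for a negotiation setting, not drawn from the thesis.

from dataclasses import dataclass, field

@dataclass
class SCM:
    # A hypothesis as a directed graph: edges are proposed causal
    # effects to be estimated from simulated LLM-agent experiments.
    nodes: list
    edges: dict = field(default_factory=dict)  # cause -> list of effects

scm = SCM(nodes=["buyer_budget", "seller_warmth", "deal_reached"])
scm.edges["buyer_budget"] = ["deal_reached"]
scm.edges["seller_warmth"] = ["deal_reached"]

# The experimental design follows from the graph: vary each cause
# across agent prompts, run the scenario, then regress the outcome on
# the causes to estimate signs and magnitudes of the proposed effects.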
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Central Bank Real Estate Purchases on Asset Prices</title>
<link href="https://hdl.handle.net/1721.1/157083" rel="alternate"/>
<author>
<name>Batista, Quentin</name>
</author>
<id>https://hdl.handle.net/1721.1/157083</id>
<updated>2024-10-03T03:46:32Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Impact of Central Bank Real Estate Purchases on Asset Prices
Batista, Quentin
This paper estimates the impact of central bank real estate purchases on asset prices, demonstrating an increase of 0.1% to 0.2% in Real Estate Investment Trust (REIT) prices in the hours following a typical intervention of 0.014% of market capitalization. At longer horizons, the purchases do not appear to have a significant aggregate effect. The primary identification strategy exploits the nature of the Bank of Japan’s (BoJ) policy rule, which triggers purchases when the Tokyo Stock Exchange Real Estate Investment Trust index falls below a certain threshold. Alternative research designs that exploit the counter-cyclical nature of the BoJ’s policy rule and cross-sectional variation in the eligibility of REITs for BoJ purchases are also considered. Overall, these findings are inconsistent with the predictions of canonical and recent models of asset pricing.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lessons From President Moon Jae In’s Housing Policy and The Road to Affordable Home Ownership in Seoul, South Korea</title>
<link href="https://hdl.handle.net/1721.1/157082" rel="alternate"/>
<author>
<name>Cho, Kibong</name>
</author>
<id>https://hdl.handle.net/1721.1/157082</id>
<updated>2024-10-03T03:42:27Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">Lessons From President Moon Jae In’s Housing Policy and The Road to Affordable Home Ownership in Seoul, South Korea
Cho, Kibong
A fundamental goal of housing policy is to provide a safe and quality place to live for the population. This thesis studies the provision of affordable homeownership in Seoul, South Korea, particularly for non-homeowners and first-time buyers who did not have an opportunity to participate in the housing boom that previous generations experienced. In Seoul, 58% of the population are non-homeowners. First, this thesis provides a brief introduction to Korean housing history. Second, it discusses housing policy under President Moon Jae In and how housing prices soared under his administration due to misguided efforts. Finally, it describes a path toward mitigating the housing affordability crisis that has been created in Seoul, using both supply-side and demand-side arguments.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Case Study in Marketing a Real Estate Debt Fund through the Design and Preparation of a Private Placement Memorandum (PPM) and Investor Presentation</title>
<link href="https://hdl.handle.net/1721.1/157081" rel="alternate"/>
<author>
<name>Poirier, Richard Scott</name>
</author>
<id>https://hdl.handle.net/1721.1/157081</id>
<updated>2024-10-03T03:38:49Z</updated>
<published>2024-09-01T00:00:00Z</published>
<summary type="text">A Case Study in Marketing a Real Estate Debt Fund through the Design and Preparation of a Private Placement Memorandum (PPM) and Investor Presentation
Poirier, Richard Scott
Private equity-backed real estate debt funds play a crucial role in providing capital to borrowers seeking financing for construction projects. These funds raise capital from investors, deploy it strategically, and actively manage debt investments to generate returns for their limited partners. The appeal lies in the potential for attractive yields and risk management strategies in a complex investment landscape. There are countless potential fund structures to address a range of investment strategies, risk profiles, investor appetites, geographic considerations, and manager experience and deal access. This study delves into the dynamics of capital raising for a real estate debt fund specializing in private construction loans. It covers the essential elements of the Private Placement Memorandum (PPM), including legal disclosures, investment terms, risk factors, and fund-specific details. This research aims to provide a real-world example of a fund designed according to current trends and market terms for use by a real-life investment manager, ProBuilder Financial LLC. The PPM and the associated investor presentation utilize best practices for presenting complex financial information in a clear and concise manner. Bridging theory and practice sheds light on the strategies, risk-reward trade-offs, and market implications associated with this capital-raising channel.
</summary>
<dc:date>2024-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Airline operating cost reduction through enhanced engine health analytics</title>
<link href="https://hdl.handle.net/1721.1/119307.2" rel="alternate"/>
<author>
<name>Luu, Henry H. T</name>
</author>
<id>https://hdl.handle.net/1721.1/119307.2</id>
<updated>2024-10-02T03:59:43Z</updated>
<published>2018-01-01T00:00:00Z</published>
<summary type="text">Airline operating cost reduction through enhanced engine health analytics
Luu, Henry H. T
Engine Health Management (EHM) is a comprehensive maintenance service offered by engine manufacturer Pratt &amp; Whitney (PW) to its airline customers. In its current form, engine performance is monitored through recorded physical metrics, such as gas temperature, pressure, and altitude, taken as single snapshots at various phases of flight. The advent of the Enhanced Flight Data Acquisition, Storage and Transmission (eFAST™) system, which allows for near-continuous recording of engine metrics, provides Full-Flight Data Analytics (FFDA) that may proactively alert and recommend maintenance activity to airlines. Adopting eFAST™ may help avoid Adverse Operational Events (AOE) caused by unexpected engine failures and the associated cost burdens. With respect to operating cost, airlines standardly report Cost Per Available Seat Mile (CASM) and Cost Per Block Hour (CBH). EHM services that prevent operational disruptions can help airlines reduce these unit-cost metrics, whose scrutiny by industry analysts affects investment guidance, stock performance, and overall business outlook. In this study, the value of FFDA services to airlines is investigated on the International Aero Engines V2500, a mature engine whose customers' operational histories are well documented. Using a Poisson distribution to model the occurrence of six operational disruption types (Inflight Shutdown, Aircraft-On-Ground, Aborted Takeoff, Air Turn-Back, Ground Turn-Back, and Delay/Cancellation), the cost savings potential is quantified as a function of events avoided by a hypothetical FFDA service. Airline Form 41 financial data from the Bureau of Transportation Statistics is then used to estimate the magnitude of savings on CASM and CBH retroactively for 2012-16. Results show that unit cost reductions of 0.5% to 1.5% are possible through engine event avoidance, representing savings of up to $104M annually, but outcomes are highly dependent on assumptions about the cost of operational disruptions for each individual carrier. Overall, a baseline model and procedure are developed for valuing FFDA and associated EHM services. Further collaboration between airlines and Pratt &amp; Whitney on data availability and accuracy will help refine this model, which is the first to bridge publicly available airline costs with engine history data, helping stakeholders transition to an eFAST™ ecosystem that promises greater operational efficiency and safety.
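A hedged sketch of this style of calculation, with invented event rates and per-event costs standing in for the values the thesis fits from fleet history and Form 41 data:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual rates (events/year) and costs (USD) per disruption
# type; these numbers are placeholders, not the thesis's estimates.
rates = {"IFSD": 0.4, "AOG": 2.0, "ATO": 0.3, "ATB": 0.6,
         "GTB": 1.1, "DelayCancel": 30.0}
cost = {"IFSD": 2.5e6, "AOG": 8.0e5, "ATO": 4.0e5, "ATB": 3.0e5,
        "GTB": 1.5e5, "DelayCancel": 2.0e4}
avoided_frac = 0.25  # share of events a hypothetical FFDA service avoids

savings = []
for _ in range(10000):  # Monte Carlo over one operating year
    total = sum(rng.poisson(rates[k]) * cost[k] for k in rates)
    savings.append(avoided_frac * total)
print(f"expected annual savings: ${np.mean(savings):,.0f}")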
</summary>
<dc:date>2018-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Fifty Million Dollar Piece of Dirt: Somerville as a Case Study in Development</title>
<link href="https://hdl.handle.net/1721.1/157055" rel="alternate"/>
<author>
<name>Aizman, Asya</name>
</author>
<id>https://hdl.handle.net/1721.1/157055</id>
<updated>2024-09-27T03:51:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Fifty Million Dollar Piece of Dirt: Somerville as a Case Study in Development
Aizman, Asya
In May 2023, the City of Somerville achieved the highest S&amp;P Global Ratings credit rating, AAA. The accompanying report, citing one gentrifying neighborhood as a “notable contributor to increased market value,” signaled the city’s “attractiveness” to potential investors by promising low interest rates on local real estate development projects. But while the city increasingly appeared to be a sure bet for investors, life became more strenuous for residents, with steep and climbing rents, failing infrastructure, and fewer reasons to stay in a changing city that they no longer recognized. This is a case study of twenty years in Somerville real estate development, spanning 2004 to 2024. Through interviews with residents, activists, and senior city officials, I present a story of a city attempting to reconcile its progressive values with the forces of neoliberalism, which it seems unable—and unwilling—to stop.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geometric nonlinearities in guyed towers</title>
<link href="https://hdl.handle.net/1721.1/157046" rel="alternate"/>
<author>
<name>McClure, Ghyslaine.</name>
</author>
<id>https://hdl.handle.net/1721.1/157046</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Geometric nonlinearities in guyed towers
McClure, Ghyslaine.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1984; Vita.; Bibliography: leaves 110-114.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing United States Energy Poverty Policy: Regulatory Design Alternatives and Resource Allocation</title>
<link href="https://hdl.handle.net/1721.1/157037" rel="alternate"/>
<author>
<name>Heller, Peter J.</name>
</author>
<id>https://hdl.handle.net/1721.1/157037</id>
<updated>2024-09-25T04:05:44Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Assessing United States Energy Poverty Policy: Regulatory Design Alternatives and Resource Allocation
Heller, Peter J.
Guaranteeing sufficient and affordable access to energy services is increasingly critical as climate change worsens, energy costs rise with the need to meet decarbonization goals, and general inequality among citizens persists. To ensure the affordability of energy services, in this thesis I analyze the design of policies and programs addressing energy poverty according to the four strategy decisions that I argue must be made during their ideation: assistance, targeting, funding, and governance. I focus on the strategies designed and implemented in the US and the EU and discuss the benefits and disadvantages of the different approaches followed in both contexts. Based on this comparative analysis, I find there are changes to US federal policy design that should be implemented to better serve households living in energy poverty. Specifically, current allocations to states under the US Low Income Home Energy Assistance Program (LIHEAP) have been nearly static since 1984, while the distribution of energy poverty is dynamic in location and time. To improve the allocation of federal resources, I develop a novel machine learning approach based on sociodemographic and geographical information to estimate the energy burden of each US census tract for 2015 and 2020. This analysis reveals that average household energy burdens increased and that the range of households experiencing energy poverty broadened. To improve the targeting strategy of LIHEAP, I design an optimized allocation structure that shifts funding from northern states to the southern US. To better match household assistance needs, this analysis urges policy makers to revise the distribution of resources to reflect where concentrations of energy poverty exist in the US.
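One plausible shape of such a tract-level model is sketched below with scikit-learn; the features and data are synthetic placeholders, not the thesis's actual predictors.

import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Invented tract-level features standing in for sociodemographic and
# geographic predictors of energy burden (energy cost over income).
X = np.random.rand(5000, 4)  # e.g., income, heating degree days,
                             # housing age, share of renters (placeholders)
y = 0.03 + 0.1 * X[:, 1] * (1.0 - X[:, 0]) + 0.01 * np.random.randn(5000)

model = HistGradientBoostingRegressor(max_iter=300)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())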
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NBA Sleep Tracking Data Imputation</title>
<link href="https://hdl.handle.net/1721.1/157036" rel="alternate"/>
<author>
<name>Licht, Joseph D.</name>
</author>
<id>https://hdl.handle.net/1721.1/157036</id>
<updated>2024-09-25T03:34:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">NBA Sleep Tracking Data Imputation
Licht, Joseph D.
This thesis investigates imputation methods for nights of missing sleep wearable data from NBA Academy athletes. Sparsity in sleep tracking data arises from behavioral non-compliance or device malfunction, hindering the NBA Academy's ability to provide actionable insights that improve player sleep, a crucial component of player development. Motivated by existing work on time series data imputation, four main techniques are evaluated: K-Nearest Neighbors Regression, Linear Interpolation, Linear Regression, and Quadratic Regression. Each technique is applied and evaluated on key sleep metrics such as sleep duration, rMSSD (Root Mean Square of the Successive Differences between Heartbeats), and average heart rate. Results indicate that K-Nearest Neighbors Regression and Linear Interpolation, when given access to both past and future data (offline imputation), are the best-performing imputation methods. Furthermore, this thesis uses the NBA Academy's shooting and jumping datasets in conjunction with the sleep dataset to explore the relationship between sleep and athletic performance, finding a generally weak correlation regardless of the time lag. This research has applications in all areas of sport and performance, as well as in domains where data sparsity is problematic.
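For concreteness, a minimal sketch of offline KNN-based imputation on synthetic nightly metrics; the column choices mirror the abstract, the values are invented.

import numpy as np
from sklearn.impute import KNNImputer

# Nightly sleep metrics with gaps (np.nan marks missing nights);
# columns: duration_h, rmssd_ms, avg_hr.
nights = np.array([[7.9, 62.0, 55.0],
                   [6.4, np.nan, 58.0],
                   [np.nan, 70.0, np.nan],
                   [8.2, 66.0, 54.0],
                   [7.1, 59.0, 57.0]])

imputer = KNNImputer(n_neighbors=2)  # offline: uses past and future rows
print(imputer.fit_transform(nights))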
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Farm-Scale Water Storage in Morocco: Low-Carbon Design with Parametric FEA Optimization</title>
<link href="https://hdl.handle.net/1721.1/157035" rel="alternate"/>
<author>
<name>Trézarieu, Raphaël</name>
</author>
<id>https://hdl.handle.net/1721.1/157035</id>
<updated>2024-09-25T04:03:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Farm-Scale Water Storage in Morocco: Low-Carbon Design with Parametric FEA Optimization
Trézarieu, Raphaël
Morocco faces increasing water scarcity with an anticipated decline in rainfall. Rising temperatures have resulted in drier, denser soil, causing water to be trapped on the surface and evaporate. One solution is to shift water management from the large scale to the farm scale. Underground water reservoirs allow the catchment of sparse rainfall events, and the resultant overland flows, before their evaporation. This research develops a methodology for designing such rectangular reinforced concrete water reservoirs using a parametric approach in Python coupled with Finite-Element Analysis (FEA) software. The aim is to offer designs that are both low in embodied carbon and affordable for an individual farmer to build. The first part of the method identifies a small region of the design space containing the Pareto front; the second then runs FEA on this limited set of geometries. In the first part, the global shape of the reservoir and the local structural elements are designed simultaneously using analytical expressions from the Eurocodes evaluated over multi-dimensional arrays. One key added value of the method lies in the framework developed to handle numerous arrays of different dimensions while tracking the indices of each combination of design variables.
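A minimal sketch of the array-broadcasting idea, assuming invented variable ranges and a crude material proxy rather than the thesis's Eurocode expressions:

import numpy as np

# Evaluate design formulas over a full grid of design variables at once.
length = np.linspace(3.0, 8.0, 20)[:, None, None]   # m
width  = np.linspace(2.0, 6.0, 20)[None, :, None]   # m
depth  = np.linspace(1.5, 3.5, 10)[None, None, :]   # m

volume = length * width * depth                     # 20x20x10 array
wall_area = 2.0 * depth * (length + width)
concrete = 0.25 * wall_area + 0.30 * length * width # crude proxy, m^3

# Keep only designs meeting a storage target, then recover the design
# variables of the lowest-material candidate from its array indices.
feasible = np.where(volume > 40.0, concrete, np.inf)
i, j, k = np.unravel_index(np.argmin(feasible), feasible.shape)
print(length[i, 0, 0], width[0, j, 0], depth[0, 0, k])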
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fully Differential Programmable Gain Chiplet for Integrated Data Acquisition Systems</title>
<link href="https://hdl.handle.net/1721.1/157034" rel="alternate"/>
<author>
<name>Liu, Monica</name>
</author>
<id>https://hdl.handle.net/1721.1/157034</id>
<updated>2024-09-25T03:35:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Fully Differential Programmable Gain Chiplet for Integrated Data Acquisition Systems
Liu, Monica
Chiplets have risen in popularity since their intermediate level of chip integration allows for high performance, low cost, and greater flexibility. There are currently programmable gain instrumentation amplifier chips on the market, which are widely used in industrial and instrumentation data acquisition systems. However, with built-in operational and fully differential amplifiers, these products cannot be easily upgraded as new and improved amplifiers are released to the market. To address this issue, this thesis proposes the design of a programmable gain chiplet that offers the desired flexibility in changing a system’s gain while adding the ability to interface with various amplifiers without sacrificing significant performance.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamics of Gradient Flow with Contrastive Learning</title>
<link href="https://hdl.handle.net/1721.1/157033" rel="alternate"/>
<author>
<name>Tepe, Cem</name>
</author>
<id>https://hdl.handle.net/1721.1/157033</id>
<updated>2024-09-25T04:08:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Dynamics of Gradient Flow with Contrastive Learning
Tepe, Cem
Contrastive learning (CL), in different forms, has been shown to learn discriminative representations for downstream tasks without the need for human labeling. In the representation space learned via CL, each class collapses to a distinct vertex of a simplex on a hypersphere during training. This property, also seen in other types of learning tasks, might explain why CL works as well as it does. Having class collapse on the test distribution, which determines how well the model generalizes to new samples and new classes, is tied to class collapse on the training distribution under certain conditions, as studied by Galanti et al. (2022). In the case of CL, minimizing the contrastive loss has been shown to lead to collapse during training by Graf et al. (2021). In a recent study, Xue et al. (2023) show that minimizing the contrastive loss is not enough to observe class collapse in the representation space for a single-layer linear model, and that minimum-norm minimizers are needed for the collapse to happen. However, their results don't explain how class collapse can occur without adding an explicit bias. The implicit bias of gradient descent is a likely candidate to explain this phenomenon. Here, we investigate the gradient flow of the spectral contrastive loss and give a theoretical description of the learning dynamics.
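For reference, the spectral contrastive loss in question is typically written (following HaoChen et al., 2021; the thesis's exact normalization may differ) as

\mathcal{L}_{\mathrm{spec}}(f) = -2\,\mathbb{E}_{(x,x^{+})}\left[ f(x)^{\top} f(x^{+}) \right] + \mathbb{E}_{x,x^{-}}\left[ \left( f(x)^{\top} f(x^{-}) \right)^{2} \right],

with the gradient flow \dot{\theta}(t) = -\nabla_{\theta}\,\mathcal{L}_{\mathrm{spec}}(f_{\theta(t)}) describing the continuous-time learning dynamics studied here.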
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Material Recovery Potential from Solar Photovoltaics: Predictive Modeling and Characterization to Advance the Circular Economy</title>
<link href="https://hdl.handle.net/1721.1/157031" rel="alternate"/>
<author>
<name>Bakker, Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/157031</id>
<updated>2024-11-15T20:29:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Material Recovery Potential from Solar Photovoltaics: Predictive Modeling and Characterization to Advance the Circular Economy
Bakker, Nicole
In the next two decades, an exponentially growing quantity of waste will be generated as solar panels reach their end-of-life. Meanwhile, demand for new solar capacity will increase the value of key raw materials, underscoring the importance of recycling and movement toward a “circular economy”. However, uncertainties over the quantity and the exact material composition of solar panel waste hamper investments by recyclers, manufacturers, and governments. In this study, I construct a Material Flow Analysis model to forecast the global quantity of recoverable materials through 2100, informed by an experimental characterization of representative solar panels from the 1930s to 2020s. To account for potential changes in future demand, I develop two distinct scenarios: one explores the growing electricity demand from artificial intelligence use (‘Artificial Intelligence Boom’), while the other features renewable hydrogen production for steelmaking, shipping and the chemical industry (‘Green Hydrogen Takes Off’). The combined model predicts a lower material demand for silicon than previously anticipated in the base case, with a cumulative installed solar PV capacity of 50 TW and a waste quantity of 3,600 megatonnes by 2100. This will require 45 megatonnes of solar-grade silicon by 2100, while 18 megatonnes could theoretically be obtained from recovered material. Achieving a circular economy for silicon is possible by the mid-2040s, but will require recovery rates above 70% and continued improvements in material efficiency as observed in the retrospective analysis. Recovery would suffice for all silicon demand through the mid-2060s, but not through 2100, because the demand for new solar panels and replacements outpaces secondary supply. Of specific concern for material recovery is the material composition: results from characterization indicate the presence of toxic materials, including lead, and scarce elements in solar cells.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Meat Me for Supper? Envisioning the Future of Protein Food</title>
<link href="https://hdl.handle.net/1721.1/157030" rel="alternate"/>
<author>
<name>Maynard, Christopher Coleman</name>
</author>
<id>https://hdl.handle.net/1721.1/157030</id>
<updated>2024-09-25T04:02:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Meat Me for Supper? Envisioning the Future of Protein Food
Maynard, Christopher Coleman
This report investigates future challenges associated with protein food and explores two proposed mitigation strategies for overcoming them: dietary change and cultivated meat. Utilizing IMPACT, this report assesses the food security dimensions of availability and economic access for protein food relative to the EAT-Lancet recommendations, projected to 2050, under various shared socioeconomic pathways. This work reveals a near-universal over-supply of red meat and an under-supply of plant protein across UN member states, even as animal sources of protein far exceed their plant counterparts on a price-per-kilocalorie basis. Additionally, this report conducts a high-level SWOT analysis of key issues in cultivated meat, finding that the technology platform could deliver meaningful environmental and health benefits, but without overcoming important technical and political barriers, will remain unavailable and inaccessible for the foreseeable future. Together, these findings offer insights for food and agricultural policymakers interested in planning and preparing for protein-related issues in the next quarter-century. This report concludes with policy recommendations, intended primarily for the United States.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NeuralMOVES: Extracting and Learning Surrogates for Diverse Vehicle Emission Models</title>
<link href="https://hdl.handle.net/1721.1/157029" rel="alternate"/>
<author>
<name>Ramirez Sanchez, Edgar</name>
</author>
<id>https://hdl.handle.net/1721.1/157029</id>
<updated>2024-09-25T03:56:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">NeuralMOVES: Extracting and Learning Surrogates for Diverse Vehicle Emission Models
Ramirez Sanchez, Edgar
Technological advancements and interventions in the transportation sector play a crucial role in addressing climate change, given the sector's major contribution to greenhouse gas emissions. The industry actively explores electrification, automation, and Intelligent Infrastructure to mitigate emissions. However, the successful design and implementation of these solutions require accurate and representative emission models. The Motor Vehicle Emission Simulator (MOVES) serves as the gold-standard emission software provided by the Environmental Protection Agency (EPA). Despite its prominence, MOVES poses challenges, including a steep learning curve and technical complexities. This makes it cumbersome for macroscopic analysis and unsuitable for microscopic analyses like eco-driving, which demand emission estimates at individual steps. To address these issues, we present a comprehensive family of high-performance, lightweight CO₂ emission models devised through reverse engineering MOVES and surrogate learning. Our models show a promising 6% end-to-end error relative to MOVES, exhibit significant differences from alternative reduced-order models, and offer improved precision. The implications of our work are twofold: our models simplify GHG emission evaluation in transportation-related analyses by providing a faster, programmatic alternative to MOVES, and they improve control-based approaches by offering microscopic, environment-feature-rich models compared to the alternatives.
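A hedged sketch of the surrogate-learning step, with toy physics standing in for the MOVES outputs the thesis reverse-engineers; model choice and feature set are illustrative assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
speed = rng.uniform(0, 30, 20000)        # m/s
accel = rng.uniform(-3, 3, 20000)        # m/s^2
grade = rng.uniform(-0.05, 0.05, 20000)  # road grade
X = np.column_stack([speed, accel, grade])
# Toy per-step CO2 rate (g/s); the real targets would come from MOVES.
co2 = 1.0 + 0.4 * speed + 2.0 * np.maximum(accel, 0) * speed + 10 * grade * speed
co2 = np.maximum(co2 + rng.normal(0, 0.5, co2.size), 0)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
surrogate.fit(X, co2)  # fast per-step queries replace full MOVES runs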
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Investigation into Practical Aluminum Scrap for Emergency Power Fuel in Disaster Response Situations</title>
<link href="https://hdl.handle.net/1721.1/157028" rel="alternate"/>
<author>
<name>Blanks, Lauren J.</name>
</author>
<id>https://hdl.handle.net/1721.1/157028</id>
<updated>2024-09-25T04:07:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Investigation into Practical Aluminum Scrap for Emergency Power Fuel in Disaster Response Situations
Blanks, Lauren J.
As natural disasters become more frequent and severe, the pitfalls of emergency logistics are exacerbated. The time between a disaster and the restoration of critical infrastructure, like the power grid, can extend beyond hours or days, leaving communities without critical resources like electricity. To address this gap, this research investigates a system that would leverage the debris fields of a disaster to a community's advantage. Building on MIT researchers' activation of high-purity aluminum to produce heat and hydrogen in a reaction with water, aluminum scrap from the field could be used to generate hydrogen for fuel cell power systems. Therefore, practical aluminum scrap, specifically the used beverage can, was investigated for its ability to react efficiently and produce hydrogen under the constraints of expeditionary equipment and techniques. Moreover, a preliminary characterization of the reaction's gas output informed the potential for fuel cell contamination. Finally, the proposed system's feasibility within the disaster policy framework is discussed. Together, these findings underscore the potential of aluminum scrap as a post-disaster energy source, encouraging further research.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Documentation as a Tool for Algorithmic Accountability</title>
<link href="https://hdl.handle.net/1721.1/157026" rel="alternate"/>
<author>
<name>Curtis, Taylor Lynn</name>
</author>
<id>https://hdl.handle.net/1721.1/157026</id>
<updated>2024-09-25T03:16:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Documentation as a Tool for Algorithmic Accountability
Curtis, Taylor Lynn
This thesis argues that civil liability should rest on the deployer's understanding of system behavior, and that documentation is the necessary tool to accomplish this goal. This work begins by establishing the “hole” in current approaches to AI risk regulation: the lack of a civil liability regime. It also highlights that civil liability is an existing and effective regulatory tool that can be applied to AI. The rest of the thesis develops this argument by examining what is necessary for such a framework to exist, arguing that an understanding of system behavior is essential and achievable through documentation. It is divided into two substantive chapters. Chapter 2 outlines how system behavior can inform policy through documentation, linking the necessity of documentation to liability and proposing a concrete liability scheme based on documenting system understanding. Chapter 3 discusses how documentation can alter a person's understanding of system behavior, presenting a user study that demonstrates how system understanding can be achieved through documentation and structured data interaction. It argues that testing and system understanding are not insurmountable challenges and that, by engaging in a relatively simple process, AI deployers can better understand the behavior of their models. Overall, this thesis provides a methodical guide to understanding AI system behavior and establishes a new pathway for effective regulation, arguing for understanding of system behavior, documented at deployment, as the path forward to achieving civil liability in AI.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI-Augmented Interface for Incremental App Development in MIT App Inventor</title>
<link href="https://hdl.handle.net/1721.1/157025" rel="alternate"/>
<author>
<name>Granquist, Ashley</name>
</author>
<id>https://hdl.handle.net/1721.1/157025</id>
<updated>2024-09-25T03:43:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">AI-Augmented Interface for Incremental App Development in MIT App Inventor
Granquist, Ashley
The recent revolutionary advancements in Artificial Intelligence (AI) have presented immense opportunities and challenges in computer science education. This thesis presents the development of an AI-powered tool built on top of MIT App Inventor to help students incrementally design mobile applications. The tool allows students to describe desired changes to their MIT App Inventor mobile applications in natural language and have those changes implemented automatically. Students can alternate between manually editing their app and using this tool, enabling them to collaborate with AI and incrementally develop apps with a degree of AI assistance that meets their needs and is appropriate for their skill level and workflow preferences. This thesis also explores the benefits and drawbacks of such a tool, as well as observations and lessons learned from studying how students interacted with the tool during a pilot study.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resiliency Oriented Scenario Generation Framework for Natural Gas Infrastructure</title>
<link href="https://hdl.handle.net/1721.1/157024" rel="alternate"/>
<author>
<name>Lahogue, Malo</name>
</author>
<id>https://hdl.handle.net/1721.1/157024</id>
<updated>2024-09-25T03:06:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Resiliency Oriented Scenario Generation Framework for Natural Gas Infrastructure
Lahogue, Malo
Traditionally, natural gas's (NG's) impact on power supply has been studied from a reliability perspective, focusing on frequent, low-impact events. Furthermore, power-NG interdependence has been considered at a local scale, with few possibilities for extension to future climate impacts. Our work contributes to a framework for scenario-based resilience quantification of regional power systems under power-NG interdependencies. Specifically, we develop a scenario generation approach to model disruptions in the intra-regional transmission infrastructure as well as supply restrictions due to contingencies in inter-regional NG supply chains. To account for the inter-regional interdependencies through the import capacity of NG into the regional system, we implement a Long Short-Term Memory (LSTM) model that predicts the probability density of NG import capacity based on weather conditions along transregional supply pipelines. Our ML model does not require detailed modeling of gas extraction rates and flows along pipelines, since such information is not readily available. Furthermore, we develop a sampling procedure to capture low-probability but potentially severe disruption scenarios within the regional transmission infrastructure. To compute the corresponding probabilities, we utilize a physically based structural reliability model for pipelines.

Crucially, by sampling the scenarios first and then estimating the corresponding probabilities, we account for low-probability “rare” events that can negatively impact the reliability of power supply. The resulting scenario set enables more precise quantification of power system resilience to correlated transmission and supply disruptions in the NG infrastructure. Since we utilize weather data to forecast NG import capacities as well as to compute pipeline disruption probabilities, our work is well suited to integrating future climate projections into the risk-sensitive planning and resilient operation of power-NG systems.
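A minimal sketch of the density-predicting LSTM idea, assuming a Gaussian parameterization and invented feature counts (the thesis predicts a full probability density; names here are illustrative):

import torch
import torch.nn as nn

class ImportCapacityLSTM(nn.Module):
    # Maps a weather sequence along the pipeline corridor to the mean
    # and log-variance of a Gaussian over next-day NG import capacity.
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # mean and log-variance

    def forward(self, weather):  # weather: (batch, days, n_features)
        out, _ = self.lstm(weather)
        mu, logvar = self.head(out[:, -1]).unbind(-1)
        return mu, logvar

model = ImportCapacityLSTM()
mu, logvar = model(torch.randn(8, 14, 4))  # 8 samples, 14-day windows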
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contractor Learning and Home Energy Efficiency in Heat Pump Installations</title>
<link href="https://hdl.handle.net/1721.1/157022" rel="alternate"/>
<author>
<name>Ontiveros, Johnattan H.</name>
</author>
<id>https://hdl.handle.net/1721.1/157022</id>
<updated>2024-09-25T03:37:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Contractor Learning and Home Energy Efficiency in Heat Pump Installations
Ontiveros, Johnattan H.
The displacement of fossil-fuel-based heating is essential for achieving decarbonization in the building sector, which accounts for about a third of national emissions in the United States. Electric heat pumps are the primary technology for doing so, but widespread adoption is hindered by a variety of factors, including higher upfront costs and a shortage of experienced labor to fulfill installations. This work examines the role of learning on the cost and size of heat pump installations throughout the Massachusetts Clean Energy Center (MassCEC) rebate program. We find that as contractors gain experience, heating systems are downsized at the cost of fewer hours of displaced fossil-fuel-based heating. This learning impact is strongest for homes with a natural gas backup heater, natural gas being the cheapest source of heating in Massachusetts, followed by electric heat pump heating. We then analyze the structure of the MassCEC rebate and its potential influence on the benefits of the program.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empowering Community-Driven Determination of Values for Language Models</title>
<link href="https://hdl.handle.net/1721.1/157021" rel="alternate"/>
<author>
<name>Raman, Deepika</name>
</author>
<id>https://hdl.handle.net/1721.1/157021</id>
<updated>2024-09-25T03:02:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Empowering Community-Driven Determination of Values for Language Models
Raman, Deepika
Emerging technologies like Artificial Intelligence and Large Language Models are often developed in Western contexts and carry implicit values, stemming from developer choices or the underlying training data, that are not adequately representative of the diverse contexts in which they are deployed. The resulting misalignment, born of a lack of engagement with non-Eurocentric value paradigms, produces inadequate and potentially harmful outcomes for these unconsidered communities. Codifying fundamentally subjective human values therefore necessitates eliciting these nuances through the inclusion and involvement of the communities themselves.

This thesis argues that participants' lack of familiarity with new technologies like Artificial Intelligence impacts their engagement with and contribution to participatory processes of AI development. It also demonstrates how grounded theory approaches can be leveraged to contextualize awareness-building efforts that can empower community participation by addressing such familiarity gaps.

This two-fold objective of (i) eliciting community-relevant attributes for language model alignment (ii) through the necessary familiarization with the technology in question is demonstrated through sample case studies. A grounded participatory process, CALMA (Community-aligned Axes for Language Model Alignment), is designed and evaluated through these cases to illustrate this contextualized alignment exercise. Learnings from this comparative case study are then extended to explore avenues for communities and institutions to adopt similar techniques that center the voices of the final users.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empowering Energy Conservation: Low-Cost Interventions for Commercial and Residential Settings</title>
<link href="https://hdl.handle.net/1721.1/157020" rel="alternate"/>
<author>
<name>Ha, Lan L.</name>
</author>
<id>https://hdl.handle.net/1721.1/157020</id>
<updated>2024-09-25T03:57:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Empowering Energy Conservation: Low-Cost Interventions for Commercial and Residential Settings
Ha, Lan L.
This thesis investigates the effectiveness of low-cost interventions in promoting energy conservation in commercial and residential environments. The first chapter employs social norms to design and analyze three behavioral change programs in a large biopharmaceutical company, with a focus on reducing electricity consumption and plastic waste. The second chapter evaluates the effectiveness of a new behavioral initiative that aims to reduce residential electricity and gas consumption. We employ econometric and machine learning techniques to measure average and heterogeneous treatment effects, and to identify disparities between households with the highest and lowest reductions. Covering the process from design to evaluation, these chapters collectively offer a holistic perspective on the application of low-cost behavioral nudges to both workplace and residential energy usage. The implications drawn from our findings hold significant relevance for corporations, utilities, households, policymakers, and researchers alike, offering insights into promoting sustainable practices in both the workplace and the home.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Structure of the Registry Hall at Ellis Island</title>
<link href="https://hdl.handle.net/1721.1/157019" rel="alternate"/>
<author>
<name>Wilson, Ruth Hodin</name>
</author>
<id>https://hdl.handle.net/1721.1/157019</id>
<updated>2024-09-25T03:56:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Structure of the Registry Hall at Ellis Island
Wilson, Ruth Hodin
This thesis presents the historical and structural analysis of the Guastavino barrel vault at the Registry Hall on Ellis Island. The Guastavino Construction Company's innovative tile structures from the late 19th and early 20th centuries, characterized by their efficiency in material use and formwork, are not fully understood by many engineers, especially in terms of their structural behavior as unreinforced masonry structures. The unique aspect of the Registry Hall vault is its construction below a steel truss framed ceiling system, a configuration that has not been previously studied.

The primary objective of this study is to provide structural engineers with techniques for analyzing an unreinforced masonry structure in conjunction with a steel frame. Additionally, it aims to provide historical context by exploring how the Registry Hall structure fits into the history of the Guastavino Company. The structural behavior of the system is analyzed through three separate cases:

1. Graphical analysis of the vault alone (Case 1)
2. Finite element analysis of the truss carrying the entire system (Case 2)
3. Analysis of the combined system (Case 3)

Case 1 demonstrates that the vault is stable on its own and that the thrust forces are resolved in the columns. Case 2 demonstrates that the truss has the capacity to support all loads, including the weight of the vault. Case 3 presents a third solution in which the truss carries half the weight of the vault, indicating the two systems can work together effectively.

This study offers three structural solutions for the complex ceiling at Registry Hall, demonstrating that there are infinitely many valid equilibrium solutions for Guastavino structures. This improved understanding of a Guastavino barrel vault's structural behavior not only aids in evaluating the current state of Registry Hall, but also lays a foundation for analyzing historic masonry structures that incorporate a steel system.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Overturning of No-Tension Towers</title>
<link href="https://hdl.handle.net/1721.1/157018" rel="alternate"/>
<author>
<name>Moir, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/157018</id>
<updated>2024-09-25T03:37:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Overturning of No-Tension Towers
Moir, Katherine
This study investigates the overturning behavior of leaning masonry towers on a rigid foundation. Unreinforced masonry is assumed to be incapable of withstanding tension, thus anticipating a progressive fracturing to occur outside the compressive zone of masonry towers as they incline under the force of self-weight alone. A theoretical model for the analysis of rectangular towers is extended to cylindrical towers, where overturning is assumed to occur when the fracture reaches through the entire width of the tower. The results of the theoretical model offer an approximate prediction for the critical angle of inclination that may be reached by a leaning no-tension cylindrical tower of variable slenderness and hollowness. A comparison of the predictions for each of the two tower geometries shows that the predicted critical angles of overturning are very close, while the cylinder is likely to begin cracking at lower inclinations than the rectangular tower. The theoretical predictions for both rectangular and cylindrical towers are validated experimentally by tilting masonry model towers until failure. The experimental results are found to be in reasonable agreement with the predictions, though overturning occurs earlier than predicted in all cases, which is attributed to imperfections in the models and scaling effects. As such, the theoretical predictions are unconservative for the critical angle of overturning of the models in the experiment. Furthermore, two case studies are conducted for existing leaning masonry towers in Italy, where theoretical predictions for their critical angles of overturning are put forth. The results of the case studies indicate that the Garisenda tower in Bologna is relatively close to its theoretical critical inclination, while the Leaning Tower of Pisa is not. Both towers are found to be very close to their predicted angle of first cracking. However, the assumption of a rigid foundation does not account for the possibility of soil failure, which remains a risk for leaning towers on compressible soils. Overall, the research advances understanding of the failure conditions of masonry towers, which is useful in assessing their safety and preventing catastrophic collapses.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tree-based Data Replay for More Efficient LLM Continual Learning</title>
<link href="https://hdl.handle.net/1721.1/157017" rel="alternate"/>
<author>
<name>Bailey, Brian</name>
</author>
<id>https://hdl.handle.net/1721.1/157017</id>
<updated>2024-09-25T03:58:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Tree-based Data Replay for More Efficient LLM Continual Learning
Bailey, Brian
As Large Language Models (LLMs) gain popularity, they face a crucial challenge: effectively updating their knowledge bases with new data while retaining knowledge of prior information. This challenge is compounded by the considerable computational resources and time required to do so. The problem has previously been addressed using multiple approaches, including data replay, Elastic Weight Consolidation (EWC), and others. This study introduces an evolutionary tree-based data replay method designed to enhance the efficiency of LLMs’ continual training. It leverages the evolutionary relationships among domain-specific data to inform the replay strategy, selectively excluding similar data from the training of current subdomains to optimize efficiency. Initial experiments identified Mistral-7B as the appropriate model for this analysis. Subsequent tests assessed its performance under different data replay configurations, using perplexity as the primary performance measure. The results indicate that focused data replay maintains model performance and enhances training efficiency. Models trained under restrictive replay conditions—excluding data from parent nodes—achieved perplexity scores within 1.5% of the baseline and reduced training time by up to 20%. Moreover, an ablation study established that a minimum replay ratio of 0.4:1 is essential to keep performance within 8.2% of the baseline. The findings suggest significant potential for structured data replay in improving continual learning processes for LLMs. Future research should explore data selection based on similarity metrics or automatic data categorization to enhance scalability and applicability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimally invasive neuromodulation using mechanically-sensitive ion channels and magnetically-actuated nanotransducers</title>
<link href="https://hdl.handle.net/1721.1/157016" rel="alternate"/>
<author>
<name>Malkin, Elian</name>
</author>
<id>https://hdl.handle.net/1721.1/157016</id>
<updated>2024-09-25T03:04:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Minimally invasive neuromodulation using mechanically-sensitive ion channels and magnetically-actuated nanotransducers
Malkin, Elian
Traditional methods of neuronal activity modulation, like pharmacological interventions and noninvasive techniques such as transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS), have limitations in specificity and penetration depth. Deep brain stimulation (DBS), while effective, is invasive and carries surgical risks. This thesis advances the approach of utilizing magnetic nanoparticles as mechanical force transducers to achieve minimally invasive, wireless neuromodulation using magnetic fields as the stimulation modality. By leveraging magnetic fields and mechanically sensitive ion channels, this method aims to provide precise activation of deep neural circuits without surgery. We describe the molecular biology behind conferring mechanosensation to neurons, the design of a membrane targeting mechanism via SNAP-tags expressed on neuronal membranes, and the observed neuromodulatory effects for a gamut of mechanoreceptors and stimulation conditions. Calcium imaging results demonstrate that this method of nanotransducer targeting can elicit neuronal responses at 40 mT even via endogenous ion channels, and that greater response amplitudes can be achieved through mechanosensitive ion channel expression and increased stimulation strength. We also develop data analysis code that is highly automated and employs advanced curve-fitting techniques to isolate the calcium imaging signal from background noise and fluorescence decay. The findings described in this thesis suggest that minimally invasive mechanical neuromodulation can offer a safe and precise alternative to DBS for both clinical and research applications.
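A minimal sketch of one typical step in such a pipeline: fitting and removing an exponential photobleaching baseline with SciPy so stimulus-evoked transients stand out. The model form and data are illustrative, not the thesis code.

import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 120, 1200)                      # seconds
trace = 1.0 * np.exp(-t / 60) + 0.05 * np.random.randn(t.size)
trace[400:420] += 0.4                              # a calcium transient

def decay(t, a, tau, c):
    return a * np.exp(-t / tau) + c

params, _ = curve_fit(decay, t, trace, p0=(1.0, 50.0, 0.0))
baseline = decay(t, *params)
dff = (trace - baseline) / baseline                # dF/F estimate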
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>New Parallel Algorithms for Planarity Testing</title>
<link href="https://hdl.handle.net/1721.1/157015" rel="alternate"/>
<author>
<name>Hu, Amelia Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/157015</id>
<updated>2024-09-25T03:56:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">New Parallel Algorithms for Planarity Testing
Hu, Amelia Y.
Planar graphs (graphs that can be drawn in the plane with no edge crossings) have special properties and are often used in applications such as circuit design and transportation networks. While many linear-work implementations of planarity testing algorithms exist, to the best of our knowledge there is no practical implementation of a parallel planarity testing algorithm. In this thesis, we describe and analyze two new parallel algorithms for planarity testing, both derived from the Boyer-Myrvold algorithm. First, we present a divide-and-conquer approach, in which the graph's edges are evenly distributed among worker threads. Each thread independently executes the sequential Boyer-Myrvold algorithm on its designated subgraph. Pairs of subgraphs are then merged by embedding the edges between subgraphs with modified Boyer-Myrvold methods. The primary challenge of the divide-and-conquer approach is the merge step, as determining the relative positions of subgraphs is complicated and difficult. Next, we describe the design and implementation of a new and simpler parallel algorithm. This algorithm modifies the Boyer-Myrvold algorithm by processing vertices in layers from the bottom up (rather than sequentially by reverse DFI order), parallelizing the computation within each layer. On planar graphs, this algorithm achieves a 2.4 to 2.7 times speedup over the sequential algorithm when run on 16 cores. On non-planar graphs, the performance gain is even more significant, with speedups ranging from 9 to 22 times on 16 cores.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Data Heterogeneity on Distributed Linear System Solvers</title>
<link href="https://hdl.handle.net/1721.1/157014" rel="alternate"/>
<author>
<name>Velasevic, Boris</name>
</author>
<id>https://hdl.handle.net/1721.1/157014</id>
<updated>2024-12-24T14:40:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Effects of Data Heterogeneity on Distributed Linear System Solvers
Velasevic, Boris
We focus on the fundamental problem of solving a system of linear equations. In particular, we are interested in distributed linear system solvers, where one taskmaster coordinates any number of workers to attain a solution. There are two predominant and fundamentally different ways of doing this: optimization-based and projection-based solvers. Although there is extensive literature on both classes of algorithms, a rigorous analytical comparison of their performance is lacking. Consequently, there is no concrete understanding of why numerical experiments show that projection-based solvers tend to perform better in many real and synthetic scenarios. In this work, we develop a framework for such an analysis and use it to compare optimization-based and projection-based solvers.
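To make the two families concrete, here is a minimal NumPy sketch contrasting a gradient step on the least-squares objective with a randomized Kaczmarz projection step (a standard projection-based method; the thesis's exact algorithms may differ):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
b = A @ x_true

x_gd = np.zeros(50)
x_kz = np.zeros(50)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(2000):
    # Optimization-based: full-gradient step on the squared residual
    x_gd = x_gd - step * (A.T @ (A @ x_gd - b))
    # Projection-based: project onto one randomly chosen equation
    i = rng.integers(200)
    a_i = A[i]
    x_kz = x_kz + (b[i] - a_i @ x_kz) / (a_i @ a_i) * a_i

print(np.linalg.norm(x_gd - x_true), np.linalg.norm(x_kz - x_true))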
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MPrompt: A Pretraining-Prompting Scheme for Enhanced Fewshot Subgraph Classification</title>
<link href="https://hdl.handle.net/1721.1/157013" rel="alternate"/>
<author>
<name>Xu, Muhua</name>
</author>
<id>https://hdl.handle.net/1721.1/157013</id>
<updated>2024-09-25T03:56:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">MPrompt: A Pretraining-Prompting Scheme for Enhanced Fewshot Subgraph Classification
Xu, Muhua
Motivated by the significant progress in NLP prompt learning, there has recently been great research interest in adopting the prompting mechanism for graph machine learning. Despite the prior success of prompting methods applied to node-level and graph-level learning tasks, subgraph-level tasks are highly underexplored, and the potential of prompting remains unclear. This thesis fills this gap by exploring the prompting mechanism for subgraph classification, a much more challenging task as it requires understanding both global and local graph structures. In this work, we build upon state-of-the-art self-supervised graph learning models to develop a subgraph-specific prompting scheme, Membership Prompt (MPrompt), based on traditional graph neural networks (GNNs). Our proposed prompting scheme relies on node membership knowledge to help the GNN distinguish between border and local connections, which increases its expressive power while keeping the prompt independent of any specific dataset or model architecture. Additionally, we present Subgraph Reconstructive Pretraining (SRP), which can provide MPrompt with better structural embeddings during pretraining. Experiments are conducted on both synthetic and real-world datasets, including protein function prediction and social network analysis. Our method demonstrates performance improvements in few-shot experimental settings and maintains comparable performance in full-shot settings while requiring less computation.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Analysis of a Transformer-Based Solid-State Relay</title>
<link href="https://hdl.handle.net/1721.1/157012" rel="alternate"/>
<author>
<name>Mondal, Neelambar</name>
</author>
<id>https://hdl.handle.net/1721.1/157012</id>
<updated>2024-09-25T04:04:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design and Analysis of a Transformer-Based Solid-State Relay
Mondal, Neelambar
Automatic Test Equipment (ATE) systems require relays to perform complex high-speed tests on semiconductor devices. However, existing relays all come up short in some aspect. Electromechanical reed relays have a limited lifetime and slow switching speeds, while solid-state photoMOS relays have high on-resistance and low bandwidth. This thesis presents the design, simulation, and analysis of a new solid-state relay tailored for ATE applications. We use Analog Devices’ iCoupler technology to design this relay, relying on on-chip transformers to provide reliable input-to-output isolation. In Cadence simulations, the iCoupler relay achieves 100 mΩ on-resistance, 7.5 µs turn-on time, and 4.8 GHz output 3 dB bandwidth.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clinical Question-Answering over Distributed EHR Data</title>
<link href="https://hdl.handle.net/1721.1/157011" rel="alternate"/>
<author>
<name>Jiang, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/157011</id>
<updated>2024-09-25T03:02:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Clinical Question-Answering over Distributed EHR Data
Jiang, Emily
Electronic health records (EHRs) have become standard in US clinical practice. However, the distributed, dynamic, private, and jargon-dense nature of medical data is a barrier to harnessing Large Language Models (LLMs) for the domain. Retrieval-augmented generation (RAG), in which an LLM is provided with both the question and context returned by an external retriever, is a promising technique for addressing the unique qualities of clinical text. LLMs using RAG can answer questions about patient records without training on privacy-sensitive data; updated records can also be queried immediately without finetuning. By exposing the source documents that inform the model response, RAG enables greater physician interpretability as well as reduced hallucination, both of which are crucial for safe deployment in healthcare. This thesis presents FedRAG, a retrieval-augmented clinical question-answering (QA) system for clinicians to explore trends in patient data across distributed storage. We introduce a novel hierarchical design for federated document retrieval, in which leaf nodes perform local similarity search while non-leaf nodes route queries based on access policies and aggregate documents returned by their children. We also create a dataset on clinical trends over the MIMIC-IV database for the evaluation of QA systems on EHR data. FedRAG is implemented in Python as a federation of Flask servers using LangChain, the Qdrant vector database for retrieval, and GPT-3.5 Turbo for generation. We present a case study of three medical organizations, and find that the federation scheme results in no loss of quality against a centralized baseline. We explore the impact of resource accessibility among users with varying access permissions, observing that retrieval and generation quality degrade gracefully as document access is restricted. Finally, we evaluate performance in the key abilities required of RAG systems. We conclude that despite remaining challenges in achieving high retrieval quality and noise robustness, FedRAG is effective at synthesizing clinical trends through information integration across EHR documents.
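
As a sketch of the hierarchical retrieval design, assuming cosine similarity over precomputed unit-norm embeddings (the class names and access-policy check below are illustrative placeholders, not FedRAG’s actual interfaces):

    import numpy as np

    class Leaf:
        def __init__(self, docs, embs):
            self.docs, self.embs = docs, embs   # embs: (n_docs, d), unit-norm rows
        def retrieve(self, q, k, user):
            scores = self.embs @ q              # cosine similarity to the query
            top = np.argsort(-scores)[:k]
            return [(float(scores[i]), self.docs[i]) for i in top]

    class Router:
        def __init__(self, children, policy):
            self.children, self.policy = children, policy
        def retrieve(self, q, k, user):
            hits = []
            for child in self.children:
                if self.policy(user, child):    # route only where access is allowed
                    hits.extend(child.retrieve(q, k, user))
            hits.sort(key=lambda h: -h[0])      # aggregate the children's results
            return hits[:k]

    leaf = Leaf(["note A", "note B"], np.eye(2, 4))
    root = Router([leaf], policy=lambda user, child: True)
    print(root.retrieve(np.eye(4)[0], k=1, user="clinician_1"))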
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rethinking the Evaluation of Compositional Reasoning for Modern VLMs</title>
<link href="https://hdl.handle.net/1721.1/157010" rel="alternate"/>
<author>
<name>Huang, Irene Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/157010</id>
<updated>2024-09-25T03:31:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Rethinking the Evaluation of Compositional Reasoning for Modern VLMs
Huang, Irene Y.
Recent advancements in modern Vision-Language Models (VLMs), comprising a visual encoder coupled with a Large Language Model (LLM) decoder, have demonstrated remarkable proficiency in Compositional Reasoning (CR). CR entails grasping the significance of attributes, relations, and word order. This prompts a crucial question: have VLMs effectively tackled the CR challenge? We conjecture that existing CR benchmarks may not adequately push the boundaries of modern VLMs due to their reliance on a negative text generation pipeline. Consequently, the negatives produced often deviate either as outliers from the natural language distribution learned by VLMs’ LLM decoders or as improbable within the corresponding image context. To redress these limitations, we propose a novel pipeline integrating GPT-4V alongside a suite of contemporary open-source VLMs. Through the application of in-context learning and prompt engineering methodologies, our pipeline autonomously generates, evaluates, and selects challenging compositional reasoning questions to establish a robust CR benchmark, which is subsequently validated manually. The meticulously curated dataset reveals a noteworthy decrease in CR performance of up to 45% compared to preceding benchmarks, thereby reinstating the CR challenge even for state-of-the-art VLMs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling and Implementation of the U.S. Hydrogen Production Tax Credit</title>
<link href="https://hdl.handle.net/1721.1/157008" rel="alternate"/>
<author>
<name>Giovanniello, Michael A.</name>
</author>
<id>https://hdl.handle.net/1721.1/157008</id>
<updated>2024-09-25T04:05:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling and Implementation of the U.S. Hydrogen Production Tax Credit
Giovanniello, Michael A.
Low-carbon hydrogen (H2) could contribute to achieving long-term climate goals by supporting the decarbonization of several hard-to-abate industries. The U.S. Inflation Reduction Act includes a tiered hydrogen production tax credit (PTC) awarded for producing H2 below certain emissions thresholds. One pathway for producing PTC-eligible H2 is water electrolysis supplied with low-carbon electricity. But assessing the systems-level emissions associated with electrolytic H2 is challenging, not only because instantaneous power flows from a particular producer cannot be directly associated with a particular user, but also because of the risk that electrolyzers might divert clean electricity away from the grid. Following the passage of the IRA, there has been a vigorous debate focusing primarily on the time-matching requirements — that is, the period over which electricity use must match production from contracted generators — for grid-connected H2 production to receive the PTC.&#13;
&#13;
Applying a macro-energy systems model to case studies of Texas and Florida, we show that divergent results in the literature, which presented a conundrum for regulators trying to pick between policy options, are explained by different interpretations of the proposed “additionality” requirement. Specifically, the emissions associated with H2 production under different “time-matching” requirements are conditional on how additionality is modeled. We further show that the interaction of these qualifying time-matching requirements with other energy system policies could reduce the merits of more stringent time-matching requirements. For instance, if a region has relatively high renewable portfolio standards (RPSs) in place to enable grid decarbonization, we show that less stringent (and therefore less costly) time-matching requirements are sufficient to avoid any increases in system-level emissions. &#13;
&#13;
Building on this analysis, we explore how uncertainty in inter-annual variable renewable energy (VRE) generation complicates the implementation of stringent PTC requirements. We confirm that a system design that accounts for inter-annual VRE uncertainty comes at a cost premium — a reality ignored by the existing literature. In addition, we show that inter-annual VRE uncertainty will necessitate the formation of markets for hourly electricity attribution certificates (EACs) to make up for inevitable shortfalls in the supply of contracted VRE electricity under an hourly time-matching requirement. &#13;
&#13;
We recommend that the Treasury adopt a phased and regionally differentiated approach to implementing the PTC — regions without RPS policies could transition to an hourly time-matching requirement in the mid-term (e.g., by 2030), whereas regions with sufficient RPS policies could continue with looser requirements. In addition to PTC implementation, these results are relevant to the broader field of Scope 2 emissions accounting for voluntary (e.g. corporate net-zero goals) and regulatory purposes. As more private enterprises, such as data center owners, pursue voluntary measures to reduce their electricity-related emissions, our work provides a foundation for further research into clean energy procurement standards (voluntary or mandated) that support power sector decarbonization.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tradeoffs Between Aboveground and Soil Carbon Accumulation Following Forestation</title>
<link href="https://hdl.handle.net/1721.1/157007" rel="alternate"/>
<author>
<name>Schug, Jennifer Lin</name>
</author>
<id>https://hdl.handle.net/1721.1/157007</id>
<updated>2024-09-25T03:54:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Tradeoffs Between Aboveground and Soil Carbon Accumulation Following Forestation
Schug, Jennifer Lin
Recent decades have seen a rapid increase in global warming due to anthropogenic greenhouse gas emissions. One prevalent climate change mitigation strategy is tree planting, as trees sequester large amounts of carbon in their aboveground biomass. However, there is emerging evidence that under some conditions, soil carbon decreases following forestation, offsetting the carbon accumulated aboveground and rendering carbon sequestration efforts ineffective. The factors driving these changes in net ecosystem carbon are currently unknown. Here, we conducted a global meta-analysis on the factors affecting aboveground biomass versus soil carbon (SOC) accumulation following forestation in grasslands and croplands. We considered the effects of prior land use, regrowth strategy, mycorrhizal associations, and environmental factors on total ecosystem carbon and SOC accumulation over time. Results indicate that while there is a tradeoff between SOC and aboveground carbon accumulation, the loss of SOC does not negate the increase in aboveground carbon following forestation. Sites with low initial SOC before forest establishment accumulate more SOC than sites with high SOC, regardless of prior land use. Overall, forest stand age, prior land use, regrowth strategy, and mycorrhizal associations drive carbon accumulation over time and should be considered in the context of future forestation projects implemented for carbon sequestration.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Segment Anything on the Edge</title>
<link href="https://hdl.handle.net/1721.1/157006" rel="alternate"/>
<author>
<name>Stiles, Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/157006</id>
<updated>2024-09-25T03:02:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Efficient Segment Anything on the Edge
Stiles, Nicole
The Segment-Anything Model (SAM) is a vision foundation model facilitating promptable and zero-shot image segmentation.  SAM-based models have a wide range of applications including autonomous driving, medical image segmentation, VR, and data annotation.  However, SAM models are highly computationally intensive and lack a flexible prompting mechanism.  On an NVIDIA A100 GPU, SAM runs at 11 frames/second, missing the mark for real-time performance and preventing the usage of SAM on edge devices.  To tackle both the latency constraint and the prompt flexibility constraint, we introduce GazeSAM, a new real-time gaze-prompted image segmentation model.  GazeSAM uses face and gaze detection to determine the direction of a user's gaze, object detection to find candidate objects of interest, depth estimation to perform background detection, and image segmentation to generate masks.  The final output is a mask segmenting the object at the focus of the user's gaze.  By performing algorithmic optimizations, employing inference engines, and applying FP16 and INT8 quantization, we achieve a 24x speedup relative to the baseline FP32 PyTorch implementation.  GazeSAM runs at a speed of over 30 FPS, enabling real-time performance on an RTX 4070 GPU.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Soil moisture-based drought monitoring using remote sensing over Africa</title>
<link href="https://hdl.handle.net/1721.1/157005" rel="alternate"/>
<author>
<name>Lu, Catherine S.</name>
</author>
<id>https://hdl.handle.net/1721.1/157005</id>
<updated>2024-09-25T04:10:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Soil moisture-based drought monitoring using remote sensing over Africa
Lu, Catherine S.
Agricultural droughts, or persistent deficits in soil moisture, can have severe consequences on crop production and can result in economic crisis and widespread food insecurity. The impacts of drought are especially relevant in Africa, where agriculture is largely supported by rainfall. Currently, in contrast to developed regions, continental-scale drought monitoring systems for Africa are scarce and are limited by the number of in-situ observations available for model validation. In this study, we use soil moisture data gathered from the Soil Moisture Active Passive (SMAP) mission, with dates ranging from April 2015 to December 2023, to develop a drought monitoring system that incorporates seasonality and climatology. Monthly drought thresholds are developed based on percentiles of soil moisture found in previous literature, creating location-specific thresholds of drought for each month. These data were applied at the continental, regional, and country levels to reconstruct historical records of drought throughout the SMAP time record (time series) and localities of drought intensities for a given time period (drought maps). Additionally, a methodology of exponential time filtering is explored to convert surface soil moisture from SMAP into root-zone soil moisture, which can be more relevant for agricultural production. The reconstructed historical drought results align with literature on drought events in regions of Africa (e.g. the 2017-18 drought anomalies). For future events, this study could inform drought monitoring through remote sensing and allow for measures of drought response to improve overall food security.
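
The exponential time filter referenced here is typically the recursive soil-water-index formulation; a sketch, with the characteristic timescale T treated as an assumed parameter:

    import numpy as np

    # Recursive exponential filter converting surface soil moisture (ssm,
    # observed at times t in days) into a root-zone proxy; T (days) is an
    # assumed value, not the study's calibrated one.
    def exp_filter(t, ssm, T=20.0):
        swi = np.empty_like(ssm, dtype=float)
        swi[0], K = ssm[0], 1.0
        for n in range(1, len(ssm)):
            K = K / (K + np.exp(-(t[n] - t[n - 1]) / T))
            swi[n] = swi[n - 1] + K * (ssm[n] - swi[n - 1])
        return swi

    # Location-specific monthly drought threshold, e.g. the 20th
    # percentile of that calendar month's soil moisture record.
    def monthly_threshold(months, sm, month, pct=20):
        return np.percentile(sm[months == month], pct)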
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techno-economic Analysis of Deuterium-Tritium Magnetic Confinement Fusion Power Plants</title>
<link href="https://hdl.handle.net/1721.1/157003" rel="alternate"/>
<author>
<name>Araiinejad, Layla</name>
</author>
<id>https://hdl.handle.net/1721.1/157003</id>
<updated>2024-09-25T03:09:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Techno-economic Analysis of Deuterium-Tritium Magnetic Confinement Fusion Power Plants
Araiinejad, Layla
This thesis presents the techno-economic analysis of Deuterium-Tritium Magnetic Confinement Fusion Power Plants (FPP), tailored to enhance the economic viability and scalability of FPPs in response to global energy challenges and climate change. Amidst a backdrop of substantial investments in fusion technology, totaling $6.2 billion to date, this study critically assesses the overnight capital costs of an FPP that hosts ARAI, a 350 MWe tokamak reactor based on the MIT ARC fusion concept. This research evaluates the economic viability of constructing an Nth-of-a-kind ARAI-FPP. The overnight capital costs for ARAI-FPP are estimated to range between $8,800/kW and $22,200/kW, with this variation largely driven by differing regulatory and manufacturing assumptions. The overall cost breakdown is found to be similar to past and recent fusion literature, where the direct cost of fusion reactor equipment is the largest cost driver. The Levelized Cost of Electricity is estimated to be between $140/MWh and $550/MWh. The findings aim to deepen the understanding of absolute and relative cost drivers in fusion energy and suggest strategies to improve its economic feasibility. The analysis highlights the significant role of fabrication costs and regulatory frameworks in influencing cost dynamics.
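
For orientation, a levelized cost of electricity can be reproduced from an overnight capital cost with a capital recovery factor; the discount rate, lifetime, capacity factor, and fixed operations-and-maintenance fraction below are illustrative assumptions, not the thesis’s inputs:

    # Back-of-envelope LCOE from overnight capital cost (all parameters
    # other than capex_per_kw are assumed values for illustration).
    def lcoe(capex_per_kw, rate=0.07, years=30, cf=0.85, om_frac=0.03):
        crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
        annual_cost = capex_per_kw * (crf + om_frac)   # $ per kW-year
        mwh_per_kw = 8.760 * cf                        # MWh per kW-year
        return annual_cost / mwh_per_kw                # $ per MWh

    print(round(lcoe(8800)), round(lcoe(22200)))   # roughly 131 and 330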
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Force Dynamics of the Rat Lateral Gastrocnemius Muscle after Undergoing Sensory Protection</title>
<link href="https://hdl.handle.net/1721.1/157001" rel="alternate"/>
<author>
<name>Gutierrez Arango, Samantha</name>
</author>
<id>https://hdl.handle.net/1721.1/157001</id>
<updated>2024-09-25T03:35:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Force Dynamics of the Rat Lateral Gastrocnemius Muscle after Undergoing Sensory Protection
Gutierrez Arango, Samantha
The sensory protection procedure, involving the reinnervation of a motor-denervated muscle with a sensory nerve, has shown promise in preserving muscle function and structure. This thesis investigates the impact of sensory protection on the force dynamics and muscle architecture of the lateral gastrocnemius muscle in a rat animal model. Using a within-subjects experimental design, this preliminary study compared Sensory Protected and contralateral Intact muscles within a cohort of four rats. In situ ergometry experiments suggest that normalized Force-Velocity-Power (FVP) properties may be largely preserved after sensory protection, with small percent differences in normalized FVP curves between the Sensory Protected muscles and contralateral muscle controls. Key FVP parameters such as peak velocity and specific peak power exhibited higher percent differences for the Sensory Protected muscles, while pennation angles and physiological cross-sectional area exhibited lower percent differences, suggesting that sensory reinnervation may influence muscle structure and fundamental force dynamics. Despite limitations, such as the small sample size, the study lays the groundwork for future research investigating the cellular and molecular mechanisms underlying the observed changes. The findings highlight the potential of Sensory Protected muscles as biological actuators in prosthetic devices, and suggest that sensory reinnervation may be a promising strategy to maintain or restore muscle function in individuals with motor impairment.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Translations: Designing Restorative Listening Experiences in the Age of Social Fragmentation</title>
<link href="https://hdl.handle.net/1721.1/156998" rel="alternate"/>
<author>
<name>Obeng-Marnu, Naana</name>
</author>
<id>https://hdl.handle.net/1721.1/156998</id>
<updated>2024-09-25T03:48:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Translations: Designing Restorative Listening Experiences in the Age of Social Fragmentation
Obeng-Marnu, Naana
This thesis builds on a body of sociotechnical research at the MIT Center for Constructive Communication that draws upon "ancient wisdoms" of dialogue and listening and harnesses the power of technology to inform the design of dialogue spaces that promote deep, meaningful, and authentic conversations. Our approach hinges on the belief that society functions best when we hear and understand each other, an outcome that our work strives to advance by exposing people to the personal stories of others in ways that connect rather than divide. I take inspiration from anthropological practices and recent Data Humanism and Activism epistemologies to derive a set of design considerations for restorative interfaces. These principles inform the development of Translations, an interactive experience that invites audiences to more deeply engage with a curated collection of stories surfaced during small group facilitated conversations. The design of this visual and auditory experience allows audiences to explore stories they may otherwise not hear through websites that center thematic summaries and high level insight visualizations. The selected stories are curated using AI emotion analysis and sensemaking, which are leveraged to draw the user’s attention to moments of interest across conversations, such as moments of affirmation. The efficacy of this curation method in engendering empathy and emotional disruption, precursors to restorative listening, is evaluated, and the results from user tests of, and interviews about, the overarching interface are discussed. Ultimately, this thesis presents both a framework for automatic curation of audio narratives as well as an interactive interface to present these selected stories, both of which have wide-ranging applications in the media and civic space.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Energy-Efficient Real-Time Hardware Acceleration for Gaussian Fitting</title>
<link href="https://hdl.handle.net/1721.1/156997" rel="alternate"/>
<author>
<name>Wojtyna, Adrianna D.</name>
</author>
<id>https://hdl.handle.net/1721.1/156997</id>
<updated>2024-09-25T03:10:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Energy-Efficient Real-Time Hardware Acceleration for Gaussian Fitting
Wojtyna, Adrianna D.
Micro-robots play an important role in numerous tasks, including search and rescue, exploration, and navigation. A significant challenge to their deployment is their limited energy capacity, which constrains the computation such systems can complete. Specifically, 3D mapping algorithms significantly contribute to the compute power footprint as a result of repeated memory accesses. A promising approach involving Gaussian Mixture Models (GMMs), the Single-Pass Gaussian Fitting (SPGF) algorithm, allows for real-time 3D mapping with minimal memory and energy requirements due to its single-pass processing of input data. To further reduce energy consumption, we propose the design of an FPGA (Field Programmable Gate Array)-based hardware accelerator that performs Gaussian fitting based on the SPGF algorithm with 10.4× lower energy per image (based on post-implementation power analysis) compared to the original software implementation. By using fixed-point numerical representation and concurrent processing of data inputs, our proposed hardware accelerator, when operating at 100 MHz, is capable of processing depth images at an average rate of 303.09 frames per second (fps), a 7.97× improvement over the original software implementation of SPGF (32 fps). We also demonstrated 46.1× lower average FPGA resource utilization compared to the previously proposed hardware accelerator for GMMs. Our proposed design was demonstrated as part of the complete subsystem, allowing for visualization of the constructed map in real time. The design was demonstrated to perform at 100 MHz in isolation and verified with a 50 MHz subsystem on an AMD Virtex UltraScale+ VCU118 FPGA.
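
The abstract does not reproduce SPGF’s update equations; the essence of any single-pass Gaussian fit, however, is a running update of the mean and covariance, as in this Welford-style sketch for streaming 3D points:

    import numpy as np

    class StreamingGaussian:
        # One-pass mean/covariance estimate; a simplification of what a
        # single-pass fitter maintains per surface segment.
        def __init__(self, dim=3):
            self.n, self.mean = 0, np.zeros(dim)
            self.M2 = np.zeros((dim, dim))   # sum of outer-product deviations
        def update(self, x):
            self.n += 1
            d = x - self.mean
            self.mean += d / self.n
            self.M2 += np.outer(d, x - self.mean)
        @property
        def cov(self):
            return self.M2 / max(self.n - 1, 1)

    g = StreamingGaussian()
    for p in np.random.default_rng(1).standard_normal((1000, 3)):
        g.update(p)
    print(g.mean, g.cov)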
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimation of County-Level Evapotranspiration and Irrigation using &#13;
High-Resolution Planet Satellite Data</title>
<link href="https://hdl.handle.net/1721.1/156996" rel="alternate"/>
<author>
<name>Wickman, Sydney</name>
</author>
<id>https://hdl.handle.net/1721.1/156996</id>
<updated>2024-09-25T03:54:12Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Estimation of County-Level Evapotranspiration and Irrigation using &#13;
High-Resolution Planet Satellite Data
Wickman, Sydney
Increased agricultural production has spurred the need for irrigated land in areas that may not be supported by surface water. Instead, groundwater is primarily used for irrigation in states such as Kansas to supplement the water needed for this land. The increase in groundwater use for irrigation may be contributing to areas of increasing groundwater decline, and more precise tracking of irrigation should take place on a larger, regional scale. This will allow for more effective tracking of irrigation trends and their possible effects. This thesis tests the challenges and possibilities of applying the Backward-Averaged Iterative Two-Source Surface temperature and energy balance Solution (BAITSSS) model with high-resolution PlanetScope (Planet) satellite data to the county of Cheyenne, Kansas. The drop in reflectance observed in fields in the Planet satellite data was used as a signal for the first irrigation event, and the model was run forward from that point. The results demonstrate that BAITSSS evapotranspiration (ET) is comparable to the OpenET model, with BAITSSS overall estimating higher ET in agricultural areas than OpenET. The irrigation results, however, are underestimated; many limiting factors could be adjusted with further consideration. More research should be conducted toward the efficient and effective running of the BAITSSS model on a larger region.
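
An illustrative version of the irrigation-onset signal described here, detecting the first sharp relative drop in a field’s reflectance time series (the 10% threshold is an assumed value, not the thesis’s calibration):

    import numpy as np

    def first_irrigation(dates, reflectance, drop=0.10):
        # Relative decrease between consecutive observations.
        rel_drop = -np.diff(reflectance) / reflectance[:-1]
        idx = np.flatnonzero(rel_drop >= drop)
        return dates[idx[0] + 1] if idx.size else None

    dates = np.array(["05-01", "05-06", "05-11", "05-16"])
    refl = np.array([0.32, 0.31, 0.25, 0.26])   # drop on 05-11
    print(first_irrigation(dates, refl))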
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing a Smartwatch App for Automated Targeted Memory Reactivation</title>
<link href="https://hdl.handle.net/1721.1/156994" rel="alternate"/>
<author>
<name>Podrug, Anita</name>
</author>
<id>https://hdl.handle.net/1721.1/156994</id>
<updated>2024-09-25T03:48:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Designing a Smartwatch App for Automated Targeted Memory Reactivation
Podrug, Anita
Targeted Memory Reactivation (TMR) experiments have shown potential in enhancing learning and memory by pairing sensory stimuli with specific memories during learning and reintroducing these stimuli during slow-wave sleep. This process aids in memory consolidation, where recent neural representations are reactivated and transferred to long-term storage. Traditionally, TMR has been limited to laboratory settings. For my thesis, I developed a TMR system usable at home and investigated its effectiveness on memory recall of a nature documentary, using vibration as a stimulation cue. I developed a machine-learning model that performs sleep stage classification from heart rate and motion data that can be collected from a smartwatch in real time. Using this model, the smartwatch was programmed to deliver TMR cues when participants enter stage N3 (slow-wave) sleep. This TMR system was found to improve recall 24 hours and 1 week after the initial learning, but the results were not statistically significant due to an insufficient amount of data. Further studies would be required to confirm these results. This advancement of at-home TMR can be extremely useful for further understanding sleep’s role in memory and can provide a system for the general public to improve their learning and memory. Additionally, the development of an automated real-time sleep-stage classification model can enable more reliable and better-quality experiments in a variety of future sleep studies.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for Extracting and Analyzing Political Content on TikTok</title>
<link href="https://hdl.handle.net/1721.1/156993" rel="alternate"/>
<author>
<name>Fadel, Marie Diane</name>
</author>
<id>https://hdl.handle.net/1721.1/156993</id>
<updated>2024-09-25T04:09:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Methods for Extracting and Analyzing Political Content on TikTok
Fadel, Marie Diane
In this thesis, I investigate the dynamics of political discourse on TikTok, with a focus on crafting a comprehensive methodology for extracting and analyzing political content related to the 2024 U.S. Presidential Election. This research utilizes a blend of advanced computational tools and crowd-sourced evaluations to delve into the mechanisms through which political influence is both exerted and perceived on the platform. For data collection, the study employed TikAPI, a tool designed for systematic scraping of TikTok videos, which targeted specific political hashtags to amass a substantial dataset. This dataset was analyzed using a variety of innovative methods, including snowball sampling to ensure a representative range of political engagement, and integration with Python to automate the data collection process. Additionally, I utilized Large Language Models (LLMs) to evaluate the relevance and persuasive impact of the content, and these machine-generated insights were then benchmarked against human judgments. Overall, the findings indicate a slight preference for Republican discourse on TikTok. Moreover, I demonstrate that OpenAI’s GPT can effectively classify videos by topic, although human input remains essential for more nuanced tasks such as stance detection and evaluation of persuasive effect. This exploration into the political landscape of TikTok represents one of the first of its kind, with the primary aim of this thesis being to develop a methodology that will support future research in this field.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Developmental Change in Ego-Motion Experience Across Infancy</title>
<link href="https://hdl.handle.net/1721.1/156992" rel="alternate"/>
<author>
<name>Fuchs, Ariel</name>
</author>
<id>https://hdl.handle.net/1721.1/156992</id>
<updated>2024-09-25T04:05:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exploring Developmental Change in Ego-Motion Experience Across Infancy
Fuchs, Ariel
Humans flexibly and intuitively use vision to plan and guide navigation through the local environment. How does this ability develop in infancy? One possibility is that the development of visual representations for navigation is driven by passive exposure to the visual statistics of scenes. Another possibility is that active navigation experience using vision to plan and guide locomotion is the driving factor. In order to distinguish between these two hypotheses, it is necessary to understand the nature of infants’ early visual scene experience itself. Surprisingly little prior work has characterized infants’ early experiences with ego-motion through scenes, before and after learning to locomote. We use ecological momentary assessments to quantify infants’ exposure to ego-motion through scenes, and how that changes with locomotor experience. We found that pre-crawling infants who have never independently navigated already experience significant passive visual exposure to forward-facing ego-motion through scenes. Nevertheless, this experience increases substantially with age and locomotor status.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Contextual Predictability and Phonetic Reduction</title>
<link href="https://hdl.handle.net/1721.1/156991" rel="alternate"/>
<author>
<name>Martin, Kinan R.</name>
</author>
<id>https://hdl.handle.net/1721.1/156991</id>
<updated>2024-09-25T03:29:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Contextual Predictability and Phonetic Reduction
Martin, Kinan R.
Phonetic reduction is a process which alters the acoustic quality of a sound, often a vowel or word, to a perceived weaker or shorter state. Previous research suggests that the degree of reduction of a word is influenced by that word’s contextual predictability. However, the nature of how context direction and size govern phonetic reduction has not been thoroughly explored. The advancement of self-supervised language models provides a means to assign meaningful estimates of word predictability conditioned on different contexts. This paper explores the effect of contextual predictability on phonetic reduction, making use of such models. We train instances of GPT-2 on different context directions (past, future, and bidirectional) and context sizes (bigram vs. sentence) to provide measures of conditional word predictability, then use linear regression to quantify their correlation with a measure of phonetic reduction (word duration). Our results provide evidence that the contextual probability of a word given the following context correlates with word duration more strongly than probabilities conditioned on the past or bidirectional contexts, for both context sizes, suggesting that phonetic reduction may be a reliable indicator of reduced cognitive load in a speaker’s planning of the rest of an utterance.
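
The thesis trains its own GPT-2 instances over different context directions; as a sketch of the underlying quantity, the conditional log-probability of a word given its past context can be read off a pretrained left-to-right GPT-2 (a future-context model would instead be trained on reversed text, not shown here):

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def word_logprob(context, word):
        # log P(word | context), summed over the word's subword tokens.
        ctx = tok(context, return_tensors="pt").input_ids
        w = tok(" " + word, return_tensors="pt").input_ids
        ids = torch.cat([ctx, w], dim=1)
        with torch.no_grad():
            logp = torch.log_softmax(lm(ids).logits, dim=-1)
        return sum(logp[0, i - 1, ids[0, i]].item()
                   for i in range(ctx.size(1), ids.size(1)))

    # Surprisal in bits, the usual regressor against word duration:
    # surprisal = -word_logprob(context, word) / math.log(2)
    print(word_logprob("The cat sat on the", "mat"))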
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generation, Detection, and Evaluation of Role-play based Jailbreak attacks in Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/156989" rel="alternate"/>
<author>
<name>Johnson, Zachary D.</name>
</author>
<id>https://hdl.handle.net/1721.1/156989</id>
<updated>2024-09-25T04:01:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Generation, Detection, and Evaluation of Role-play based Jailbreak attacks in Large Language Models
Johnson, Zachary D.
While directly asking a Large Language Model (LLM) a harmful request (e.g. "Provide me instructions on how to build a bomb.") will most likely yield a refusal to comply due to ethical guidelines laid forth by developers (e.g. OpenAI), users can trick the LLM into providing this information using a tactic called a Role-play based Jailbreak Attack. This attack consists of instructing the LLM to take on the role of a fictional character that does not adhere to the model developer’s ethical guidelines and will comply with any request. Role-play based jailbreak attacks remain a critical safety issue and open-ended research question due to their success in getting an LLM to comply with a harmful request, as well as their ability to be generated without a formal technical background. Companies such as OpenAI employ manual tactics like red-teaming in order to enhance an LLM’s robustness against these attacks; however, these tactics may fail to defend against all role-play based jailbreak attacks due to their potentially limited ability to predict unseen attacks. In this work, we aim to better understand the landscape of role-play based jailbreak attacks so that we can precisely detect these attack attempts in the wild before they yield a harmful output from an LLM. Specifically, we focus on three main tasks: generating synthetic examples of role-play based jailbreak attack prompts; testing these role-play prompts on a target LLM in order to evaluate whether they successfully jailbreak the LLM, and labeling our prompts accordingly; and training a robust detection model that can precisely predict whether a role-play prompt will successfully yield a jailbreak attack in an LLM before being fed any malicious requests. Through these processes, we learn the following, respectively. 1) Out-of-the-box models such as GPT-4 are effective at generating successful role-play jailbreak attack prompts when given just a few examples via few-shot prompting. 2) We can automatically classify LLM responses as jailbroken or not with high accuracy using statistical methods including Principal Component Analysis (PCA) and Support Vector Machines (SVMs). 3) Most classification architectures are unable to perform the complex task of accurately predicting whether a role-play prompt will successfully yield a jailbreak attack. By better understanding the nature of role-play based jailbreak attacks, we hope to contribute to the research area of jailbreak attack detection in LLMs so that they can be robustly defended against in the future.
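
A minimal sketch of the response classifier in point 2, assuming a TF-IDF featurization (the abstract names only PCA and SVMs, so the features, example texts, and hyperparameters here are placeholders):

    from sklearn.decomposition import PCA
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import FunctionTransformer
    from sklearn.svm import SVC

    responses = ["I cannot help with that request.",
                 "Sure! As DAN, here is exactly how you would do it...",
                 "Sorry, that would violate my guidelines.",
                 "Of course, my character has no restrictions, so..."]
    labels = [0, 1, 0, 1]   # 0 = refusal, 1 = jailbroken

    clf = make_pipeline(
        TfidfVectorizer(),
        FunctionTransformer(lambda X: X.toarray()),  # PCA needs dense input
        PCA(n_components=2),
        SVC(kernel="linear"))
    clf.fit(responses, labels)
    print(clf.predict(["As an AI, I am unable to assist with that."]))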
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Benchmarking Graph Transformers Toward Scalability for Large Graphs</title>
<link href="https://hdl.handle.net/1721.1/156988" rel="alternate"/>
<author>
<name>Lim, Katherine S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156988</id>
<updated>2024-09-25T03:05:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Benchmarking Graph Transformers Toward Scalability for Large Graphs
Lim, Katherine S.
Graph transformers (GTs) have gained popularity as an alternative to graph neural networks (GNNs) for deep learning on graph-structured data. In particular, the self-attention mechanism of GTs mitigates the fundamental limitations of over-squashing, over-smoothing, and limited expressiveness that GNNs face. Furthermore, like transformers used for natural language processing and computer vision, GTs have the potential to become foundation models that can be used for various downstream tasks. However, current GTs do not scale well to large graphs, due to computational cost. Here, we formulated a GT architecture as part of a larger scheme to build a GT made scalable through hierarchical attention and graph coarsening. Specifically, our goal was to optimize the GT building block of the scalable GT. By adding GraphGPS-inspired message-passing neural network (MPNN) layers to a modified version of the Spectral Attention Network (SAN) and performing hyperparameter tuning, we built a GT architecture that performs comparably to GraphGPS on the node classification task on the Cora and CiteSeer datasets. Compared to the modified version of SAN that we started with, our architecture is faster to train and evaluate, and also obtains higher node classification accuracies on the Cora and CiteSeer datasets. Our results demonstrate how message passing can effectively complement self-attention in GTs such as SAN to improve node classification performance. With further architectural improvement, we expect our model to serve as an effective building block for scalable GTs. Such scalable GTs may be used for node classification on large graphs, a common task for industrial applications.
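
A compressed sketch of the kind of building block described here, combining a local message-passing update with global self-attention over all nodes (the actual architecture adds SAN’s spectral attention and positional encodings, omitted for brevity):

    import torch
    import torch.nn as nn

    class GPSBlock(nn.Module):
        # x: (n, d) node features; adj: (n, n) normalized adjacency.
        def __init__(self, d, heads=4):
            super().__init__()
            self.msg = nn.Linear(d, d)                  # MPNN message transform
            self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
            self.norm = nn.LayerNorm(d)

        def forward(self, x, adj):
            local = torch.relu(adj @ self.msg(x))       # neighborhood aggregation
            glob, _ = self.attn(x[None], x[None], x[None])  # full attention
            return self.norm(x + local + glob[0])       # combine and normalize

    block = GPSBlock(d=16)
    out = block(torch.randn(5, 16), torch.eye(5))       # (5, 16) updated features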
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsupervised Learning for Generative Scene Editing and&#13;
Motion</title>
<link href="https://hdl.handle.net/1721.1/156987" rel="alternate"/>
<author>
<name>Fang, David S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156987</id>
<updated>2024-09-25T03:53:22Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Unsupervised Learning for Generative Scene Editing and&#13;
Motion
Fang, David S.
Unsupervised learning for images and videos is important for many applications in computer vision. While supervised methods usually have the best performance, the amount of data curation and labeling that supervised datasets require makes it difficult to scale. On the other hand, unsupervised learning is more scalable, generalizable, and requires much less data curation, but is harder because it lacks a clear target objective. In this thesis, we propose two distinct lines of unsupervised learning work with generative applications: 1) BlobGSN and 2) optical flow estimation and flow generation with diffusion models. BlobGSN explores the unsupervised learning of spatially disentangled mid-level latent representations for 3D scenes in a generative context. Within this generative framework, we show that BlobGSN facilitates novel scene generation and editing. In a different vein, current state-of-the-art optical flow learning models rely on ground truth data collection for sequences of frames in videos. Unsupervised learning of optical flow, which would not require ground truth data, could theoretically leverage any publicly available video data for training. We explore different frameworks for unsupervised optical flow learning to tackle different problems such as photometric error, occlusion handling, and flow smoothness. Additionally, we propose a generative framework for generating optical flow from a single frame that can be trained in an unsupervised manner.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Battery Blueprint: Saudi Arabia’s Strategic Foray into the Battery Value Chain</title>
<link href="https://hdl.handle.net/1721.1/156985" rel="alternate"/>
<author>
<name>Alhakbani, Alanoud</name>
</author>
<id>https://hdl.handle.net/1721.1/156985</id>
<updated>2024-09-25T04:00:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Battery Blueprint: Saudi Arabia’s Strategic Foray into the Battery Value Chain
Alhakbani, Alanoud
This thesis evaluates Saudi Arabia’s potential to establish a foothold in the global battery industry, an industry that would be pivotal for its energy transition and economic diversification goals. Key enablers such as Saudi Arabia’s commitment to renewable energy and industrial growth in adjacent sectors, including the automotive and refining sectors, provide a foundation for entry into the battery value chain. However, the Kingdom must navigate barriers such as market competition and the need for technological expertise in advanced battery production, a market led by heavyweights like China and innovators across the globe. This study assesses the viability of a bottom-up technology catch-up approach for industrial competency in battery technology—a contrast to the top-down models employed by established players. The research comprises an in-depth analysis of enablers and barriers for technology catch-up utilizing a proposed assessment framework, and strategies for effectively localizing different parts of the battery value chain. The outcome aims to offer a strategic blueprint for Saudi Arabia to capitalize on the burgeoning demand for battery technology and enhance its global economic stature.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cracking Common Notions Relating Egg Strength to Impact&#13;
Orientation</title>
<link href="https://hdl.handle.net/1721.1/156984" rel="alternate"/>
<author>
<name>Sutanto, Antony</name>
</author>
<id>https://hdl.handle.net/1721.1/156984</id>
<updated>2024-09-25T03:19:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Cracking Common Notions Relating Egg Strength to Impact&#13;
Orientation
Sutanto, Antony
The chicken egg possesses a shell structure that is conventionally thought to be strongest when loaded on its vertical poles, particularly the sharp end, which resembles a structural arch. This notion has influenced educational activities such as the "egg drop challenge", where participants typically orient the egg with its sharp end facing downwards to improve its chances of resistance to fracture upon impact. This study tests this conventional wisdom by investigating the egg's strength, or energy sustained before rupture, depending on its orientation. First, static compression tests were conducted to determine the maximum energy absorbed by the egg based on its compression axes. Eggs yielded greater deformations and energy absorbed before rupture when compressed horizontally rather than vertically, suggesting potential advantages under dynamic loading conditions. To validate that these trends also held under dynamic loading, drop tests from varying heights were performed to assess the kinetic energy required to fracture the egg. Contrary to intuitive understanding, eggs dropped on their equators could undergo greater drop heights without rupturing compared to those dropped on their vertical poles. This unexpected finding challenges the prevailing notion of the egg's structure and suggests a new perspective on its impact behavior.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enforcing Identification and Authentication Policies at Scale in a Cloud Microservices Architecture</title>
<link href="https://hdl.handle.net/1721.1/156983" rel="alternate"/>
<author>
<name>Sinha, Varnika</name>
</author>
<id>https://hdl.handle.net/1721.1/156983</id>
<updated>2024-09-25T03:59:34Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enforcing Identification and Authentication Policies at Scale in a Cloud Microservices Architecture
Sinha, Varnika
As cloud adoption increases, cloud providers are competing to build more robust and secure platforms in order to keep growing and attract more users by ensuring that their data is highly available yet not susceptible to malicious attacks. Many cloud platforms are distributed systems based on a microservices architecture in which many services communicate with one another. Communication among services should be authenticated to implement defense in depth rather than relying solely on the security of networks and infrastructure. However, these services can number in the hundreds or thousands, which multiplies the specialized secrets needed to provide authentication. Such large numbers of secrets are hard to manage and to track in the case of exposure, creating a risk of misconfiguration and leaks. We implement a framework that accounts for these secrets by managing their creation, rotation, and deletion in accordance with the existing architecture of the platform, using a Kubernetes custom resource and controller, and that ensures a secret with the correct permissions is always present when needed.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hollywood Workers vs Tech: In Theory and In the News</title>
<link href="https://hdl.handle.net/1721.1/156982" rel="alternate"/>
<author>
<name>Cmehil-Warn, Christian</name>
</author>
<id>https://hdl.handle.net/1721.1/156982</id>
<updated>2024-09-25T03:28:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Hollywood Workers vs Tech: In Theory and In the News
Cmehil-Warn, Christian
The 2023 SAG-AFTRA and WGA strikes in Hollywood were notable because of their explicit ties to the changing relationship between technology and labor. In particular, disputes around using generative AI in the workplace were widely reported in the news. This thesis examines the Hollywood strikes in two parts. The first part takes a political economy approach to examine the underlying causes of these changes in technology-labor relations. In particular, the thesis argues that an industry shift to distribution via streaming services, alongside increased vertical integration, brought about new imperatives for production and exponentially increased levels of data capture, enabling the labor conditions that led to the strike. Theories of creative labor and technology-labor relations are used to describe the tensions. The resulting SAG-AFTRA and WGA collective bargaining agreements are then examined within these framings. The second part of the thesis quantitatively explores the relationship between news media (which has its own complex relationship with technology) and the Hollywood strikes using natural language processing techniques. Sentiment analysis and sentence embeddings are used to quantify and compare news articles across different characteristics. The results of the analysis are inconclusive.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decentralized Data Markets</title>
<link href="https://hdl.handle.net/1721.1/156981" rel="alternate"/>
<author>
<name>Lu, Charles</name>
</author>
<id>https://hdl.handle.net/1721.1/156981</id>
<updated>2024-09-25T03:30:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Decentralized Data Markets
Lu, Charles
Acquiring access to massive amounts of data has become fundamental to state-of-the-art artificial intelligence systems. However, as data value increases, data owners have challenged current norms and practices of data acquisition. Data marketplaces have been promoted to fairly compensate data producers and incentivize greater data sharing. In this thesis, I describe a decentralized model of data markets to overcome privacy concerns in siloed, data-limited domains such as healthcare. I propose two federated techniques to automatically select a subset of data sellers and datapoints for a buyer given some sample data. I also examine the socio-technical implications of emerging data markets for medical data and synthesize ethical principles for medical data marketplaces. Decentralized data markets have the potential to enable new AI economies through more robust, transparent, and participatory data sharing platforms. Through the contributions in this thesis, I hope to make a positive step towards realizing a future where transformative data-enabled technologies such as general-purpose machine learning systems are developed more responsibly and the benefits are distributed more equitably.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Policy, People, and Place Impacts of Mining for the&#13;
Clean Energy Transition in the US</title>
<link href="https://hdl.handle.net/1721.1/156978" rel="alternate"/>
<author>
<name>Randall, Abigail Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/156978</id>
<updated>2024-09-25T03:29:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Policy, People, and Place Impacts of Mining for the&#13;
Clean Energy Transition in the US
Randall, Abigail Marie
To meet the growing demands of the energy transition, we need to rapidly deploy mines to supply the minerals for clean energy technologies. This presents a set of challenges, or tensions, at the energy transition level, policy level, and mine level. This thesis seeks to answer two questions: What are the tensions for mining in the US? How do we decide where to permit these mines given the realities of environmental and community impacts? To address the tensions at the energy transition level, I establish copper, cobalt, nickel, and lithium, or energy transition minerals, as the focus of this thesis. Then, to address policy tensions, I conducted a geospatial analysis and found that 38% of the US’ energy transition mineral resources are on or near difficult-to-permit lands, with 92.7% of those resources being copper. To understand how these tensions play out in practice, I created three case studies through a series of interviews and a review of public comments. The first case study is of the East Boulder and Stillwater Mines. In this case, stakeholders came together to form a Good Neighbor Agreement, or a legally binding contract between the mine owner and grassroots community organizations. The agreement is an adaptable framework for mine decision making, which shows how stakeholders can work creatively within the tensions of mining for the energy transition. The second case, the Twin Metals Minnesota Case Study, shows how political tensions can introduce risk and uncertainty in the mine permitting process and prevent a mine from moving forward. The third is an Indigenous lands case study centered around the Thacker Pass lithium mine, which illustrates how a tensions framing is critical when the tradeoff framing has historically risked Indigenous sovereignty over their lands. The identified tensions flow into the policy recommendations, which are to: 1. Replicate solutions that maximize gains to stakeholders, 2. Rely on currently underutilized policy options to increase transparency and consolidate review in the permitting process, and 3. Look downstream in the energy transition to learn from newer industries. Taken together, this thesis tells a story of what types of mines need to be deployed in the US to meet the needs of the clean energy transition, whether and where mines can be deployed under current policy constraints in the US, and how mines are deployed in practice.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unlocking Collective Intelligence in Decentralized AI</title>
<link href="https://hdl.handle.net/1721.1/156977" rel="alternate"/>
<author>
<name>Gupta, Gauri</name>
</author>
<id>https://hdl.handle.net/1721.1/156977</id>
<updated>2024-09-25T03:47:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Unlocking Collective Intelligence in Decentralized AI
Gupta, Gauri
In the current evolving digital landscape, vast repositories of data and knowledge often remain siloed and untapped due to privacy concerns and centralized control. Thus, despite the transformative potential of artificial intelligence, its utilization in societal sectors lags behind other industries. For example, in healthcare, data privacy concerns and a lack of incentives and trust in the system prevent collaboration on a large scale. This necessitates the development of efficient, privacy-preserving methods for decentralized learning that generate wisdom whose quality is on par with that of the data-centralized case. It involves first identifying and creating essential building blocks that encourage collaboration while preserving the decentralized nature of these critical digital paradigms. A key challenge here is to facilitate collaboration among distrustful, disconnected, and disincentivized entities possessing distinct assets such as data, models, and computation resources. Harnessing the collective wisdom latent within decentralized networks will unlock new avenues for innovation and human collaboration. Therefore, the primary aim of this thesis is to expedite AI adoption in decentralized systems by introducing novel algorithms and systems capable of extracting collective intelligence while preserving privacy. &#13;
&#13;
This thesis addresses the following research questions: First, it delves into methods for training machine learning models collaboratively while simultaneously protecting the privacy of raw data and the proprietary nature of individual models. Second, it explores the coordination mechanisms among system nodes in the absence of a central authority or trusted server to ensure orderly collaboration. Specifically, it answers questions such as: Whom should a node talk to? When does random collaborator selection work? Finally, it investigates strategies for conducting crowd-sourced decision-making to obtain population-level predictive results, scaling efficiently to encompass millions of agents.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Benchmarking Pavement Environmental Performance Using Data-Driven Modeling and Policy</title>
<link href="https://hdl.handle.net/1721.1/156976" rel="alternate"/>
<author>
<name>Vaidyanath, Varsha</name>
</author>
<id>https://hdl.handle.net/1721.1/156976</id>
<updated>2024-09-25T03:59:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Benchmarking Pavement Environmental Performance Using Data-Driven Modeling and Policy
Vaidyanath, Varsha
Recently, federal and state governments have been implementing policy to reduce the embodied emissions coming from the production of materials. However, pavement materials impact emissions throughout the pavement lifecycle, not just during production. This paper addresses how a new pavement evaluation system and policy framework might drive better solutions for reducing carbon emissions from a climate change standpoint. The main components include: establishing why current pavement rating systems and current policy are not sufficient; performing a data-driven analysis with a grading and scorecard system to assess, compare, and summarize pavement design quality; and proposing an effective policy framework to implement the system.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Mechanical Behavior of a Traditional Japanese Joint for Flexible Structural Design</title>
<link href="https://hdl.handle.net/1721.1/156975" rel="alternate"/>
<author>
<name>Ortea Varela, Ines</name>
</author>
<id>https://hdl.handle.net/1721.1/156975</id>
<updated>2024-09-25T03:13:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exploring the Mechanical Behavior of a Traditional Japanese Joint for Flexible Structural Design
Ortea Varela, Ines
This research examines the mechanical behavior of a traditional Japanese joint, the Mortised Rabbeted Oblique (MRO) splice. Through computational simulations employing Finite Element Analysis (FEA), the study analyzes a continuous beam and an unmodified MRO splice, revealing expected behavior in the beam and unexpected stress concentration and displacement asymmetry in the splice. Topology optimization of the splice’s end sections yields iterations with varying volume reductions (50%, 70%, and 90%), showing significant topology differences between the two ends. Subsequently, all iterations were fabricated through 3D printing using PLA and subjected to three-point bending testing. Experimental results confirm the computational findings, demonstrating reduced strength in the MRO splice compared to the continuous beam. A surprising increase in ductility and maximum load resisted by the iterations with 50% and 70% volume reductions is observed. This finding underscores how modifying the end beams significantly influences the overall behavior of the splice mechanism.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Fairness of Artificial Intelligence Models for Radiology Image Classification</title>
<link href="https://hdl.handle.net/1721.1/156974" rel="alternate"/>
<author>
<name>Sandadi, Varsha</name>
</author>
<id>https://hdl.handle.net/1721.1/156974</id>
<updated>2024-09-25T03:26:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evaluating Fairness of Artificial Intelligence Models for Radiology Image Classification
Sandadi, Varsha
With the increasing prevalence of AI-assisted decision-making in the healthcare domain, evaluating the fairness of machine learning models is more central than ever. Measuring the fairness of medical decision-support systems has enormous impacts on patients of different backgrounds and can influence how clinicians make decisions. In this study, we conduct a fairness analysis on the top 8-10 performing machine learning and artificial intelligence models from the Radiological Society of North America cervical spine fracture detection challenge and abdominal trauma detection challenge. Seven metrics are used for a more comprehensive assessment of fairness. Our findings indicate that cervical spine fracture detection models exhibit overall fairness, while abdominal trauma detection models demonstrate some unfairness in specific injury regions, possibly due to limited sample size. We also explore the performance of top models from the intracranial hemorrhage detection challenge across clinician-labeled "easy," "medium," and "hard" cases, revealing a lower accuracy rate on hard cases. This study underscores the need for additional model testing and comprehensive data representation to ensure fairness before real-world deployment in healthcare systems.
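The seven metrics are not enumerated in the abstract; as a hedged illustration of the kind of group-fairness computation involved, the sketch below evaluates two standard metrics (demographic parity difference and equal-opportunity gap) on invented toy predictions.

```python
# Sketch: two common group-fairness metrics for binary predictions.
# The thesis's seven metrics are not listed in the abstract; these two
# are standard examples, shown only to make the evaluation concrete.
import numpy as np

def demographic_parity_diff(y_pred, group):
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)  # gap in positive-prediction rate

def equal_opportunity_gap(y_true, y_pred, group):
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())  # TPR within group g
    return max(tprs) - min(tprs)

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_diff(y_pred, group))
print(equal_opportunity_gap(y_true, y_pred, group))
```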
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Sim-to-Real Robot Parkour from RGB Images</title>
<link href="https://hdl.handle.net/1721.1/156972" rel="alternate"/>
<author>
<name>Jenkins, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/156972</id>
<updated>2024-09-25T03:26:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Learning Sim-to-Real Robot Parkour from RGB Images
Jenkins, Andrew
Advancements in quadrupedal robot locomotion have yielded impressive results, achieving dynamic maneuvers like climbing, ducking, and jumping. These successes are largely attributed to depth-based visual locomotion policies, known for their robust transferability between simulated and real-world environments (sim-to-real). However, depth information inherently lacks the semantic information present in RGB images. This thesis investigates the application of an RGB visual locomotion policy for navigating complex environments, specifically focusing on extreme parkour terrain. While RGB data offers a deeper understanding of the scene through semantic cues, it presents challenges in sim-to-real transfer due to large domain gaps. This work proposes a novel approach for training an RGB parkour policy and demonstrates that it achieves performance comparable to depth-based approaches in simulation. Furthermore, we successfully deploy and evaluate our RGB policy on real-world parkour obstacles, signifying its potential for practical applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Multi-Stage Machine Learning Pipelines for Extracting Structured Key-Value Pairs from Documents</title>
<link href="https://hdl.handle.net/1721.1/156971" rel="alternate"/>
<author>
<name>Pyo, Bryan</name>
</author>
<id>https://hdl.handle.net/1721.1/156971</id>
<updated>2024-09-25T03:28:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Leveraging Multi-Stage Machine Learning Pipelines for Extracting Structured Key-Value Pairs from Documents
Pyo, Bryan
In the rapidly growing field of information extraction, the ability to automatically and accurately extract structured data from sources has grown in importance across several industries. This need has arisen largely due to the vast quantity of data that is currently available and still being actively collected by these industries for various purposes. In a world where data has grown greatly in quantity and importance, the ability to parse this data into usable information has become an even more essential endeavor. Although information extraction has traditionally been a relatively labor-intensive task, with the rising sophistication and applicability of machine learning and computer-aided document analysis, automatic and more generalized methods of extracting relevant data from documents have become a major focus of research. This thesis discusses several pipelines that have been developed to extract data in the form of key-value pairs from specification sheets describing mechanical parts, achieving accuracies ranging from 80% to 100% depending on the pipeline, the target documents, and the key-value pairs.
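The pipelines themselves are not detailed in the abstract; as an illustrative sketch only, a single rule-based stage below shows the key-value output format such pipelines target (the specification text is invented).

```python
# Sketch: one simple stage of a key-value extraction pipeline.
# The thesis's multi-stage ML pipelines are not described in the
# abstract; this regex baseline only illustrates the task format.
import re

SPEC_TEXT = """
Material: 304 Stainless Steel
Thread Size: M6 x 1.0
Length: 25 mm
"""

def extract_pairs(text):
    """Parse 'Key: Value' lines into a dict of key-value pairs."""
    pairs = {}
    for line in text.splitlines():
        m = re.match(r"\s*([A-Za-z ]+):\s*(.+?)\s*$", line)
        if m:
            pairs[m.group(1).strip()] = m.group(2)
    return pairs

print(extract_pairs(SPEC_TEXT))
# {'Material': '304 Stainless Steel', 'Thread Size': 'M6 x 1.0', 'Length': '25 mm'}
```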
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Achieving Secure and Performant Databases with Minimal Resource Overhead</title>
<link href="https://hdl.handle.net/1721.1/156970" rel="alternate"/>
<author>
<name>Lim, Darren</name>
</author>
<id>https://hdl.handle.net/1721.1/156970</id>
<updated>2024-09-25T03:05:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Achieving Secure and Performant Databases with Minimal Resource Overhead
Lim, Darren
Modern cloud databases run in virtualized environments, which are typically implemented with Linux virtual machines (VMs). However, this poses two main risks. Typically, trusted database code runs alongside stored procedure code, which means that user-supplied stored procedure code can pose a security risk to the database and the data itself if the code contains vulnerabilities. Additionally, since Linux has such a large codebase, Linux-based VMs are subject to complex latency concerns and a large attack surface. Using a low-level shared memory protocol, it is possible to create a secure and performant communication channel between a database VM and the VMs of its stored procedures. This protects the database from vulnerabilities in the stored procedure code. Furthermore, by using unikernels instead of Linux VMs, the machines running the VMs can minimize the CPU/memory overhead per VM while also improving security for the DBMS. Overall, these changes allow cloud-hosted machines to utilize resources more efficiently.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Undercutting Attacks: A Study on Mining and Transaction Fee Behavior</title>
<link href="https://hdl.handle.net/1721.1/156969" rel="alternate"/>
<author>
<name>Bao, Claire</name>
</author>
<id>https://hdl.handle.net/1721.1/156969</id>
<updated>2024-09-25T03:40:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mitigating Undercutting Attacks: A Study on Mining and Transaction Fee Behavior
Bao, Claire
With block rewards dwindling in Bitcoin, a miner’s revenue will become increasingly reliant on transaction fees. However, these transaction fees are highly variable, which could result in undercutting attacks. Undercutting attacks occur when miners intentionally fork the blockchain in an attempt to steal transactions from an already-mined block. These attacks could cause repeated forking of the blockchain, thereby rendering Bitcoin unstable and less secure long-term. The original paper by Carlsten et al. proposing these attacks made assumptions about the future mining environment. For instance, they assumed that block size limits were large relative to the number of transactions and that all transactions had the same fee. &#13;
&#13;
This thesis aims to examine whether undercutting attacks would still be a threat under different mining dynamics. Specifically, we examine two important mempool characteristics that have changed since the original paper was written: the block size limit and the fee gradient. By investigating what happens as these characteristics change, our research not only generates a holistic view of whether undercutting attacks are a threat across a wide variety of possible mempool dynamics, but also provides guidelines on the range within which each of these measurable characteristics must fall for the blockchain to be secure and stable long-term. Our research found that the blockchain is safe from undercutting attacks when the block size limit is small relative to the number of transactions, but the blockchain becomes more susceptible to undercutting attacks if transactions with much higher fees enter the mempool infrequently, even for smaller block size limits. Moreover, we extend the logic of undercutting attacks from the original paper to show that, if the mempool dynamics are such that undercutting occurs long-term, the tangible impact on users is that very little progress will be made, as fully rational miners will end up including only one transaction per block, regardless of the total number of available transactions.
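As a hedged sketch of the underlying trade-off (the payoff model here is a stylized stand-in, not the thesis's analysis), an undercutter weighs re-mining the current tip block against extending the chain:

```python
# Sketch: a stylized undercutting decision in the spirit of
# Carlsten et al.; parameters and payoffs are illustrative only.
def best_strategy(tip_fees_claimed, mempool_fees, block_limit_frac):
    """Compare extending the chain vs. forking ("undercutting").

    tip_fees_claimed: fees already taken by the current tip block
    mempool_fees:     fees still waiting in the mempool
    block_limit_frac: fraction of waiting fees one block can hold
    """
    extend_payoff = mempool_fees * block_limit_frac
    # An undercutter re-mines the tip but leaves some of its fees on
    # the table so that other miners prefer to build on the fork.
    undercut_payoff = 0.5 * tip_fees_claimed
    return "undercut" if undercut_payoff > extend_payoff else "extend"

# When the tip grabbed most of a fee spike and the mempool is thin,
# undercutting becomes tempting:
print(best_strategy(tip_fees_claimed=10.0, mempool_fees=1.0,
                    block_limit_frac=1.0))   # -> "undercut"
print(best_strategy(tip_fees_claimed=2.0, mempool_fees=8.0,
                    block_limit_frac=0.25))  # -> "extend"
```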
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Long-range Genomics Benchmark Technology and More</title>
<link href="https://hdl.handle.net/1721.1/156968" rel="alternate"/>
<author>
<name>Polen, McKinley</name>
</author>
<id>https://hdl.handle.net/1721.1/156968</id>
<updated>2024-09-25T03:44:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Long-range Genomics Benchmark Technology and More
Polen, McKinley
The transformer architecture has emerged as a popular choice in various domains, owing to its ability to capture long-range dependencies and its parallel processing capabilities. In the context of genomics, where dependencies often span over 100,000 base pairs, the quadratic computational complexity of the attention mechanism, a core feature of the transformer architecture, poses a significant bottleneck. With the goal of creating a genomics foundation model (FM), this paper aims to address challenges associated with long-range dependencies in genomics. Our survey encompasses modifications to the attention mechanism, the creation of a genomics long-range benchmark (GLRB), and the evaluation of various transformer and non-transformer architectures. These efforts collectively lay the groundwork for the development of a robust genomics foundation model, opening new possibilities for genomics research and applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Modal Protein Function Prediction using a Joint Embedding Space from Two Graph Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/156967" rel="alternate"/>
<author>
<name>Tysinger, Emma P.</name>
</author>
<id>https://hdl.handle.net/1721.1/156967</id>
<updated>2024-09-25T03:55:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Multi-Modal Protein Function Prediction using a Joint Embedding Space from Two Graph Neural Networks
Tysinger, Emma P.
In bioinformatics and proteomics, determining protein functions experimentally is expensive and slow. There is a growing need for precise and quick computational prediction methods to fill the gap between sequence discovery and functional understanding. Over recent years there has been an influx of deep-learning protein folding algorithms used for predicting function by transfer learning. However, protein function is only partially captured by each of a large number of modalities, including structure; in isolation, each gives only a partial understanding of function. Uniting these modalities is an important step toward understanding function more holistically. We present a multi-modal framework using two graph neural networks to infer a joint embedding space that captures many properties of a protein, including structure, disease associations, drug interactions, protein interactions, biological processes, and more. We evaluate the embedding space on downstream prediction tasks including enzyme commission (EC) numbers and gene ontology (GO) terms. Experimental results on protein function prediction, as well as a qualitative visual analysis of the protein embedding space, show that our framework successfully captures both the structure and the biomedical context of proteins, and outperforms structure-only encoders.
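The abstract does not give the training objective; one common way to infer a joint embedding space from two encoders is a symmetric contrastive loss, sketched below with random tensors standing in for the two GNN outputs (the loss form and temperature are assumptions, not the thesis's stated method).

```python
# Sketch: aligning two encoders' outputs in a joint embedding space
# with a symmetric InfoNCE-style contrastive loss. The thesis's GNN
# architectures and training data are not specified in the abstract.
import torch
import torch.nn.functional as F

def contrastive_align(z_struct, z_bio, temperature=0.07):
    """z_struct, z_bio: (N, d) embeddings of the same N proteins
    from a structure encoder and a biomedical-graph encoder."""
    a = F.normalize(z_struct, dim=1)
    b = F.normalize(z_bio, dim=1)
    logits = a @ b.t() / temperature       # pairwise similarities
    targets = torch.arange(a.size(0))      # i-th protein matches i-th
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

z1, z2 = torch.randn(16, 64), torch.randn(16, 64)
print(contrastive_align(z1, z2).item())
```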
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inertial Navigation System Drift Reduction Using Scientific Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/156966" rel="alternate"/>
<author>
<name>McManus, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/156966</id>
<updated>2024-09-25T03:49:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inertial Navigation System Drift Reduction Using Scientific Machine Learning
McManus, Matthew
Inertial Navigation Systems (INS) are crucial for accurate navigation in GPS-denied environments, but they suffer from drift errors that accumulate over time. This thesis introduces Scientific Machine Learning (SciML) as an innovative approach to mitigate INS drift by integrating physical models with machine learning algorithms. The proposed SciML architecture leverages neural networks to learn complex error patterns and relationships from simulated IMU data, outperforming conventional techniques like Kalman filtering. Utilizing a simulation-focused approach with the Julia programming language and the High-Performance Inertial Navigation Development Repository (HIDR) library, the research generates realistic datasets encompassing diverse trajectories, sensor errors, and operational conditions. The SciML methodology incorporates data generation, INS mechanization, error modeling using neural networks, and a filtering framework that integrates the Extended Kalman Filter (EKF) with batch filtering techniques. Experimental results demonstrate the superior performance of the SciML-based INS in reducing position, velocity, and attitude errors compared to a baseline Kalman filter. This pioneering approach of fusing SciML with INS physical models holds promise for revolutionizing drift error mitigation and advancing the field of navigation systems, paving the way for more accurate, reliable, and resilient navigation in GPS-denied environments, with potential applications in aviation, robotics, and autonomous vehicles.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI Interfaces for Augmenting Episodic Memory</title>
<link href="https://hdl.handle.net/1721.1/156965" rel="alternate"/>
<author>
<name>Zulfikar, Wazeer Deen</name>
</author>
<id>https://hdl.handle.net/1721.1/156965</id>
<updated>2024-09-25T03:29:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">AI Interfaces for Augmenting Episodic Memory
Zulfikar, Wazeer Deen
Episodic memory, the memory of personal experiences, is a core component of human cognition. It functions within the neural substrate to store progress towards personal goals. Thus, it influences human behavior by enriching social interactions, forming a personal narrative, and facilitating personal growth. With the rise of challenges such as poor sleep, aging and dementia, and fragmented attention, people experience difficulties with episodic memory retrieval. These difficulties range from momentary lapses, such as forgetting previous interactions during conversations, to trouble recalling multiple events during reminiscence and decision-making. &#13;
&#13;
In this work, we explore artificially intelligent (AI) systems that augment episodic memory by enabling people to interact with their memories effectively. We design, develop, and evaluate two systems: (i) Memoro, a wearable audio-based memory assistant that presents concise suggestions in real-time while minimizing disruption to the user’s primary task, and (ii) Resonance, a web-based reflective memory assistant that offers actionable suggestions to help users savor their past, present, and future experiences for mental health benefits. By conducting an in-person user study for Memoro and a longitudinal online user study for Resonance, we investigate the effects of these systems on users, measure their technical efficacy, and gather feedback on user experiences.  Recent advances in artificial intelligence offer novel opportunities to enhance episodic memory. Therefore, exploring interfaces that seamlessly integrate with human behavior is crucial to ensure that AI-based systems enrich everyday experiences.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Embedding engineering intuition into computational design through interactive topology optimization</title>
<link href="https://hdl.handle.net/1721.1/156964" rel="alternate"/>
<author>
<name>Schiffer, Gillian</name>
</author>
<id>https://hdl.handle.net/1721.1/156964</id>
<updated>2024-09-25T03:01:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Embedding engineering intuition into computational design through interactive topology optimization
Schiffer, Gillian
With increasing pressure to generate low environmental impact designs, topology optimization presents a flexible, material-efficient solution. Topology optimization is a computational design method that produces lightweight, high-performing designs uniquely suited to a user’s objective function and constraints. However, there exist major obstacles to topology optimization’s widespread use, including increased complexity and computational time for advanced, nonlinear optimization formulations such as buckling or stress, lack of geometric control, and manufacturing difficulty. Interactive topology optimization algorithms overcome these obstacles by prompting users to directly modify the geometry of the design as the optimization runs. By embedding their engineering intuition into the design, users address concerns about complex failure modes, manufacturability, or alternative engineering performance metrics. This work presents two interactive approaches: HiTop 2.0, which empowers users to selectively enforce minimum and/or maximum solid and/or void feature size controls, and interactive infill topology optimization, which incorporates user-drawn infill patterns into regions of the optimized design. The interactive methods are demonstrated on numerical 2D examples, HiTop 2.0 is extended to a numerical 3D example, and interactive infill is experimentally validated with 2.5D additively manufactured test beams.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Electricity Distribution Network Tariffs for Beneficial Electrification</title>
<link href="https://hdl.handle.net/1721.1/156963" rel="alternate"/>
<author>
<name>Turk, Graham</name>
</author>
<id>https://hdl.handle.net/1721.1/156963</id>
<updated>2024-09-25T03:24:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Designing Electricity Distribution Network Tariffs for Beneficial Electrification
Turk, Graham
Decarbonizing the transportation and residential building sectors will require rapid electrification through the uptake of electric vehicles (EVs) and cold climate heat pumps (CCHPs), respectively. There is broad consensus that the flat volumetric electricity tariffs currently in place for residential customers in most of the US discourage electrification and do not reflect the underlying marginal costs of electricity delivery. Under flat volumetric tariffs, utilities are projecting sharp rises in distribution-level peak demand, which will necessitate network upgrades whose costs are recovered from all grid users. Alternative rate designs can help mitigate the need for these upgrades by shifting new demand away from peak periods. However, there is an emerging narrative that electricity tariff design is a zero-sum game: regulators can either protect vulnerable households or encourage electrification, but not both. In this thesis, we challenge that perception by asking whether well-designed distribution network tariffs can deliver a win-win in the long run, reducing operating costs for EVs and/or CCHPs and average network costs for households that cannot yet afford to electrify. We answer this question by running a series of bottom-up optimizations to simulate households’ responses to alternative network tariff designs in two distinct geographies, then assessing cost impacts on different household groups. We use open-source data on household electricity consumption and travel behavior. We find that beyond very low adoption levels, time-of-use per-kWh network tariffs, which several states have adopted as the default, perform poorly on all metrics and lead to large increases in local peak demand. Per-kW capacity tariffs (subscription and demand charges) are effective at mitigating EV-driven peaks, especially when paired with TOU energy tariffs. We recommend that regulators separate network charges from energy charges and introduce a per-kW subscription network tariff to collect a portion of the network revenue requirement. This approach will reduce the total cost of ownership of electrified devices while mitigating the network upgrades needed to maintain reliability. Our recommendations offer a path towards rapid electrification that benefits all grid users.
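As a toy version of the bottom-up household-response optimization described above (prices, energy need, and charger limit are invented for illustration), an EV charging schedule under a time-of-use tariff can be cast as a small linear program:

```python
# Sketch: a toy household response to a time-of-use (TOU) tariff.
# An EV schedules charging to minimize cost; numbers are invented.
import numpy as np
from scipy.optimize import linprog

prices = np.array([0.30, 0.30, 0.12, 0.12, 0.12, 0.25])  # $/kWh by period
need_kwh = 20.0   # energy the EV must take on overnight
max_kw = 7.2      # charger limit per 1-hour period

res = linprog(c=prices,
              A_eq=np.ones((1, len(prices))), b_eq=[need_kwh],
              bounds=[(0, max_kw)] * len(prices))
print(res.x)  # charging concentrates in the cheap off-peak periods
```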
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing Misalignment in Language Model Deployments through Context-Specific Evaluations</title>
<link href="https://hdl.handle.net/1721.1/156962" rel="alternate"/>
<author>
<name>Soni, Prajna</name>
</author>
<id>https://hdl.handle.net/1721.1/156962</id>
<updated>2024-09-25T03:38:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Addressing Misalignment in Language Model Deployments through Context-Specific Evaluations
Soni, Prajna
Language model-based applications are increasingly being deployed in the real world across a variety of contexts. While their rapid success has realized benefits for society, ensuring that they are trained to perform according to societal values and expectations is imperative given their potential to shape societal values, norms, and power dynamics. Evaluation plays a key role in language model (LM) alignment and policy-making. Presently, LM alignment and evaluations are based on developer- and researcher-prescribed attributes, with many benchmarks focusing on performance as dictated by generalized or primarily Western datasets that may not accurately reflect the deployment context. This results in an inevitable misalignment where a model trained on human preference proxies in context A is deployed in context B. &#13;
&#13;
Existing evaluation measures and alignment techniques are heavily biased towards the values and perspectives of model developers. In this thesis, I argue that in order to ensure that alignment efforts are specific to their deployment contexts, it is necessary and feasible to design open-ended and participatory methods to elicit a broader range of context-specific axes. I demonstrate the viability of this through CALMA, a non-prescriptive and grounded participatory process that successfully elicits distinct and context-specific alignment axes for evaluation datasets through in-context studies with two different communities. I further explore the ways in which broader participation can enable more effective adaptive AI regulation, given the crucial role of evaluations in addressing the technology-policy lag.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Garabit Viaduct: A Historical and Structural Study</title>
<link href="https://hdl.handle.net/1721.1/156961" rel="alternate"/>
<author>
<name>Harlin, Anne-Sixtine</name>
</author>
<id>https://hdl.handle.net/1721.1/156961</id>
<updated>2024-09-25T03:44:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Garabit Viaduct: A Historical and Structural Study
Harlin, Anne-Sixtine
This thesis investigates the Garabit Viaduct, providing a historical study and structural analysis of its truss arch. It aims to unravel the ingenuity behind the arch's elegant shape and design process. By examining historical plans and the memoirs of engineers such as Gustave Eiffel and Léon Boyer, this research uncovers the evolution of the viaduct's design and shape, revealing that the geometry of the arch was form-found using graphic statics. This study sheds light on the structural design hypotheses employed by Gustave Eiffel and Maurice Koechlin in sizing the members, providing insights into design practices of the late 19th century. Additionally, the study of the primary source documents left behind by the engineers suggests the method used for the arch's design may have influenced the shaping of the supporting piers, opening avenues for future research into the broader implications for Eiffel's later iconic tower.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Peripheral Nervous System Modulation with Wireless Cellular Sized Freestanding Injectable Devices</title>
<link href="https://hdl.handle.net/1721.1/156960" rel="alternate"/>
<author>
<name>Patel, Preet</name>
</author>
<id>https://hdl.handle.net/1721.1/156960</id>
<updated>2024-09-25T03:35:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Peripheral Nervous System Modulation with Wireless Cellular Sized Freestanding Injectable Devices
Patel, Preet
Designing novel neural interfaces is essential for various medical applications, scientific research, and human augmentation. One of the foundations of neural interfaces and bioelectronic medicine is the electrical stimulation of excitable cells, used to interface the body with electronics and treat a variety of diseases. Current technologies, while efficacious, are limited by their bulkiness, require highly invasive surgeries, are unable to target at single-cell resolution, and are prone to foreign body reactions. Optogenetics can address these issues, but it fundamentally requires genetic modification, which makes it difficult to implement in vivo and raises issues of muscle atrophy and toxicity, specifically in the Peripheral Nervous System (PNS).&#13;
&#13;
This work aims to advance bioelectronic medicine by developing efficient, wireless, cellular-sized electronic devices that can be administered in a drug-like fashion. These innovative, substrate-free nanoelectronic devices, termed injectable electronics, can be activated and controlled using near-infrared (NIR) light, enabling minimally invasive, targeted neuromodulation deep within the peripheral nervous system (PNS). By overcoming the limitations of current implantable devices, this groundbreaking approach has the potential to transform the way we diagnose and treat a wide range of neurological disorders.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a Prime Factorization of Proteins</title>
<link href="https://hdl.handle.net/1721.1/156959" rel="alternate"/>
<author>
<name>Radev, Simeon</name>
</author>
<id>https://hdl.handle.net/1721.1/156959</id>
<updated>2024-09-25T03:14:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Towards a Prime Factorization of Proteins
Radev, Simeon
A classical problem of machine learning is the interpretability of a model’s latent information processing. This is particularly the case in the richly complex field of protein analysis, where unique and novel insights into the structural organization of proteins can help illuminate their functional space and, in particular, lead toward a factorization of the structural space into a set of motif building blocks that completely span this universe. This thesis creates a new inference interface for performing such analysis by leveraging the sequential learning process of a neural autoencoder to construct a decomposition of proteins as a hierarchical sequence of embedded representation vectors. The further development of this work could lead to a greater understanding of the organizational complexity of natural phenomena, in particular as it relates to the uniquely complex relationship between protein structures and their function.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of Deep Learning Algorithms in Predicting Seismic Response of a Reinforced Concrete Structure</title>
<link href="https://hdl.handle.net/1721.1/156958" rel="alternate"/>
<author>
<name>Morgan, Jacob A.</name>
</author>
<id>https://hdl.handle.net/1721.1/156958</id>
<updated>2024-09-25T03:58:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evaluation of Deep Learning Algorithms in Predicting Seismic Response of a Reinforced Concrete Structure
Morgan, Jacob A.
This thesis presents an evaluation of the performance of three well-established deep learning algorithms in predicting the response of a six-story instrumented reinforced concrete hotel in California to seismic excitation. Given the increasing availability of strong-motion data and expanded usage of deep learning in structural health monitoring, this thesis seeks to evaluate the predictions of purely data-driven and physics-informed architectures using processed instrumentation data in order to more accurately predict structural response for use in structural health monitoring and performance-based design applications.&#13;
&#13;
By employing a variety of results metrics previously used in the literature, including correlation coefficients, normalized error distributions, and peak errors, this thesis examines different components of the models’ capabilities, investigates the patterns in the data learned by the computational mechanisms of each architecture, and explores the feasibility of a generalized approach for further application in structural response prediction. &#13;
&#13;
Findings from the work show the data-driven Long Short-Term Memory (LSTM) network performing the most accurately, though not consistently outperforming the other algorithms. Some trends in the data could be evidence that different architectures are better equipped to predict different mode shapes and frequency contents. For example, the data-driven and physics-guided LSTM models predicted the third floor’s response more accurately than the roof’s, whereas the physics-guided convolutional neural network (CNN) showed the opposite pattern, highlighting a contrast between the two base architectures. This thesis also contributes to this growing field by documenting the experimental setup in detail to allow for the replication of results and to facilitate future application by structural engineers.&#13;
&#13;
As structural engineering research in deep learning continues to gain popularity, this thesis provides an experimental basis and case study that can be followed and replicated to motivate future experimentation, and it offers compelling directions for future work to further the use of deep learning in structural response prediction and structural health monitoring as a whole.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Psychometric Tool to Measure the Emotional Impact of Visual Content</title>
<link href="https://hdl.handle.net/1721.1/156957" rel="alternate"/>
<author>
<name>Cucu, Theodor</name>
</author>
<id>https://hdl.handle.net/1721.1/156957</id>
<updated>2024-09-25T03:52:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Developing a Psychometric Tool to Measure the Emotional Impact of Visual Content
Cucu, Theodor
This thesis investigates the human valence response to sequences of visual images. We first use crowd-sourcing and a novel nine-point psychometric scale to estimate human valence responses to individual images from the OASIS image set with high reliability (split-half Spearman rank-correlation ρ = 0.95). In a separate group of human participants, we then estimate valence responses following short, random sequences of those images (of length ≤ 10). Our key finding is that these sequence-contingent valence responses can be closely predicted by a simple linear combination of the estimated human valence responses to individual images (held-out ρ = 0.94). The combination weights are largest for the final image in the sequence; intuitively, this means the final image by itself can make predictions with high goodness-of-fit (ρ = 0.87). In summary, this research shows new evidence for a simple relationship between valence responses to individual images and valence responses to image sequences, with implications for future studies and practical applications in psychological assessment and beyond.
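The reported model has a simple closed form: predicted sequence valence is a weighted sum of per-image valences, with the final image weighted most heavily. A synthetic-data sketch of fitting such weights by least squares (the weight values and noise level below are invented, not the study's estimates):

```python
# Sketch: the reported finding in miniature. Sequence-level valence is
# modeled as a linear combination of per-image valence ratings; the
# data here are synthetic stand-ins with recency-weighted ground truth.
import numpy as np

rng = np.random.default_rng(0)
L = 5                                     # sequence length (<= 10 in the study)
true_w = np.array([0.05, 0.08, 0.12, 0.2, 0.55])  # final image dominates

X = rng.uniform(1, 9, size=(200, L))      # per-image valences (9-point scale)
y = X @ true_w + rng.normal(0, 0.2, 200)  # observed sequence valence

w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(w_hat, 2))  # recovers the weights; last weight is largest
```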
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-omic Analysis of Neurodegeneration in Alzheimer’s Disease and Related Dementias</title>
<link href="https://hdl.handle.net/1721.1/156956" rel="alternate"/>
<author>
<name>Howe, Stephanie Pui-kay</name>
</author>
<id>https://hdl.handle.net/1721.1/156956</id>
<updated>2024-09-25T03:32:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Multi-omic Analysis of Neurodegeneration in Alzheimer’s Disease and Related Dementias
Howe, Stephanie Pui-kay
The advent of single-cell sequencing has revolutionized the granularity at which we can understand genetics and underlying cell biology. This enables us to analyze both the transcriptome and epigenome of various tissues, offering new insights into the molecular mechanisms that underlie diseases such as neurodegeneration. This study focuses on neurodegenerative disease at single-cell resolution across the following proteinopathies: Alzheimer’s Disease (AD), Frontotemporal Dementia (FTD), Lewy Body Dementia (LBD), and Vascular Contributions to Cognitive Impairment and Dementia (VCID). We utilize both single-cell RNA sequencing (scRNA-seq) and single-cell ATAC sequencing (scATAC-seq) to perform a joint analysis of these conditions, examining both modalities holistically. Our research characterizes a multi-omic data set comprising 2,820,565 cells from 491 samples of prefrontal cortex across the aforementioned conditions, with all samples subjected to scRNA-seq and 63 to scATAC-seq. Leveraging this data, we conduct a multi-omic analysis of Alzheimer’s Disease and Related Dementias (ADRD) by exploring differences in the transcriptome and epigenomic erosion profile across conditions, shedding light on the intricacies of cortical aging. Ultimately, we identify potential molecular and genetic markers that drive the heterogeneous relationship between pathology, epigenetic erosion, and cognition in individuals affected by these conditions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Cortex-Hippocampus Interactions During Language Processing</title>
<link href="https://hdl.handle.net/1721.1/156955" rel="alternate"/>
<author>
<name>Lee, Jiachen Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/156955</id>
<updated>2024-09-25T03:29:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Characterizing Cortex-Hippocampus Interactions During Language Processing
Lee, Jiachen Elizabeth
The role of medial temporal lobe structures, including the hippocampus, in language processing remains largely unknown. In patients with hippocampal damage, language is left largely intact [Vargha-Khadem et al., 1997], suggesting that the hippocampus is likely not necessary for language processing. Recent evidence, however, has shown that the hippocampus may serve functions outside its traditional roles in episodic memory and spatial navigation, and may generally aid in the encoding of relationships across time and space [Cohen and Eichenbaum, 1993]. Hence, the hippocampus may be involved in processes that are also implicated in language processing. Indeed, some patients with hippocampal damage show deficits in resolving ambiguous discourse referents [Rubin et al., 2011] [Duff et al., 2011] and in reconstructing narratives [Race et al., 2011a], and display limited linguistic flexibility when engaging in "verbal play" [Duff et al., 2009]. Here we leverage a large-scale fMRI dataset (n=790) and identify a region that responds to meaningful language in the anterior portion of the left hippocampus. We then characterize its response profile and show that it is responsive to semantically meaningful material but is not engaged during cognitively demanding spatial working memory and arithmetic tasks. Next, we examine the relationship between hippocampal and cortical language processing, starting with the neural correlates of word- and sentence-memorability in both the hippocampal and cortical language areas. Lastly, we leverage an encoding-model-guided procedure to search through a large set of sentences to identify those that are predicted to maximally differentiate responses in the cortical and hippocampal language areas. We find that cortical language areas are largely driven by surprisal, while hippocampal language areas display preferences towards more imageable and concrete sentences.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accurate and Fast Approximate Graph Mining at Scale</title>
<link href="https://hdl.handle.net/1721.1/156954" rel="alternate"/>
<author>
<name>Arpaci-Dusseau, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/156954</id>
<updated>2024-09-25T03:53:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Accurate and Fast Approximate Graph Mining at Scale
Arpaci-Dusseau, Anna
Approximate graph pattern mining (A-GPM) is an important data analysis tool for numerous graph-based applications. There exist sampling-based A-GPM systems that provide automation and generalization over a wide variety of use cases. Despite improved usability, two major obstacles prevent existing A-GPM systems from being adopted in practice. First, the termination mechanism that decides when to terminate sampling lacks theoretical backing on confidence, and in practice is significantly unstable and thus slow. Second, they suffer particularly poor performance when dealing with the “needle-in-the-hay” cases, because a huge number of samples are required to converge, given the extremely low hit rate of their lazy-pruning strategy and fixed sampling schemes. We build ScaleGPM, an accurate and fast A-GPM system that removes the two obstacles. First, we propose a novel on-the-fly convergence detection mechanism to achieve stable termination and provide a theoretical guarantee on the confidence, with negligible online overhead. Second, we propose two techniques to deal with the “needle-in-the-hay” problem: eager-verify and hybrid sampling. Our eager-verify method drastically improves the sampling hit rate by pruning unpromising candidates as early as possible. Hybrid sampling further improves performance by automatically choosing the better scheme between fine-grained and coarse-grained sampling schemes. Experiments show that our online convergence detection mechanism can precisely detect convergence, and results in stable and rapid termination with theoretically guaranteed confidence. We also show the effectiveness of eager-verify in improving the hit rate, and of the scheme-selection mechanism in correctly choosing the better scheme for various cases. Overall, ScaleGPM achieves a geometric-mean speedup of 565× (up to 610,169×) over the state-of-the-art A-GPM system, Arya. ScaleGPM is also four orders of magnitude faster than the state-of-the-art exact GPM system, GraphZero. In particular, ScaleGPM handles billion-scale graphs in seconds, where existing systems either run out of memory or fail to complete in hours.
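ScaleGPM's exact convergence test is not given in the abstract; as a generic illustration of on-the-fly convergence detection, the sketch below stops sampling once a CLT-based confidence interval is tight relative to the running estimate (the interval construction and thresholds are assumptions, not the system's mechanism).

```python
# Sketch: sampling with an on-the-fly stopping rule. Terminate once
# the 95% CI half-width falls below a target relative error; a generic
# construction, not ScaleGPM's actual convergence detector.
import math
import random

def estimate(sample_once, eps=0.01, z=1.96, min_n=1000):
    """Draw samples until z * stderr <= eps * running mean."""
    total, total_sq, n = 0.0, 0.0, 0
    while True:
        x = sample_once()
        total, total_sq, n = total + x, total_sq + x * x, n + 1
        if n >= min_n:
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            half_width = z * math.sqrt(var / n)
            if mean > 0 and half_width <= eps * mean:
                return mean, n

rng = random.Random(1)
mean, n = estimate(lambda: rng.expovariate(1.0))  # true mean = 1.0
print(round(mean, 3), "after", n, "samples")
```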
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation on ImageNet Remaining Errors with TRAK</title>
<link href="https://hdl.handle.net/1721.1/156953" rel="alternate"/>
<author>
<name>Ma, Lingyi</name>
</author>
<id>https://hdl.handle.net/1721.1/156953</id>
<updated>2024-09-25T03:12:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Investigation on ImageNet Remaining Errors with TRAK
Ma, Lingyi
The ImageNet dataset is an important benchmark and test bed for computer vision models. Two of its most important characteristics are its size and difficulty, which motivated the breakthrough deep learning model AlexNet a decade ago. As research progresses and computational power grows, the best models nowadays can achieve accuracy as high as 90% on ImageNet. At such high accuracy, model predictions are usually highly precise, and the causes of the remaining long tail of errors are unknown. Many studies reassessing ImageNet have found a nontrivial amount of label error and noise, and effort has been made to fix this label noise in the test set, mainly through manual review. However, few studies have delved into fixing labels in the training set, largely due to its scale. This thesis aims to understand the remaining errors that models are still making on the ImageNet dataset and to investigate labeling problems in the ImageNet training set, utilizing TRAK, a recently developed efficient data attribution method, to help identify problematic images among the 1.4 million images in the ImageNet training set.
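TRAK itself scores training data via randomly projected per-example gradients; the simplified sketch below (plain gradient dot products, with a hypothetical toy model and data) only conveys the general idea of gradient-based data attribution used to surface problematic training images, and is not TRAK's API or algorithm.

```python
# Sketch: the general idea behind gradient-based data attribution.
# A simplified stand-in for TRAK: score each training example by the
# alignment of its loss gradient with a test example's loss gradient.
import torch

def attribution_scores(model, loss_fn, train_batch, test_example):
    """High scores flag training examples that drive the test prediction;
    extremes are candidates for label review."""
    def grad_vec(x, y):
        model.zero_grad()
        loss_fn(model(x), y).backward()
        return torch.cat([p.grad.flatten() for p in model.parameters()])

    g_test = grad_vec(*test_example)
    return [torch.dot(grad_vec(x, y), g_test).item()
            for x, y in train_batch]

model = torch.nn.Linear(4, 2)                     # toy stand-in model
loss_fn = torch.nn.CrossEntropyLoss()
train = [(torch.randn(1, 4), torch.tensor([0])) for _ in range(5)]
test = (torch.randn(1, 4), torch.tensor([1]))
print(attribution_scores(model, loss_fn, train, test))
```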
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stakeholder views on the uptake of sustainable and responsible nickel mining and processing supply chains for electric vehicles in Indonesia</title>
<link href="https://hdl.handle.net/1721.1/156950" rel="alternate"/>
<author>
<name>Malik, Rameen Hayat</name>
</author>
<id>https://hdl.handle.net/1721.1/156950</id>
<updated>2024-09-25T04:00:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Stakeholder views on the uptake of sustainable and responsible nickel mining and processing supply chains for electric vehicles in Indonesia
Malik, Rameen Hayat
This thesis explores the evolution and contemporary challenges of Indonesia’s nickel industry within the context of the electric vehicle (EV) supply chain. It critically examines the sustainability and ethical considerations as Indonesia positions itself as a key player in the global transition to clean energy. The study provides a comprehensive analysis of Indonesia’s strategic moves to enhance the value derived from its extensive nickel reserves, underscored by the implementation of policies such as the raw export ban aimed at fostering local processing industries. Central to this examination is the dual role of nickel as both a critical and contentious resource, reflecting on its classification as a critical mineral by multiple countries due to its indispensability in EV battery production and the substantial environmental and social challenges associated with its extraction and processing. Employing a policy mobility framework, this thesis navigates the trans-local dynamics of policy making in Indonesia, juxtaposing these with global, economy-wide pursuits of transportation decarbonization via the EV industry. Through a mixed-methods approach, combining literature review, stakeholder interviews, and field observations, the research unveils the multifaceted perspectives of various stakeholders including industrial entities, government bodies, and civil society organizations. The findings highlight the significant influence of international investment, mainly Chinese investment, in shaping Indonesia’s nickel processing capabilities, while also noting the ethical dilemmas and environmental hazards posed by the industry’s expansion. Indonesia’s strategy to escalate value addition locally is critically assessed, revealing both progress and persistent ethical and environmental challenges. Strategies are proposed to leverage the myriad resources, influence, and authority of actors along the EV supply chain to spur the growth of sustainable and responsible supply of Indonesian nickel. The thesis contributes to the discourse on sustainable mineral supply chains by proposing policy recommendations aimed at reconciling economic ambitions with environmental and social imperatives. These recommendations advocate for enhanced governance structures, transparent supply chains, and international collaboration to achieve ethical sourcing practices. The research underscores the need for a balanced approach that not only caters to the economic aspirations of resource-rich nations but also adheres to global sustainability standards.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling the Impact of the Inflation Reduction Act and Hydrogen Storage in Salt Caverns in the Mid-Atlantic United States</title>
<link href="https://hdl.handle.net/1721.1/156949" rel="alternate"/>
<author>
<name>Armstrong, Les Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/156949</id>
<updated>2024-09-25T03:11:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling the Impact of the Inflation Reduction Act and Hydrogen Storage in Salt Caverns in the Mid-Atlantic United States
Armstrong, Les Gabriel
Hydrogen is widely understood to be critical for decarbonizing hard-to-abate sectors like heavy industry and long-distance transportation, as well as for balancing a power grid dominated by variable renewable energy.&#13;
In this thesis, we first propose a methodology for evaluating the potential for hydrogen storage in geological salt resources. Our results show that the Michigan and Appalachian Salina basins are promising locations for hydrogen storage in salt caverns. After applying a coarse techno-economic filter, the storage potential of the remaining high-value caverns is 9.7 × 10⁸ metric tons of H2 or 32.4 PWh in Michigan and 1.6 × 10⁷ metric tons of H2 or 0.54 PWh in the Appalachian region.&#13;
We then perform a techno-economic analysis on these salt cavern resources, which we utilize as hydrogen storage options in Macro, an open-source energy system optimization model that couples the power, hydrogen, and carbon sectors. We then analyze the impact of the Inflation Reduction Act and the presence of salt caverns on the United States Mid-Atlantic region in the year 2035. We find that salt caverns do not have a significant impact on the overall coupled energy system dynamics unless we force a 100% decarbonization constraint. In addition, we uncovered a perverse behavior induced by the IRA’s hydrogen production tax credit within the model. Further work is required to understand whether this behavior is likely in practice or can be attributed to difficulties modeling real-world interactions and internal frictions between actors in the energy sector.
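The ton-to-PWh figures above are consistent with hydrogen's lower heating value of roughly 33.33 kWh per kg (an assumption inferred from the quoted numbers, not stated in the abstract); a quick check:

```python
# Consistency check of the quoted storage figures, assuming hydrogen's
# lower heating value of ~33.33 kWh/kg (120 MJ/kg).
LHV_KWH_PER_KG = 33.33

def tons_h2_to_pwh(metric_tons):
    kwh = metric_tons * 1000 * LHV_KWH_PER_KG  # tons -> kg -> kWh
    return kwh / 1e12                          # kWh -> PWh

print(tons_h2_to_pwh(9.7e8))  # ~32.3 PWh (Michigan; quoted 32.4)
print(tons_h2_to_pwh(1.6e7))  # ~0.53 PWh (Appalachian; quoted 0.54)
```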
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effect of turbine motion on floating offshore wind turbine aerodynamics</title>
<link href="https://hdl.handle.net/1721.1/156948" rel="alternate"/>
<author>
<name>Tignol, Bo Junior</name>
</author>
<id>https://hdl.handle.net/1721.1/156948</id>
<updated>2024-09-25T03:30:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Effect of turbine motion on floating off shore wind turbine aerodynamics
Tignol, Bo Junior
The quest to meet renewable energy targets and anticipate future energy consumption growth has driven the continuous development of wind energy system design and its push for larger and more efficient wind turbines, especially in offshore environments. Floating Offshore Wind Turbines (FOWTs) are a promising alternative for capturing high wind energy potential in more difficult offshore environments that pose challenges to traditional bottom-fixed turbines. Yet, the understanding of FOWT behaviour under dynamic floating motion-induced translational and rotational degrees of freedom remains a significant and important challenge. Indeed, there is considerable inconsistency with regard to the interpretation of FOWT behaviour under floating motion. This thesis aims to evaluate the influence of surge and pitch motions on the aerodynamic behaviour of FOWTs through the interpretation of several modeling approaches and their differences. Various surge and pitch amplitude and frequency ranges are considered, and two large eddy simulation (LES) approaches, along with a simplified analytical model, are assessed with regard to their predictions of the axial induction, induced velocity, power production, and wake velocities. It was found that there is generally close agreement between surging-inflow and surging-actuator-disk LES simulations, with a difference in time-averaged power production no larger than 1.8% for any of the investigated cases, confirming the hypothesized similarity between these two methods of simulating turbines in kinematic motion. Furthermore, it was found that, although the simplified analytical model performed well at low-frequency surge motions, it exhibited increasing underprediction of power production with increasing frequency. As for the pitch cases, the model exhibited low error compared to LES simulations across the amplitudes investigated. Moreover, unlike the variability in the surging data, the pitching LES exhibited less variation across frequencies, which suggests that the analytical model maintains better predictive capability across a diverse range of pitching motions. Looking forward, the results of this study suggest the need for continued in-depth evaluation of additional LES parameters such as the tip-speed ratio and thrust coefficient, along with validation through the development of an analytical model that can capture the observed frequency dependence. Finally, future work should also home in on the inclusion of LES at different freestream wind and surge and pitch combinations to explore the potential formation of complex wake states, as well as the investigation of in-sync and out-of-sync joint pitch-and-surge cases to explore the occurrence of any nonlinear aerodynamic interactions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PCBleed: Fuzzing for CPU Bugs Through Use of Performance Counters</title>
<link href="https://hdl.handle.net/1721.1/156944" rel="alternate"/>
<author>
<name>Muradyan, Natalie</name>
</author>
<id>https://hdl.handle.net/1721.1/156944</id>
<updated>2024-09-25T03:37:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">PCBleed: Fuzzing for CPU Bugs Through Use of Performance Counters
Muradyan, Natalie
In recent years, the increasing complexity of hardware designs has given rise to a growing array of vulnerabilities and security threats, as exemplified by instances such as Spectre, Microarchitectural Data Sampling, and Zenbleed. The inherent permanence of hardware vulnerabilities poses a significant threat, making early identification crucial for preventing security compromises once a device is manufactured. However, identifying hardware vulnerabilities is challenging due to the large and complex design of current CPUs, resulting in a substantial search space and numerous unknowns. This thesis proposes leveraging software fuzzing methods for hardware testing, focusing on the automated generation of instruction sequences that reveal hardware vulnerabilities. Unlike software fuzzing, hardware fuzzing faces challenges such as a lack of visibility into the microarchitectural processor states and difficulty in directing the search for test case generation. To address these challenges, this research draws inspiration from software fuzzers that use insights into the internal workings of the software for effective test case generation. We propose PCBleed, a coverage-guided mutational hardware fuzzer that enhances CPU fuzzing by using hardware performance counters as insight into the CPU’s behavior to improve test case generation. Since performance counters measure architectural events relevant to CPU performance, they provide insights that we use to estimate coverage, marking instruction sequences as novel. This approach aims to maximize the functionality exercised during hardware fuzzing, ultimately identifying interesting, bug-triggering behavior. Our methodology is distinctive, utilizing performance counters for hardware fuzzing enhancement, and aligns with recent research findings that highlight the versatility of performance counters in debugging, dynamic software profiling, CPU power modeling, malware detection, and cache side-channel attack detection. By incorporating performance counters into the hardware testing paradigm, this research seeks to contribute to the proactive fortification of hardware security through insightful analyses.
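As a hedged sketch of the coverage-guided mutational loop described above, with the performance-counter vector serving as the novelty signal (read_counters, run_on_cpu, and the bucketizing scheme below are hypothetical stand-ins for the real hardware harness, not PCBleed's implementation):

```python
# Sketch: a coverage-guided mutational fuzz loop where novelty is
# estimated from a bucketized performance-counter profile.
import random

def fuzz(seed_programs, mutate, run_on_cpu, read_counters, rounds=10000):
    corpus = list(seed_programs)
    seen = set()
    bugs = []
    for _ in range(rounds):
        candidate = mutate(random.choice(corpus))
        result = run_on_cpu(candidate)                 # execute instr. sequence
        signature = tuple(v // 64 for v in read_counters())  # bucketize counters
        if signature not in seen:                      # counter profile is novel
            seen.add(signature)
            corpus.append(candidate)                   # keep for further mutation
        if result.anomalous:                           # e.g., unexpected arch. state
            bugs.append(candidate)
    return bugs

# Toy stand-ins so the sketch executes:
class Result:
    anomalous = False

bugs = fuzz(["NOP"], lambda p: p + ";NOP", lambda p: Result(),
            lambda: [random.randint(0, 1024) for _ in range(4)], rounds=100)
print(len(bugs))  # no bugs in the toy harness
```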
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Image Recognition Difficulty in Artificial and Biological Visual Processing</title>
<link href="https://hdl.handle.net/1721.1/156943" rel="alternate"/>
<author>
<name>Cummings, Jesse E.</name>
</author>
<id>https://hdl.handle.net/1721.1/156943</id>
<updated>2024-09-25T03:30:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Characterizing Image Recognition Difficulty in Artificial and Biological Visual Processing
Cummings, Jesse E.
In recent years, computational models trained to do object recognition have become increasingly capable. Models have demonstrated significant improvements and have achieved saturated performance on many standard image classification benchmarks, sparking discussion of whether these models have achieved parity with human object recognition ability and whether we can consider this problem solved. However, these models continue to fail in real-world applications, and in un-human-like ways, creating a disparity between the performance that benchmarks report and the performance that users experience. In this thesis, we investigate why standard datasets are misaligned with real-world performance by exploring image recognition difficulty as defined by human psychophysics. Using behavioral experiments with humans, we identify images that humans struggle to recognize and investigate the prevalence of these images in datasets and their effect on model performance. To shed light on how humans are able to recognize these images, we conduct preliminary analysis with neuroimaging to take the first steps toward identifying the neural signature of image difficulty.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-Driven Classification of Pharmaceutical and Biotechnology Companies</title>
<link href="https://hdl.handle.net/1721.1/156942" rel="alternate"/>
<author>
<name>Xu, Angelina</name>
</author>
<id>https://hdl.handle.net/1721.1/156942</id>
<updated>2024-09-25T03:33:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Data-Driven Classification of Pharmaceutical and Biotechnology Companies
Xu, Angelina
This study presents a novel approach for classifying biopharmaceutical companies from 2000 to 2023. We use fundamental financial data, 10-K filings, and company drug development data to develop this new classification scheme. Return correlations are used to measure the similarity of companies within a cluster, and our analysis demonstrates that this data-driven scheme improves upon industry standards. Additionally, we evaluate the risk-return characteristics of the clusters developed from this classification scheme when considering investment opportunities in these industries.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing an eCommerce Pricing Model Using Rank Centrality</title>
<link href="https://hdl.handle.net/1721.1/156941" rel="alternate"/>
<author>
<name>Tong, Kevin C.</name>
</author>
<id>https://hdl.handle.net/1721.1/156941</id>
<updated>2024-09-25T03:17:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Developing an eCommerce Pricing Model Using Rank Centrality
Tong, Kevin C.
In recent years, eCommerce websites have become a popular alternative to traditional marketplaces, offering customers the convenience of ordering products from home and having them shipped. As a result, competition between sellers on these websites has intensified, making a pricing strategy necessary to perform well in this marketplace.&#13;
&#13;
This paper attempts to model eCommerce competition between different sellers using the principle of Rank Centrality, and uses neural networks to accurately predict the winning seller on eCommerce websites, such as Amazon, based on factors including pricing, seller rating, and shipping guarantees for each seller. Using this prediction, a pricing strategy is formed to maximize sales volume and profits on these sites. This strategy is then implemented and evaluated as part of a 6-month internship with Spero Goods.
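Rank Centrality itself has a compact form: pairwise win fractions define a Markov chain whose stationary distribution scores the competitors. A minimal sketch with invented head-to-head data between three sellers (the neural-network prediction stage of the thesis is not shown):

```python
# Sketch: Rank Centrality on pairwise "win" data between sellers.
# Transitions flow from losers toward winners; the chain's stationary
# distribution ranks the sellers. Data are invented for illustration.
import numpy as np

def rank_centrality(wins):
    """wins[i, j] = number of times seller j beat seller i."""
    n = wins.shape[0]
    total = wins + wins.T
    frac = np.where(total > 0, wins / np.maximum(total, 1), 0.0)
    P = frac / n                           # d_max = n keeps rows substochastic
    np.fill_diagonal(P, 0.0)
    P += np.diag(1.0 - P.sum(axis=1))      # self-loops complete each row
    vals, vecs = np.linalg.eig(P.T)        # stationary distribution of chain
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

wins = np.array([[0, 8, 9],    # seller 0 loses most head-to-heads
                 [2, 0, 6],
                 [1, 4, 0]])   # seller 2 wins most head-to-heads
print(rank_centrality(wins))   # scores increase from seller 0 to seller 2
```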
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Health Centered Drug Policy: An Analysis of Past and Developing Drug Policy</title>
<link href="https://hdl.handle.net/1721.1/156940" rel="alternate"/>
<author>
<name>Lewis, Benjamin B.</name>
</author>
<id>https://hdl.handle.net/1721.1/156940</id>
<updated>2024-09-25T03:26:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Towards Health Centered Drug Policy: An Analysis of Past and Developing Drug Policy
Lewis, Benjamin B.
Drug criminalization has disproportionately impacted communities of color and has insufficiently addressed substance use disorder and its associated risk of fatal overdose. Decriminalization has the potential to restore justice to communities decimated by traditional U.S. drug policy and could shift public focus towards medical approaches to treating addiction. However, inertia in drug policy persists, influenced by America's popular political beliefs about illicit substances. A long-standing narrative in the United States views marijuana as a "gateway drug" that introduces users to harder substances, which then have adverse effects on their health and livelihood. As a result, many argue that policies which decriminalize marijuana are exacerbating the problem of drug addiction. Seemingly in line with this argument, overdose-related deaths, largely driven by increases in opioid consumption, have soared in recent years, and at the same time an increasing number of states have decriminalized marijuana. Little work, however, has examined the extent to which marijuana legalization has caused an increase in overdose deaths. Here, we address this question. To examine the causal effect of marijuana legalization on overdose deaths, we combine state-year level data on marijuana policy and overdose deaths with state-of-the-art techniques from the field of causal inference, namely Two-Way Fixed Effect Difference-in-Differences analysis with Synthetic Control. We include data from all states that enacted one of five marijuana legalization policies between 2010 and 2020. We estimate the causal effect of each policy separately for each state, and then use meta-analysis to calculate the overall effect of each policy intervention. We find that the passage of medical marijuana legalization laws, the opening of recreational dispensaries, and the implementation of medical marijuana patient ID programs had no significant effect on annual state overdose death rates. The opening of medical marijuana dispensaries and the passage of recreational marijuana legalization laws also had no significant overall effect on overdose death rates, but the effect of these policies varied significantly across states, with significant increases in some states and significant decreases in others. Overall, these findings contradict the popular claim that marijuana decriminalization leads to increased use of more dangerous drugs (and thus overdose deaths) in most cases, and more generally question the characterization of marijuana as a gateway drug.
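As a sketch of the estimation strategy (the variable names and data frame are hypothetical, and the synthetic control step described above is omitted), a two-way fixed effects difference-in-differences regression can be run as:

    import statsmodels.formula.api as smf

    # panel: one row per state-year with columns (hypothetical):
    #   overdose_rate - annual overdose deaths per 100,000 residents
    #   treated       - 1 if the policy is in force in that state-year, else 0
    #   state, year   - identifiers absorbed as fixed effects
    model = smf.ols("overdose_rate ~ treated + C(state) + C(year)", data=panel).fit(
        cov_type="cluster", cov_kwds={"groups": panel["state"]})
    print(model.params["treated"])  # the difference-in-differences estimate

Clustering standard errors by state is the usual choice for state-year panels of this kind.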
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Securing the Future: Critical Materials Policies for the US Energy Transition</title>
<link href="https://hdl.handle.net/1721.1/156939" rel="alternate"/>
<author>
<name>Concordel, Adrien</name>
</author>
<id>https://hdl.handle.net/1721.1/156939</id>
<updated>2024-09-25T03:41:44Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Securing the Future: Critical Materials Policies for the US Energy Transition
Concordel, Adrien
As the U.S. advances industrial policies, such as the Inflation Reduction Act (IRA), to support its energy transition and develop domestic green-tech supply chains, it overlooks the crucial need for a sustainable and secure supply of critical materials. This oversight threatens the success of the nation's sustainable transition because of limited resilience and dependence on geopolitically, environmentally, and socially sensitive international sourcing, particularly from China. This thesis examines the key considerations for the U.S. to secure a sustainable supply of these materials, hypothesizing that a comprehensive policy framework integrating sustainable practices, domestic production incentives, and international cooperation can effectively reduce risks and externalities. Methods include empirical and case studies that highlight specific challenges such as permitting delays and dependency on foreign minerals, alongside economic models analyzing the impacts of these dependencies and market dynamics. Industry roundtables provide insights into prospective innovations and recent trends. Findings indicate significant market-outlook uncertainty, critical dependence on imports, and significant limitations and inertia in the development of new domestic resources. The thesis proposes a policy framework aimed at addressing these deficiencies to support the U.S. in leading the global transition to sustainable technologies. Recommendations focus on enabling increased domestic production through better regulation and innovation, adopting sustainable practices, and diversifying supply chains to enhance resilience. This framework is crucial for policymakers, industry stakeholders, and academics involved in shaping a resilient U.S. energy strategy.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structure Function Relation of Porous 2D Material via SGCMC Simulation and Statistical Models</title>
<link href="https://hdl.handle.net/1721.1/156938" rel="alternate"/>
<author>
<name>Wanichkul, Athikom</name>
</author>
<id>https://hdl.handle.net/1721.1/156938</id>
<updated>2024-09-25T03:02:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Structure Function Relation of Porous 2D Material via SGCMC Simulation and Statistical Models
Wanichkul, Athikom
To improve design for structural resilience and reduced environmental impact, we need to make the structure-function relation of concrete more accurate, accessible, and cost-effective. First, we formulate and implement the Semi-Grand Canonical Monte Carlo (SGCMC) simulation for fracture mechanics, a stochastic method capable of capturing both the initiation and the propagation of fractures in a medium. We then optimize the performance of our SGCMC simulation to reduce its time complexity from O(n²·³⁸) to O(n¹·²⁴) and its space complexity from O(n²) to O(n). The key step in this optimization is exploiting the sparsity of the stiffness matrix. We also deploy our code to run multiple simulations concurrently on a supercomputing infrastructure to achieve scalability. Then, we pursue an even more accessible and cost-effective structure-function relation by applying statistical modeling to predict the strength of a two-dimensional porous material without running the simulation. We generate samples by randomly placing circular pores with radii drawn from a log-normal distribution until we reach the target porosity, and run our SGCMC simulations on the generated samples to create a data set for training our statistical models. We define several parameters, including the two-point correlation function, the multi-scale disorder index, the distribution of pore radii as recovered by the Circle Hough Transform (CHT), and the area moments of the pores, to parameterize the porous geometry of the samples beyond the porosity, a well-known and very important parameter. We find our best model to be a Gradient Boosting Decision Trees (GBDT) regression model, whose out-of-sample R² is 0.904, compared to a baseline linear regression on porosity alone, whose out-of-sample R² is 0.752.
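As an illustration of the final modeling comparison (a sketch with assumed arrays X and y; the real features are the geometric descriptors listed above and the strengths come from the SGCMC runs), the GBDT model and the porosity-only baseline could be fit as:

    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    # X: one row per sample (assume porosity in column 0, other descriptors after);
    # y: simulated strength for each sample
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    baseline = LinearRegression().fit(X_tr[:, :1], y_tr)  # porosity only
    gbdt = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

    print("baseline R2:", r2_score(y_te, baseline.predict(X_te[:, :1])))
    print("GBDT R2:", r2_score(y_te, gbdt.predict(X_te)))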
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Indexing Efficiency for Approximate Nearest Neighbor Search in High-dimensional Vector Databases</title>
<link href="https://hdl.handle.net/1721.1/156935" rel="alternate"/>
<author>
<name>Qin, Yuting</name>
</author>
<id>https://hdl.handle.net/1721.1/156935</id>
<updated>2024-09-25T03:03:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Understanding Indexing Efficiency for Approximate Nearest Neighbor Search in High-dimensional Vector Databases
Qin, Yuting
Deep learning has transformed almost all types of data (e.g., images, videos, documents) into high-dimensional vectors, which in turn form vector databases as the data engines of various applications. As a result, queries on vector databases have become the cornerstone of many important online services, including search, eCommerce, and recommendation systems. In a vector database, the major operation is to find the k closest vectors to a given query vector, known as k-Nearest-Neighbor (k-NN) search. Due to the massive data scale in practice, Approximate Nearest-Neighbor (ANN) search, which builds a search index offline to accelerate search online, is often used instead. One of the most promising ANN indexing approaches is the graph-based approach, which first constructs a proximity graph on the dataset, connecting pairs of vectors that are close to each other, and then traverses the proximity graph for each query to find the closest vectors. The search performance, in terms of the scope of traversal that leads to convergence, is highly dependent on the quality of the graph. Much prior work improves graph quality with various heuristics. However, no analysis or modeling work has quantitatively evaluated these heuristics and their impact on performance. Hence, it is unclear how to pick or combine the right heuristics to build a high-quality graph. This thesis aims to establish this connection and fill the gap. The key challenge in quantifying the heuristics is the complex tradeoff between search accuracy and search speed, which makes it almost impossible to establish an analytical model. To this end, we propose to leverage machine learning as the modeling tool. We first build a unified framework to characterize various graph-building heuristics by decoupling the graph construction and search phases. We then extract graph attributes (e.g., diameter) and collect ground-truth performance data (e.g., search speed and accuracy) within our framework, across multiple datasets and graph configurations. Based on the collected data, we train a linear regression model to predict the search performance. We show experimental results on our model's performance, and also discuss the implications for selecting heuristics that improve the quality of the indexing graphs.
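To make the traversal step concrete, here is a minimal greedy best-first search over a proximity graph (a sketch only; practical systems use beam-search variants, and the thesis's framework is more general):

    import numpy as np

    def greedy_search(graph, vectors, query, entry):
        # graph:   dict mapping a node id to a list of neighbor ids
        # vectors: np.ndarray with one row per node
        # entry:   node id where the traversal starts
        def dist(u):
            return float(np.linalg.norm(vectors[u] - query))

        current, d_cur = entry, dist(entry)
        while True:
            best = min(graph[current], key=dist, default=None)
            if best is None or dist(best) >= d_cur:
                return current  # no neighbor improves: a local minimum
            current, d_cur = best, dist(best)

How quickly this walk converges, and whether the local minimum it reaches is a true nearest neighbor, is exactly the graph-quality question the thesis models.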
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Overcoming the Expressivity-Efficiency Tradeoff in Program Induction</title>
<link href="https://hdl.handle.net/1721.1/156932" rel="alternate"/>
<author>
<name>Acquaviva, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/156932</id>
<updated>2024-09-25T04:03:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Overcoming the Expressivity-Efficiency Tradeoff in Program Induction
Acquaviva, Samuel
People are incredibly flexible and efficient inductive reasoners. Current approaches in program synthesis, by contrast, show strong domain-specific performance but are both less sample-efficient and less flexible. Large language models improve on this sample efficiency and domain generality, but lack robustness and still fall far short of people and traditional approaches on difficult induction tasks. In this thesis, we propose two hypotheses for how people seemingly overcome this trade-off between flexibility and efficiency. In the first, we propose that people may operate over an incredibly vast language that is made tractable via a strong, bottom-up proposal model. In the second, we propose that people may instead relax the need for such a strong proposal model by learning task-specific reasoning languages through experience. We build models operationalizing both hypotheses and show that they can improve the generality and efficiency of previous models.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An initial design procedure for the motion analysis of flexible marine risers</title>
<link href="https://hdl.handle.net/1721.1/156858" rel="alternate"/>
<author>
<name>Jones, Hobart Todd.</name>
</author>
<id>https://hdl.handle.net/1721.1/156858</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">An initial design procedure for the motion analysis of flexible marine risers
Jones, Hobart Todd.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1987; Bibliography: leaves 262-264.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic stability testing with a wind tunnel magnetic model suspension system</title>
<link href="https://hdl.handle.net/1721.1/156852" rel="alternate"/>
<author>
<name>Tilton, Edward Lee.</name>
</author>
<id>https://hdl.handle.net/1721.1/156852</id>
<updated>2025-10-31T20:12:36Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">Dynamic stability testing with a wind tunnel magnetic model suspension system
Tilton, Edward Lee.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1963; Includes bibliographical references (leaf 28).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cooperative research--improving university-industry joint efforts</title>
<link href="https://hdl.handle.net/1721.1/156850" rel="alternate"/>
<author>
<name>Jones, Ruth J.
            (Ruth Jiling)</name>
</author>
<id>https://hdl.handle.net/1721.1/156850</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1987-01-01T00:00:00Z</published>
<summary type="text">Cooperative research--improving university-industry joint efforts
Jones, Ruth J.
            (Ruth Jiling)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1987; Bibliography: leaves 69-70.
</summary>
<dc:date>1987-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thermal evaluation of selected ablative materials in transient, low-heat flux environments</title>
<link href="https://hdl.handle.net/1721.1/156849" rel="alternate"/>
<author>
<name>Marques, Joseph Peter.</name>
</author>
<id>https://hdl.handle.net/1721.1/156849</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1983-01-01T00:00:00Z</published>
<summary type="text">Thermal evaluation of selected ablative materials in transient, low-heat flux environments
Marques, Joseph Peter.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1983; "CSDL-T-809."; Includes bibliographical references.
</summary>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An examination of surgical scheduling policies.</title>
<link href="https://hdl.handle.net/1721.1/156846" rel="alternate"/>
<author>
<name>Hill, Claire Louise.</name>
</author>
<id>https://hdl.handle.net/1721.1/156846</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">An examination of surgical scheduling policies.
Hill, Claire Louise.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gradability in Count Nouns: Categorizing and Counting Part and Whole Objects in Children and Adults</title>
<link href="https://hdl.handle.net/1721.1/156840" rel="alternate"/>
<author>
<name>Sanchez, Karissa</name>
</author>
<id>https://hdl.handle.net/1721.1/156840</id>
<updated>2024-09-17T03:06:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Gradability in Count Nouns: Categorizing and Counting Part and Whole Objects in Children and Adults
Sanchez, Karissa
Some of the first words that children can comprehend and produce are nouns like ball and fork. Despite this apparent early command, children deviate from adult-like behavior when categorizing and quantifying objects falling under noun descriptions. Even beyond four years of age, when asked to count the Xs in a set that includes both whole objects falling under the noun and detached parts of such objects, they tend to count the individual partial objects as if they were wholes. Prior accounts attribute this difference either to a child's nascent numerical and quantificational abilities or to their semantic and pragmatic understanding of nominal label usage. These accounts are informed by experiments that variously probe categorization, counting, and quantification. However, no account fully explains the data across all experiments, making it difficult to adjudicate between them. In this thesis, I propose a new approach to analyzing the deviation between child and adult behavior by considering how both nominal and quantificational abilities could influence it. We design a novel paradigm that examines the same children's categorization of partial objects under noun labels and their numerical judgements about the items they had just categorized. This paradigm allows us to pinpoint where the cause of the deviation between child-like and adult-like behavior lies. Is it due to a difference in understanding nominal usage, in the ability to quantify items, or both? Ultimately, we find evidence that both nominal usage and quantificational abilities could contribute to the deviation in behavior. We also suggest that, in addition to an overly flexible standard of application for count nouns, children's lack of granularity in numerical measurements could be causing them to count partial objects as wholes. For instance, children might be less adept than adults at accessing measurements between 0 and 1, such as half an X, causing them to count partial objects under a noun label as one such object.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Algorithmic Design and Optimization for Quantum Computation with a Qubit-Oscillator System</title>
<link href="https://hdl.handle.net/1721.1/156839" rel="alternate"/>
<author>
<name>Mintzer, Gabriel L.</name>
</author>
<id>https://hdl.handle.net/1721.1/156839</id>
<updated>2024-09-17T03:19:12Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Algorithmic Design and Optimization for Quantum Computation with a Qubit-Oscillator System
Mintzer, Gabriel L.
Quantum computation has long been dominated by a digital approach using the qubit, which exists in a two-dimensional vector space, as its basic unit.  More recently, there has been increasing interest in an analog approach, which uses as its basic unit a qudit in an infinite-dimensional vector space.  Alongside these two approaches is a third less-studied approach, that of combining digital and analog quantum computation.  This approach is perhaps best exemplified by, and most researched via, the system of a qubit coupled to a quantum harmonic oscillator, which has been realized with many of the leading platforms for quantum computation.  In this thesis, we ask how machine learning and other high-level computational techniques can be employed in the design of applications of a qubit-oscillator system to implementing fundamental components of quantum technology.  In order to begin to answer this question and lay the groundwork for future investigation, both with this system and with others, we demonstrate the application of such high-level computational techniques toward addressing the problems of quantum compilation, quantum sensing, and quantum error-correction with the qubit-oscillator system.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Elucidating Targetable Genetic Vulnerabilities in Relapsed/Refractory Diffuse Large B-cell Lymphoma</title>
<link href="https://hdl.handle.net/1721.1/156838" rel="alternate"/>
<author>
<name>Li, Audrey</name>
</author>
<id>https://hdl.handle.net/1721.1/156838</id>
<updated>2024-09-17T03:45:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Elucidating Targetable Genetic Vulnerabilities in Relapsed/Refractory Diffuse Large B-cell Lymphoma
Li, Audrey
Diffuse large B-cell lymphoma (DLBCL), the most prevalent form of non-Hodgkin lymphoma, is marked by significant heterogeneity in its morphology, genetic irregularities, and clinical behavior. Current prognostic tools, including the International Prognostic Index and cell-of-origin transcriptional classifications such as germinal center B-cell-like and activated B-cell-like, do not adequately reflect DLBCL's complex nature. Front-line standard-of-care treatment predominantly consists of a regimen of cyclophosphamide, doxorubicin, prednisone, rituximab, and vincristine (R-CHOP); however, the relapse rate remains high, underscoring the need for improved diagnostic and therapeutic methods. In this comprehensive analysis, we investigated the genetic substructure of DLBCL in both newly diagnosed and relapsed/refractory cases, focusing on genetic abnormalities pertinent to relapsed settings and the immune microenvironment's influence on therapy response. Our findings revealed significant enrichment of specific genetic clusters, notably clusters 2 and 5, which are associated with an inferior prognosis and high relapse rates following R-CHOP therapy. These clusters were characterized by distinct genetic alterations, including prevalent mutations in TP53, BCL2, and MYD88. The results of this study suggest that integrating detailed genetic profiling into the clinical management of DLBCL could significantly refine therapeutic approaches, tailoring them to the unique genetic backdrop of each patient's disease. This approach promises to enhance the precision of prognostic assessments and the efficacy of subsequent therapeutic interventions, paving the way for personalized medicine in the treatment of DLBCL.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Causal Inference and Attribute Prediction Through Visual Information</title>
<link href="https://hdl.handle.net/1721.1/156837" rel="alternate"/>
<author>
<name>Chau, Eileen</name>
</author>
<id>https://hdl.handle.net/1721.1/156837</id>
<updated>2024-09-17T03:18:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Improving Causal Inference and Attribute Prediction Through Visual Information
Chau, Eileen
Causal inference is an active area of research in computer science and statistics because it supports causal conclusions that traditional statistics cannot. A naive way to identify the cause of an outcome is to use correlations, but this is not always accurate because other variables may indirectly affect the outcome. Causal inference aims to find the root cause by accounting for those variables, called confounders. Frequently, confounding variables are attributes in the existing data, but sometimes they are missing from it. In those cases, data analysts have to look for confounders in outside sources such as tables, knowledge graphs, and text. Our focus is to look for confounding variables in visual data such as videos and images. Discovering confounders from visual data is a challenge because videos and images are unstructured, unlike tables and graphs, so it is difficult both to identify features and to extract them. Additionally, the identified and extracted features must be relevant to the causal question being studied. With recent advancements in visual language models (VLMs) such as GPT-4V(ision), VLMs can provide a versatile solution to the confounder-discovery and feature-extraction problem when using visual data. This thesis proposal investigates confounder discovery, feature extraction, and causal inference from visual data by utilizing the power of VLMs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building a Distributed Transaction Processing System Using DARQ</title>
<link href="https://hdl.handle.net/1721.1/156836" rel="alternate"/>
<author>
<name>Zhu, Ophelia Min</name>
</author>
<id>https://hdl.handle.net/1721.1/156836</id>
<updated>2024-09-17T03:41:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Building a Distributed Transaction Processing System Using DARQ
Zhu, Ophelia Min
Building distributed transaction processing systems in the context of cloud microservices poses challenges related to fault tolerance, resilience, and composability. Composable Resilient Steps (CReSt) and its implementation, Deduplicated Asynchronously Recoverable Queues (DARQ), provide an abstraction that addresses these challenges by separating application logic from resilience mechanisms. This thesis explores the performance and usability of DARQ through the development of a distributed transaction processing system. DARQ is evaluated on the YCSB and TPCC benchmarks and on the ease of programming with it. The CReSt/DARQ abstraction, while requiring additional setup, simplifies programming fault-tolerant applications and provides performance optimizations out of the box compared to a standard baseline implementation, enabling a 6.89x speedup on TPCC. The abstraction reduced the amount of logic needed in components that required persistence, namely the write-ahead log and the two-phase commit protocol. As complex systems compose on one another, DARQ can be a useful abstraction for developers to simplify their application logic while providing fault tolerance and performance optimizations.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of Optimized Architected Reef Design in Random Oscillatory Motion for Maximized Wave Energy Dissipation and Coastal Preservation</title>
<link href="https://hdl.handle.net/1721.1/156835" rel="alternate"/>
<author>
<name>Sinha, Anjali</name>
</author>
<id>https://hdl.handle.net/1721.1/156835</id>
<updated>2024-09-17T03:58:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evaluation of Optimized Architected Reef Design in Random Oscillatory Motion for Maximized Wave Energy Dissipation and Coastal Preservation
Sinha, Anjali
The mitigation of exacerbated coastal erosion and reef degradation warrants thorough examination and enhancement of existing coastal defense strategies. Severe threats to ecosystems, communities, and infrastructure from climate change, including rising sea levels and intensified weather events, necessitate the development of new technologies for protection and damage prevention. The focus of this research is to inform optimization efforts for the design of an architected reef structure aimed at maximizing wave energy dissipation when placed under various real-world environmental conditions. By testing reef structures in sea storm conditions with random oscillatory motion, this study aims to assess the effectiveness of the architected reefs in mitigating the adverse effects of wave energy. Validating the performance of reef structures in random wave motion, as compared to regular, sinusoidal motion, will improve testing efficiency, advancing the development of sustainable and resilient solutions for future coastal preservation efforts.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A System to Exploit Symmetry in Common Tensor Kernels</title>
<link href="https://hdl.handle.net/1721.1/156832" rel="alternate"/>
<author>
<name>Patel, Radha</name>
</author>
<id>https://hdl.handle.net/1721.1/156832</id>
<updated>2024-09-17T03:10:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A System to Exploit Symmetry in Common Tensor Kernels
Patel, Radha
Symmetric tensors arise naturally in many domains, including linear algebra, statistics, physics, chemistry, and graph theory. Symmetry arises through both mathematical properties and scientific phenomena. Taking advantage of symmetry in matrices saves a factor of two, but taking advantage of symmetry in a tensor of order n can save a factor of n! in memory accesses and operations. However, implementing this symmetry by hand significantly increases complexity; for instance, leveraging symmetry in 2D BLAS nearly doubles the implementation burden, and the burden escalates further for higher-dimensional tensors. Existing compilers for these kernels either do not take advantage of symmetry or do not take advantage of it to the extent possible. This thesis identifies and categorizes methods to exploit symmetry in common and uncommon tensor kernels. We describe a methodology to systematically generate and optimize symmetric code and present a compiler in Julia that automates this process. Our symmetric implementation demonstrates significant speedups, ranging from 1.36x for SSYMV to 7.95x for a 4-dimensional MTTKRP, over the naive implementation of these kernels.
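As a toy version of the savings being exploited (a sketch only; the thesis's compiler generalizes this to higher-order tensors and generates such code automatically), a symmetric matrix-vector product can read each off-diagonal entry once:

    import numpy as np

    def symv_upper(A: np.ndarray, x: np.ndarray) -> np.ndarray:
        # y = A @ x for symmetric A, reading only the upper triangle:
        # each entry A[i, j] with j > i serves both y[i] and y[j],
        # roughly halving memory accesses versus the dense product.
        n = A.shape[0]
        y = np.zeros(n)
        for i in range(n):
            y[i] += A[i, i] * x[i]
            for j in range(i + 1, n):
                a = A[i, j]
                y[i] += a * x[j]
                y[j] += a * x[i]
        return y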
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing SCRAM: Privacy-Centric Approaches in Cyber Risk Measurement</title>
<link href="https://hdl.handle.net/1721.1/156831" rel="alternate"/>
<author>
<name>Magrefty, David S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156831</id>
<updated>2024-09-17T03:29:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Advancing SCRAM: Privacy-Centric Approaches in Cyber Risk Measurement
Magrefty, David S.
The Secure Cyber Risk Aggregation and Measurement (SCRAM) framework allows multiple parties to compute aggregate cyber-risk measurements without publicly disclosing any information about their identities or their data. Through the use of Multi-Party Computation (MPC) and Homomorphic Encryption (HE), the framework guarantees each party that its participation in the computation is confidential and that the aggregated results will not be decrypted without its authorization [1]. However, the system makes no guarantee about what the output of the aggregated computations reveals about each party's identity, security posture, and losses.&#13;
&#13;
In this work, we tackle the challenging problem of preserving privacy in small datasets while maximizing utility, a critical issue in the context of the SCRAM framework. We first construct a linear programming problem that demonstrates how the aggregate outputs of SCRAM do not provide adequate privacy, revealing sensitive information about individual parties. Then, we establish new privacy guarantees for the framework based on the concepts of Predicate Singling Out (PSO) and Differential Privacy (DP). These guarantees aim to protect the identity and data of the participating parties while still allowing for meaningful aggregate measurements.&#13;
&#13;
We then demonstrate the inadequacy of existing privacy solutions for small datasets and propose two novel techniques specifically designed for small datasets: integer-binary randomized response and clustering-based output perturbation. The integer-binary randomized response transforms integer inputs into binary questions, enabling the application of randomized response techniques while minimizing the impact on data utility. The clustering-based approach aggregates similar values into clusters and reports summary statistics, effectively obfuscating individual data points while preserving the overall distribution and relative magnitudes. These techniques offer a balance between privacy and utility, demonstrating the feasibility of privacy-preserving computation on small datasets.&#13;
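One way to picture the integer-binary idea (a sketch under assumptions; the thesis's exact construction may differ) is to expand an integer into bits and apply classical randomized response to each bit independently:

    import random

    def randomized_response_bits(value, n_bits=8, p_truth=0.75):
        # Report each bit truthfully with probability p_truth, flipped otherwise.
        # Aggregate bit frequencies can be debiased afterwards with the
        # standard randomized-response correction.
        noisy = 0
        for k in range(n_bits):
            bit = (value // 2**k) % 2
            if random.random() >= p_truth:
                bit = 1 - bit
            noisy += bit * 2**k
        return noisy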
&#13;
Our work highlights the limitations of existing privacy solutions for small datasets and the necessity of developing specialized techniques to address this challenge. The proposed methods not only enhance the privacy guarantees of the SCRAM framework but also contribute to the broader field of privacy-preserving computation, providing a foundation for future research and applications involving sensitive data aggregation and analysis in small dataset scenarios.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Emotion Vectorization Algorithm (EVA): Automated Music&#13;
Generation from Imaging and Emotion Inputs</title>
<link href="https://hdl.handle.net/1721.1/156829" rel="alternate"/>
<author>
<name>Liu, Dylan</name>
</author>
<id>https://hdl.handle.net/1721.1/156829</id>
<updated>2024-09-17T03:53:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Emotion Vectorization Algorithm (EVA): Automated Music&#13;
Generation from Imaging and Emotion Inputs
Liu, Dylan
Generative AI tools for the creative arts have become increasingly popular over the past few years. Several well-known models, such as ChatGPT and DALL-E, can even produce writing and artwork comparable to those created by human professionals. Thus, it's no surprise that many technology firms, such as OpenAI and Google, have trained models that can create music as well. These state-of-the-art models usually take in an artist or genre, and they output a song corresponding to the received inputs. However, none of these models are designed to generate music according to an emotional input, nor are they able to generate their own styles of music (i.e., they are all trained on well-known works).&#13;
&#13;
Because music is designed to target and evoke specific feelings in the listener, we aim to produce a tool that accounts for this emotional aspect. To this end we create EVA, a new type of generative music model. EVA is the first model that takes a quantitative representation of an emotion as input and returns an instrumentalized musical performance that evokes that emotion as output. Furthermore, without relying on past works of well-known composers for training data, EVA produces a unique style of music that is dissimilar to any particular artist.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance Engineering of Modular Symbols</title>
<link href="https://hdl.handle.net/1721.1/156828" rel="alternate"/>
<author>
<name>Boonsiriseth, Krit</name>
</author>
<id>https://hdl.handle.net/1721.1/156828</id>
<updated>2024-09-17T03:08:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Performance Engineering of Modular Symbols
Boonsiriseth, Krit
We present a new program, MFSplit, which computes information about newform subspaces for modular forms of weight 2 and trivial character. Modular forms are certain functions that appear in many subfields of mathematics, including number theory and complex analysis; newform subspaces are spaces spanned by a special type of modular form and are, in some sense, building blocks of spaces of modular forms. MFSplit is based on modular symbols, a formalism commonly used to compute modular forms. Existing computer algebra systems such as Sage and Magma include implementations of modular symbols. Our implementation applies the principles of performance engineering to this computational number theory problem, and MFSplit is at least 3 times faster than existing implementations. Consequently, we were able to compute information about newform subspaces for levels N ≤ 50000, extending previous efforts that computed this information up to N ≤ 16000. Based on this computation, we analyze the performance characteristics of our program and generate more data related to certain conjectures in mathematics.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computer Vision Techniques for Drill Bit Identification and Mechanical Wear Detection</title>
<link href="https://hdl.handle.net/1721.1/156827" rel="alternate"/>
<author>
<name>Darby, Brady J.</name>
</author>
<id>https://hdl.handle.net/1721.1/156827</id>
<updated>2024-09-17T03:50:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Computer Vision Techniques for Drill Bit Identification and Mechanical Wear Detection
Darby, Brady J.
Advances in computer vision techniques over the past decade have rapidly accumulated, enabling the application of vision systems to use cases that were once out of reach. In conjunction with standard image processing techniques, deep learning models for vision tasks have received increasing attention, and both see considerable utility in space exploration. Specifically, real-time obstacle detection and motion planning require advanced vision logic. Retroactive data analysis, however, is an area with less emphasis but promising applications for computer vision. This thesis project explores how both image processing and deep learning-based computer vision methods can be leveraged to analyze drill bits on board the Mars 2020 Perseverance Rover, a Jet Propulsion Laboratory (JPL) mission. The effectiveness of thresholding and segmentation on two critical tasks, drill bit identification and mechanical wear detection, is demonstrated. Then, transfer learning of convolutional neural networks (CNNs) is applied to the same tasks, allowing comparison of results. This thesis also explores a means of presenting processed image outputs to non-technical operators to assist manual analysis of drill bit wear state.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Expectation-based comprehension of linguistic input: facilitation from visual context</title>
<link href="https://hdl.handle.net/1721.1/156826" rel="alternate"/>
<author>
<name>Pushpita, Subha Nawer</name>
</author>
<id>https://hdl.handle.net/1721.1/156826</id>
<updated>2024-09-17T03:03:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Expectation-based comprehension of linguistic input: facilitation from visual context
Pushpita, Subha Nawer
Context fundamentally shapes real-time human language processing, creating linguistic expectations that drive efficient processing and accurate disambiguation (Kuperberg and Jaeger, 2016). In naturalistic language understanding, the visual scene often provides crucial context (Ferreira et al., 2013; Huettig et al., 2011). We know that visual context guides spoken word recognition (Allopenna et al., 1998), syntactic disambiguation (Tanenhaus et al., 1995), and prediction (Altmann and Kamide, 1999), but much about how visual context shapes real-time language comprehension remains unknown. In this project, we investigate how visual information penetrates the language processing system and real-time language understanding. We show that relevant visual context significantly facilitates reading comprehension, with the amount of facilitation modulated by a word's degree of grounding in that visual context (an image, in our case). Our results also demonstrate that the facilitation is largely mediated by multimodal surprisal (the relative entropy that the word induces between the distributions over interpretations given the previous words in the sentence and the image). We also found that the errors people are prone to make in reading comprehension tasks can be largely predicted by the amount of multimodal surprisal. The results further highlight the strong correlation between a word's degree of grounding and the reduction in surprisal when an image is present. Our work offers new possibilities for how multimodal large language models may be used in psycholinguistic research to investigate how visual context affects language processing. This work will also pioneer questions about how information processed in different modalities, such as audio, video, or structured visuals like graphs and diagrams, shapes our upcoming linguistic comprehension and even language generation, providing fundamental theoretical insights into the way we use language to navigate a complex world.
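For concreteness (notation assumed here, not taken from the thesis), the multimodal surprisal of the t-th word can be written, with z ranging over interpretations and I the image, as

    s(w_t) \;=\; D_{\mathrm{KL}}\big( p(z \mid w_{1:t}, I) \,\big\|\, p(z \mid w_{1:t-1}, I) \big),

which under a point estimate of the next word reduces to the familiar conditional surprisal -\log p(w_t \mid w_{1:t-1}, I); the image enters by reshaping both distributions.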
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Further Hardness Results for Stephen’s Sausage Roll</title>
<link href="https://hdl.handle.net/1721.1/156825" rel="alternate"/>
<author>
<name>Liu, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/156825</id>
<updated>2024-09-17T04:00:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Further Hardness Results for Stephen’s Sausage Roll
Liu, Jason
Stephen’s Sausage Roll is a relatively unstudied puzzle game with a fascinating set of mechanics for computational hardness problems. The only past results are from a class project in MIT’s 6.5440 class in Fall 2023, which dealt only with two specific subsets of the mechanics restricted to two-dimensional forms of the game [1]. This project presents a more complete characterization of problems based on Stephen’s Sausage Roll and provides solutions for a significant portion of them. In particular, both variants of Stephen’s Sausage Roll considered in prior work can be solved by one of these results.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Augmenting Inputs using a Novel Figure-to-Text Pipeline to Assist Visual Language Models in Answering Scientific Domain Queries</title>
<link href="https://hdl.handle.net/1721.1/156824" rel="alternate"/>
<author>
<name>Gupta, Sejal</name>
</author>
<id>https://hdl.handle.net/1721.1/156824</id>
<updated>2024-09-17T03:42:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Augmenting Inputs using a Novel Figure-to-Text Pipeline to Assist Visual Language Models in Answering Scientific Domain Queries
Gupta, Sejal
Recent advancements in visual language models (VLMs) have transformed the way we interpret and interact with digital imagery, bridging the gap between visual and textual data. However, these models, like Bard, GPT4-v, and LLava, often struggle with specialized fields, particularly when processing scientific imagery such as plots and graphs in scientific literature.&#13;
&#13;
In this thesis, we discuss the development of a pioneering reconstruction pipeline to extract metadata, regenerate plot data, and filter out extraneous noise, such as legends, from plot images. Ultimately, the collected information is presented to the VLM in a structured, textual manner to assist in answering domain-specific queries. The efficacy of this pipeline is evaluated using a novel dataset of scientific plots extracted from battery-domain literature, alongside existing benchmark datasets including PlotQA and ChartQA. Results on component accuracy, task accuracy, and question answering with augmented VLM inputs show promise for the future capabilities of this work.&#13;
&#13;
By assisting VLMs with scientific imagery, we aim not only to enhance the capabilities of VLMs in specialized scientific areas but also to transform the performance of VLMs in domain-specific areas as a whole. This thesis provides a detailed overview of the work, encompassing a literature review, methodology, results, and recommendations for future work.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adapting Transformer Encoder Architecture for Continuous Weather Datasets with Applications in Agriculture, Epidemiology and Climate Science</title>
<link href="https://hdl.handle.net/1721.1/156822" rel="alternate"/>
<author>
<name>Hasan, Adib</name>
</author>
<id>https://hdl.handle.net/1721.1/156822</id>
<updated>2024-09-17T03:17:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Adapting Transformer Encoder Architecture for Continuous Weather Datasets with Applications in Agriculture, Epidemiology and Climate Science
Hasan, Adib
This work introduces WeatherFormer, a transformer encoder-based model designed to robustly represent weather data from minimal observations. It addresses the challenge of modeling complex weather dynamics from small datasets, a bottleneck for many prediction tasks in agriculture, epidemiology, and climate science. Leveraging a novel pretraining dataset composed of 39 years of satellite measurements across the Americas, WeatherFormer achieves state-of-the-art performance in crop yield prediction and influenza forecasting. Technical innovations include a unique spatiotemporal encoding that captures geographical, annual, and seasonal variations, input scalers that adapt the transformer architecture to continuous weather data, and a pretraining strategy that learns representations robust to missing weather features. This thesis demonstrates for the first time the effectiveness of pretraining large transformer encoder models for weather-dependent applications.
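A rough sketch of a sinusoidal spatiotemporal encoding of the kind described (the functional form is an assumption for illustration; WeatherFormer's actual encoding is defined in the thesis):

    import numpy as np

    def spatiotemporal_encoding(lat, lon, year, week, d_model=64):
        # Put the week on a circle so week 52 sits next to week 1 (seasonality),
        # then encode latitude, longitude, and year at several frequencies.
        feats = [np.sin(2 * np.pi * week / 52), np.cos(2 * np.pi * week / 52)]
        for i in range((d_model - 2) // 6):
            f = 1.0 / (10000 ** (6 * i / d_model))
            for v in (lat, lon, year):
                feats += [np.sin(f * v), np.cos(f * v)]
        return np.array(feats)

Fixed features like these could then be combined with the scaled continuous weather inputs before the encoder.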
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Benthic: Designing Relational Traversal Structures to Enhance Diagram Accessibility</title>
<link href="https://hdl.handle.net/1721.1/156821" rel="alternate"/>
<author>
<name>Mei, Catherine</name>
</author>
<id>https://hdl.handle.net/1721.1/156821</id>
<updated>2024-09-17T03:52:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Benthic: Designing Relational Traversal Structures to Enhance Diagram Accessibility
Mei, Catherine
Diagrams are data structures for problem-solving and communication because they allow users to formalize and analyze complex concepts through spatial relations. However, their visual nature presents significant accessibility challenges for blind and low-vision users who rely on screen readers. Existing methods for making diagrams accessible often fall short, providing only superficial overviews and lacking detailed, navigable structures. This paper introduces Benthic, a system for generating intermediate representations and depicting relational information in diagrams. Benthic provides an interface that allows screen reader users to navigate the diagram data structure. Benthic uses a hypergraph traversal structure, where diagram nodes are grouped by hyperedges that represent diagram relations. These relations are presented in the screen reader interface according to their priority (or visual salience), allowing screen reader users to traverse the information similarly to how sighted users might view the diagram. Additionally, users can explore diagrams at various levels of detail by choosing to navigate high-level relations or more detailed relations based on their needs. We evaluate Benthic’s effectiveness through three comparative case studies with existing diagram accessibility systems. Benthic aims to create a design space of traversal structures that will allow blind and low-vision users to leverage the same affordances available to sighted users, enabling intuitive interaction and comprehensive understanding of diagrams.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hybrid Soft-Rigid Robots: Investigating Series and Parallel Configurations</title>
<link href="https://hdl.handle.net/1721.1/156820" rel="alternate"/>
<author>
<name>Sologuren, Emily R.</name>
</author>
<id>https://hdl.handle.net/1721.1/156820</id>
<updated>2024-09-17T03:51:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Hybrid Soft-Rigid Robots: Investigating Series and Parallel Configurations
Sologuren, Emily R.
The diverse set of traits that soft-rigid robots possess has the potential to be applied to a multitude of applications that require both strength and flexibility. This thesis looks at two kinds of soft-rigid robotic systems: the first is a series assembly of soft-rigid modules with stiffness modulation to form a soft-rigid robotic arm, and the second is a parallel assembly of rigid bones cast into silicone to form a passive soft-rigid flipper for a robotic sea turtle. We first introduce a new class of soft-rigid modules that can modulate their stiffness on a continuum through tendon-driven actuation and the integration of "soft" and "rigid" components. Their serial assembly forms a self-standing, soft-rigid robotic arm (SRRA). When coupled with an adapted soft PD+ controller, we generate trajectories that demonstrate the manipulator's ability to deform for maneuvering tasks and stiffen for load-bearing tasks. The robotic sea turtle's parallel, soft-rigid flippers emulate those of its animal counterpart. To leverage this structure for underwater locomotion, we look at a CPG-coupled reinforcement learning framework to optimize for a forward swimming gait.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Trade Off Performance and Safety in Mixed Autonomy Traffic</title>
<link href="https://hdl.handle.net/1721.1/156819" rel="alternate"/>
<author>
<name>Ding, Jessica H.</name>
</author>
<id>https://hdl.handle.net/1721.1/156819</id>
<updated>2024-09-17T03:02:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Learning to Trade Off Performance and Safety in Mixed Autonomy Traffic
Ding, Jessica H.
With the advent of autonomous vehicles (AVs), and with the slow but steady consumer adoption of AVs on road networks, there is a newfound need to study the interactions between efficient traffic flow and driving safety in mixed autonomy traffic. Extending reinforcement learning methods from robotic control and from learning methods for location-based actuators like traffic lights, this thesis considers control strategies afforded by individual AVs, which have recently shown potential for direct optimization of singular system objectives such as traffic smoothing and emission reduction, and introduces a reinforcement learning-based methodological framework to study the trade-offs between performance and safety at the fleet level. This investigation automatically produces Pareto frontier curves for four diverse traffic scenarios based on established mixed-traffic benchmarks. The results of this study will inform decision-makers about inherent trade-offs in traffic control systems, and the framework can be extended to study arbitrary objectives in complex control systems.
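One standard way to trace such a Pareto frontier (a sketch; the reward terms and sweep here are illustrative assumptions, not the thesis's exact formulation) is to train one policy per scalarization weight:

    import numpy as np

    def scalarized_reward(throughput, safety_penalty, w):
        # w = 0 optimizes performance only; w = 1 optimizes safety only.
        return (1.0 - w) * throughput - w * safety_penalty

    # Sweep the weight, train one RL policy per value (training not shown),
    # then plot each policy's (performance, safety) pair to trace the frontier.
    weights = np.linspace(0.0, 1.0, 11)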
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modulated Frequency Multiplier Inverter</title>
<link href="https://hdl.handle.net/1721.1/156818" rel="alternate"/>
<author>
<name>Coston, Sarah M.</name>
</author>
<id>https://hdl.handle.net/1721.1/156818</id>
<updated>2024-09-17T03:38:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modulated Frequency Multiplier Inverter
Coston, Sarah M.
Many industrial applications, such as plasma generation and wireless power transfer, require high-frequency power inverters (or rf power amplifiers) that can output a wide power range despite highly variable load reactances, while also maintaining high efficiency. Previous approaches to this problem, such as switched-mode inverters combined with tunable matching networks, provide adequate, albeit bulky, costly, and complex solutions at lower HF frequencies, while at higher frequencies inefficient linear amplifiers dominate. This thesis introduces an efficient inverter (or switched-mode power amplifier) approach that can provide wide-power-range control into a variable load while being scalable to higher output frequencies than conventional designs. We introduce a wide-range power amplifier that uses frequency control to manage reactive load variations, phase modulation to control output power, and frequency multiplication to achieve high output frequency, all while maintaining soft switching. This thesis provides a preliminary development of this modulated frequency multiplier inverter, analyzing and demonstrating its functionality and effectiveness through simulation, and showing its ability to achieve high output frequencies, manage wide load reactances, control power over a wide range, and maintain high efficiency.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recognizing Brain Regions in 2D Images from Brain Tissue</title>
<link href="https://hdl.handle.net/1721.1/156817" rel="alternate"/>
<author>
<name>Lohawala, Sabeen Imtiyaz</name>
</author>
<id>https://hdl.handle.net/1721.1/156817</id>
<updated>2024-09-17T03:45:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Recognizing Brain Regions in 2D Images from Brain Tissue
Lohawala, Sabeen Imtiyaz
Often, the first step in neuroimaging research is understanding which anatomical structures are present in an image. Structural MRI (sMRI) provides a clear, high-resolution visualization of the anatomy of the brain, capturing physical characteristics like the size and shape of different brain regions or the presence of abnormalities such as tumors. Whereas sMRI is more commonly acquired in vivo, the neuropathology of many neurodegenerative disorders, like Alzheimer's, requires post-mortem analysis of the brain through techniques like brain dissection, necessitating the use of other imaging modalities. Various tools and deep learning models have been developed to automatically identify anatomical structures in 3D MRI volumes. However, the only existing method to segment the anatomical structures in 2D brain slices, whether 2D slices extracted from an MRI or photographs of slices from a physically dissected brain, is manual labeling by a trained neuroanatomist, which is costly, resource-intensive, and time-consuming. In this project, we develop a new deep learning method to automatically segment 50 different regions in 2D photographs of the brain. Because no supervised dataset pairing images with segmentation maps exists for the photographs, we train the state-of-the-art SegFormer model on a supervised dataset of 2D MRI slices. We employ multiple data augmentation techniques to increase the variability of the training data to more closely resemble the variability seen in brain photographs, so that the model is robust enough to segment the anatomical regions in brain photographs. The SegFormer model achieved test Dice scores between 0.6 and 0.75 on the segmentation of 50 different anatomical regions in 2D MRI slices, depending on which augmentations were incorporated during training. Additionally, the project demonstrated that incorporating complex augmentations, both those that forced the model to learn the segmentation task with reduced contextual information and those that decoupled the tissue and background by manipulating them independently, helped improve the robustness of the model, allowing it to better segment 2D photographs of the brain. Although there is much room for improvement, this project provides a set of techniques that can be extended to further improve the model's robustness so that it can be applied to other imaging modalities in the future.
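Two of the augmentation ideas mentioned, reducing contextual information and decoupling tissue from background, can be sketched as simple array transforms (parameters are hypothetical; the project's full pipeline is richer):

    import numpy as np

    def mask_context(img, rng, n_patches=4, size=32):
        # Zero out random square patches so the model cannot lean on context.
        out = img.copy()
        h, w = out.shape[:2]
        for _ in range(n_patches):
            y, x = rng.integers(0, h - size), rng.integers(0, w - size)
            out[y:y + size, x:x + size] = 0
        return out

    def jitter_background(img, tissue_mask, rng):
        # Perturb only the background, decoupling it from tissue intensities.
        out = img.astype(float).copy()
        noise = rng.normal(0.0, 0.1 * out.max(), size=out.shape)
        out[~tissue_mask] += noise[~tissue_mask]
        return out

Here rng would be an np.random.Generator, e.g. np.random.default_rng().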
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Genomic Language Models for Protein Function and Property Prediction</title>
<link href="https://hdl.handle.net/1721.1/156816" rel="alternate"/>
<author>
<name>Boshar, Sam T.</name>
</author>
<id>https://hdl.handle.net/1721.1/156816</id>
<updated>2024-09-17T03:48:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Genomic Language Models for Protein Function and Property Prediction
Boshar, Sam T.
In the field of natural language processing (NLP), large language models (LLMs) trained on enormous corpora of unlabeled sequence data have demonstrated state-of-the-art performance on a variety of downstream tasks. This approach is appealing because one model can be easily adapted to do well in many modalities, rather than requiring many specialized models. The same architecture has found great success modeling biological data, including protein, mRNA, and genomic sequences. Representations from biological language models have also outperformed highly specialized models, especially in data-scarce scenarios. However, since the genome contains all of the information encoding proteins, genomic language models (gLMs) have the potential to model DNA, RNA, and proteins. In spite of this, the performance of gLMs on proteins is largely unknown due to the lack of datasets pairing proteins with their true coding sequences. In this work, we curate five such coding sequence datasets and use them to study gLM and protein language model (pLM) performance on protein function and property prediction. We show that gLMs are competitive with, and on some tasks even outperform, their pLM counterparts, and that they perform best using the curated true coding sequences rather than alternative codon sampling strategies. We perform a series of experiments to find interpretable explanations for gLM performance, and investigate architecture changes to address their shortcomings and improve the ability of gLMs to represent proteins. We found that a joint genomic-proteomic architecture outperforms each individual approach, showing that they capture different but complementary sequence representations. We identify examples of such distinct representations in a detailed analysis of their respective embedding spaces. In studying the application of gLMs to proteomics, we hope to encourage further research into a unified and synergistic approach to many biological modalities.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Intermediate Representation for Expressing and Optimizing Computations in Lattice Quantum Chromodynamics</title>
<link href="https://hdl.handle.net/1721.1/156815" rel="alternate"/>
<author>
<name>Sollee III, Richard P.</name>
</author>
<id>https://hdl.handle.net/1721.1/156815</id>
<updated>2024-09-17T03:22:34Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Intermediate Representation for Expressing and Optimizing Computations in Lattice Quantum Chromodynamics
Sollee III, Richard P.
The field of Lattice Quantum Chromodynamics faces massive scaling problems because of the large iteration spaces of the required sums, which scale with the factorial of the number of atoms represented. The LQCD IR and rewrite system from this thesis allows these scaling problems to be tackled more quickly and effectively. The IR can represent both mathematical concepts, such as products and sums, and algorithmic concepts, such as precomputations. Our system requires minimal code to initialize the naive algorithm and apply effective rewrites to increase performance. This development-time speedup makes it easy to try various approaches. The rewrite system allows correctness to be maintained at each step while drastically changing the algorithmic approach in search of better asymptotic bounds. Our approaches lead to up to 5x speedups and at worst 2x slowdowns for our most important problem, but with a better development cycle, requiring only hundreds of SLOC compared to thousands.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Video Games for Empathy and Understanding Towards Human Migration</title>
<link href="https://hdl.handle.net/1721.1/156814" rel="alternate"/>
<author>
<name>Casillas, Enrique</name>
</author>
<id>https://hdl.handle.net/1721.1/156814</id>
<updated>2024-09-17T03:38:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Video Games for Empathy and Understanding Towards Human Migration
Casillas, Enrique
Video games have recently started playing a more important role in education, though there is limited research on how they can be used to generate empathy and understanding towards their subject matter. To address this limitation, we present Vida Migrante, an online interactive simulation game about the struggles of Venezuelan migrants living in Ecuador, and analyze whether the game can foster empathy and understanding towards the migrant experience. This study uniquely looks at how the game can communicate findings from real migrant data in such a way that users can empathize with them. Fifty-two students at the Massachusetts Institute of Technology were surveyed and asked a series of Likert-style and open-ended questions to determine whether this game generated empathy and understanding towards the topic. An in-depth quantitative and qualitative analysis reveals that although respondents already had high levels of empathy and understanding, the game was able to increase those levels significantly. This work shows that video games like these can be used not only to increase familiarity and understanding of a humanitarian issue, but also empathy towards the data and the presented human experiences. This paper lastly contributes a discussion of the specific features of this game that allow empathy generation to occur, which may help motivate future work to create effective games that allow players to empathize with important issues in today’s technology-driven world.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Policy-Based Access Control in Federated Clinical Question Answering</title>
<link href="https://hdl.handle.net/1721.1/156813" rel="alternate"/>
<author>
<name>Chen, Alice</name>
</author>
<id>https://hdl.handle.net/1721.1/156813</id>
<updated>2024-09-17T03:44:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Policy-Based Access Control in Federated Clinical Question Answering
Chen, Alice
Retrieval augmented generation (RAG) has recently expanded large language model versatility in answering domain-specific questions using dynamic external knowledge bases, particularly demonstrating promise in assisting clinical settings. However, due to its sensitive nature, patient medical data often requires retrieval to be federated across a decentralized network of hospital institutions, each maintaining internal databases and access control policies. Applying standard RAG to clinical question-answering tasks is complicated by the lack of an interface for hospital resource owners to regulate and restrict access to sensitive clinical documents during retrieval, which is essential for model feasibility in practice. We propose to leverage federated RAG retrieval for clinical trends inference across distributed medical records while adding authorization security mechanisms during retrieval to guarantee the security of patient data. We propose (i) user identity authentication administered through a trusted federation of per-hospital OpenID Connect servers, (ii) a framework for integrating policy-based access control (PBAC) security mechanisms at flexible granularity into a federated RAG system to restrict medical data access based on user role attributes, and (iii) ClinicalTrendQA, a novel dataset to evaluate model performance for synthesizing clinical trends grounded on decentralized patient EHR information. To facilitate evaluation of how well our PBAC authorization framework protects against information leakage during retrieval, we additionally present a federated 3-hospital case study and demonstrate that the same ClinicalTrendQA query under different user profiles holding varying degrees of access privileges yields the expected reduction in retrieved EHR information. We also analyze metrics concerning the impact of this retrieval loss on end-to-end response quality against federated insecure and centralized RAG baselines.
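A minimal sketch of the PBAC idea at retrieval time; the roles, document tags, and per-hospital search interface below are illustrative assumptions, not details from the thesis:

# Each hospital filters its own hits before anything leaves the site.
POLICIES = {
    "physician":  {"clinical_note", "lab_result", "medication"},
    "researcher": {"lab_result"},    # e.g. de-identified results only
    "billing":    {"medication"},
}

def authorized(doc, role):
    return doc["tag"] in POLICIES.get(role, set())

def federated_retrieve(query, hospitals, role):
    results = []
    for hospital in hospitals:
        hits = hospital.search(query)  # assumed per-site retriever interface
        results.extend(d for d in hits if authorized(d, role))
    return results  # only policy-permitted documents reach the LLM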
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Gender Bias in Large Language Models: When ChatGPT Becomes a Hiring Manager</title>
<link href="https://hdl.handle.net/1721.1/156812" rel="alternate"/>
<author>
<name>Gerszberg, Nina R.</name>
</author>
<id>https://hdl.handle.net/1721.1/156812</id>
<updated>2024-09-17T03:39:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Quantifying Gender Bias in Large Language Models: When ChatGPT Becomes a Hiring Manager
Gerszberg, Nina R.
The growing importance of large language models (LLMs) in daily life has heightened awareness and concerns about the fact that LLMs exhibit many of the same biases as their creators. In the context of hiring decisions, we quantify the degree to which LLMs perpetuate biases originating from their training data and investigate prompt engineering as a bias-mitigation technique. Our findings suggest that for a given resumé, an LLM is more likely to hire a candidate and perceive them as more qualified if the candidate is female, but still recommends lower pay relative to male candidates.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Compressive Enumeration to Solve Algorithmic Tasks With Language Models</title>
<link href="https://hdl.handle.net/1721.1/156810" rel="alternate"/>
<author>
<name>Wang, Annie</name>
</author>
<id>https://hdl.handle.net/1721.1/156810</id>
<updated>2024-09-17T03:52:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Using Compressive Enumeration to Solve Algorithmic Tasks With Language Models
Wang, Annie
Large language models are useful tools for generating and synthesizing short code snippets that solve straightforward programming problems. However, their performance on more advanced code generation tasks remains limited, due to the complex algorithmic nature of these tasks. Yet, large language models are often capable of crafting nearly correct answers to such questions; model-generated responses are prone to small errors that may render an otherwise-correct program incorrect. To address this issue, we investigate whether large language models can be combined with enumerative program synthesis techniques to build solutions to difficult algorithmic problems. This thesis presents and evaluates compressive enumeration as a strategy for improving large language model performance on code generation tasks. Given a question q and a corpus P of model-generated responses to q, compressive enumeration isolates shared code components within P; combining these components in novel ways may make it possible to generate a new solution to q. Experimentation with the Stitch library learning algorithm shows that compressive enumeration is able to generate a working solution for a small number of questions. However, its best performance is typically attained on problems that are already solvable by current large language models. This suggests that compressive enumeration has limited practical value as a code generation strategy; however, future improvements to the technique may make it more widely applicable.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Optimality of Several Algorithms on Polynomial Regression of Empirical Bayes Poisson Model</title>
<link href="https://hdl.handle.net/1721.1/156808" rel="alternate"/>
<author>
<name>Kang, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/156808</id>
<updated>2024-09-17T03:01:23Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">On the Optimality of Several Algorithms on Polynomial Regression of Empicial Bayes Poisson Model
Kang, Benjamin
Empirical Bayes estimation for the Poisson mixture model [1], [2] has been an important problem studied for the past 70 years. In this thesis, we investigate extensions of this problem to estimating polynomial functions of the Poisson parameter rather than just the parameter itself. We generalize three different algorithms for estimation: the Robbins estimator from [2], the NPMLE method from [3], and the ERM method from [4]. For each of these algorithms, we prove upper bounds on the minimax regret. We also prove a general lower bound that applies to any estimation algorithm for this setup. In addition to the theoretical bounds, we empirically simulate the performance of all three algorithms in relation to both the number of samples and the degree of the polynomial function we estimate.
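For reference, Robbins’ classical estimator of E[theta | X = x], the starting point that the thesis generalizes to polynomial functionals, can be sketched in a few lines (the Gamma prior is only a toy choice):

from collections import Counter
import numpy as np

def robbins(samples):
    # Robbins' estimate: E[theta | X = x] is approximately (x+1) N(x+1) / N(x),
    # where N(x) counts how many samples equal x.
    counts = Counter(samples)
    return {x: (x + 1) * counts.get(x + 1, 0) / counts[x] for x in counts}

rng = np.random.default_rng(0)
thetas = rng.gamma(2.0, 1.0, size=10000)  # latent Poisson means
xs = rng.poisson(thetas)                  # one Poisson draw per mean
print(robbins(xs)[1])                     # estimate of E[theta | X = 1]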
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large-scale Trends in Vision Systems: Novel Methods for Identifiability</title>
<link href="https://hdl.handle.net/1721.1/156807" rel="alternate"/>
<author>
<name>Yang, Helen</name>
</author>
<id>https://hdl.handle.net/1721.1/156807</id>
<updated>2024-09-17T03:21:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Large-scale Trends in Vision Systems: Novel Methods for Identifiability
Yang, Helen
While the analogy between artificial neural networks (ANNs) and the brain has been well validated in past work, one question without a clear answer is: what causes an ANN to be more or less brain-like? A better understanding of this may lead to the discovery and implementation of more performant and human-like AI systems. However, despite ANNs having been proposed as models of primate visual systems, their success in predicting both the neural and behavioral responses of primates has not been without contention. Increasing architectural and dataset sizes bring forth concerns of black boxes (artificial systems) explaining other black boxes (human intelligence), stalling our understanding of the relationship between artificial and biological visual systems. In addition, there is increasing empirical evidence that the representations learned by artificial vision systems are convergent: artificial vision systems trained on large datasets tend to learn similar representations despite numerous differences in architecture and training. This lack of identifiability presents a challenge to the comparison pipelines commonly used to validate artificial vision systems as models of biological vision: if two artificial vision systems with different architectures have convergent representations, we are limited in our ability to reason about the structural properties of an individual artificial vision system and determine which system provides a better model of the brain. In light of these issues, we provide an analysis of current frameworks for measuring artificial and biological visual system similarity and propose a novel approach toward improving identifiability between artificial vision systems via contrastive stimuli. We show that our approach offers better identifiability between artificial vision systems compared to standard benchmarks.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Smarter Agents for Agent-Based Models</title>
<link href="https://hdl.handle.net/1721.1/156806" rel="alternate"/>
<author>
<name>Kuru, Nurullah Giray</name>
</author>
<id>https://hdl.handle.net/1721.1/156806</id>
<updated>2024-09-17T03:35:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Smarter Agents for Agent-Based Models
Kuru, Nurullah Giray
Agent-based models (ABMs) are powerful tools for decision-making due to their ability to simulate systems with individual-level granularity. Recent advances have mitigated the computational costs of scaling ABMs to real-world population sizes; however, the potential of ABMs is also constrained by the quality of the underlying data and feedback loops. We introduce two approaches to improving data quality in ABMs. First, we incorporate LLM peers in ABM simulations to guide agent decision-making and thought generation, leveraging the world model learned by LLMs. We analyze both proprietary and open-source LLMs for suitability in ABM use, and find GPT-3.5 to be a strong candidate for distinguishing between agent characteristics and producing plausible isolation decisions in an epidemic. We introduce an effective and scalable system for using LLMs in ABMs by characterizing agents using a small set of characteristics and using LLM peers to guide agent groups. We conduct experiments in a synthetic replica of the Astoria neighborhood of New York City and show that this system achieves better calibration and enables more detailed analysis. Second, we propose privacy-preserving ABMs that can integrate real agents into ABM simulations in a distributed system using cryptographic protocols. We describe algorithms for running simulations, calibration, and analysis of ABMs, and provide a proof of concept. This approach enables adding real human feedback into ABMs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inverse Constitutional AI</title>
<link href="https://hdl.handle.net/1721.1/156804" rel="alternate"/>
<author>
<name>Kostolansky, Timothy H.</name>
</author>
<id>https://hdl.handle.net/1721.1/156804</id>
<updated>2024-09-17T03:47:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inverse Constitutional AI
Kostolansky, Timothy H.
The alignment of large language models (LLMs) to human values has become increasingly pressing as their scale and capabilities have grown. One important feature of alignment is understanding the preference datasets that are used to finetune LLMs. Inverse Constitutional AI (ICAI) is presented as a novel interpretability framework to discover the principles underlying preference datasets. Motivated by the Constitutional AI training paradigm of instilling principles in models, ICAI aims to extract a succinct "constitution" of natural language principles from data. This thesis contributes an initial attempt at realizing ICAI through a clustering-based methodology applied to preference datasets. The proposed approach involves embedding preference pairs into vector representations, clustering the embeddings to group related preferences, generating interpretable principles for each cluster using language models, and validating these principles against held-out samples. Empirical evaluation is conducted on the hh-rlhf dataset for training helpful and harmless AI assistants, as well as a synthetic dataset constructed by relabeling hh-rlhf samples with predefined principles. Results demonstrate promising capabilities in clustering semantically coherent topics and generating human-interpretable principles, while also highlighting limitations in achieving fully disentangled, principle-based clustering. Directions for future work are discussed, including soft clustering, bottom-up principle extraction, prompt optimization approaches, and sparse dictionary learning methods. In this work, I argue the following thesis: ICAI shows promise as a strategy to disentangle and explain the preferences represented in preference data. A clustering-based approach to ICAI, though, fails to successfully extract a constitution of principles from preference data, because clustering occurs along the topics in the data rather than along the preferences themselves.
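A minimal sketch of the first two pipeline steps (embedding and clustering), with random stand-ins for the embedded preference pairs; the LLM principle-generation and validation steps are indicated only in comments:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
pair_embeddings = rng.normal(size=(500, 64))  # stand-in: one row per embedded
                                              # (chosen, rejected) comparison
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(pair_embeddings)

for c in range(8):
    members = np.flatnonzero(kmeans.labels_ == c)
    # In the full pipeline, the member pairs are shown to a language model,
    # which states the shared principle in one sentence; the principle is
    # then validated against held-out pairs.
    print(f"cluster {c}: {len(members)} preference pairs")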
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Evaluation of an LLM-Based Tool for Automatically Building Web Applications</title>
<link href="https://hdl.handle.net/1721.1/156803" rel="alternate"/>
<author>
<name>Voronin, Diana Nguyen</name>
</author>
<id>https://hdl.handle.net/1721.1/156803</id>
<updated>2024-09-17T03:55:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Development and Evaluation of an LLM-Based Tool for Automatically Building Web Applications
Voronin, Diana Nguyen
In this thesis, we present Kodless, a platform that enables users to automatically build web applications from natural language descriptions without requiring them to write, review, or debug the generated code. Kodless structures applications using concept design, a theory which views software as a collection of interacting yet independent units of functionality mapping to human behavior patterns. The platform leverages large language models to generate functional backend code, combining concept design principles with a robust framework for developing concept implementations and integrating them into a standardized application architecture. To evaluate Kodless's performance, we conduct a study in which we use the platform to develop an application through an iterative prompt refinement process. We argue that the case study illustrates the importance of concept-driven prompt engineering and offer guiding principles for designing effective prompts. Furthermore, this thesis contributes improvements to the Kodless platform, including extended support for MongoDB integration and the automatic generation of a frontend testing client. We also introduce a frontend code generation assistant to enable automatic generation of reactive user interfaces. Ultimately, Kodless represents a promising path towards changing how we approach AI driven software design and development.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Utility Libraries for Traversing and Manipulating Tree-like Data Structures with Varying Schemas</title>
<link href="https://hdl.handle.net/1721.1/156802" rel="alternate"/>
<author>
<name>Janicki, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/156802</id>
<updated>2024-09-17T03:31:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Utility Libraries for Traversing and Manipulating Tree-like Data Structures with Varying Schemas
Janicki, Adam
Tree-like data structures are commonly used data types found in the wild in a wide array of JavaScript projects. A specific example of one of these structures is an abstract syntax tree (AST). However, the lack of good libraries for handling trees has led many developers and large-scale code bases to implement their own utility functions over and over again. To address these concerns within the JavaScript developer community, we propose Treecle and Vastly: two free open-source libraries that provide utility functions and operations to help developers work with trees and ASTs, respectively.
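Treecle and Vastly themselves are JavaScript libraries; the following Python sketch only illustrates the kind of schema-agnostic traversal utility they provide:

def walk(node, visit, children_key="children"):
    # Depth-first traversal over any dict-based tree; the configurable
    # child key lets one utility serve trees with varying schemas.
    visit(node)
    for child in node.get(children_key, []):
        walk(child, visit, children_key)

ast = {"type": "Program",
       "children": [{"type": "Literal", "value": 42, "children": []}]}
walk(ast, lambda n: print(n["type"]))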
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Recommendation System for Ideation: Enhancing Supermind Ideator</title>
<link href="https://hdl.handle.net/1721.1/156801" rel="alternate"/>
<author>
<name>Papacica, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/156801</id>
<updated>2024-09-17T03:49:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Recommendation System for Ideation: Enhancing Supermind Ideator
Papacica, Daniel
Recommendation systems are widely utilized across various domains such as e-commerce, entertainment, and social media to enhance user experience by personalizing content and suggestions. Despite their widespread use, these systems are rarely applied to the ideation process, presenting unique challenges due to the inherently creative and complex nature of generating and developing novel ideas. This thesis details the creation and assessment of a recommendation system for the Supermind Ideator platform, aimed at enhancing the creative ideation process. The recommendation system leverages machine learning techniques to dynamically adapt to user input statements based on statement "scope", a sub-task that is thoroughly explored and tested in this paper. "Scope" is then integrated into the recommendation system’s static rules-based algorithm to suggest the next best Supermind Design "move". This work not only contributes a practical tool to the field of ideation but also extends the theoretical understanding of recommendation systems in facilitating complex, subjective cognitive tasks.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empowering Analog Integrated Circuit Design through Large Language Models and Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/156800" rel="alternate"/>
<author>
<name>Terpstra, Irene</name>
</author>
<id>https://hdl.handle.net/1721.1/156800</id>
<updated>2024-09-17T03:32:00Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Empowering Analog Integrated Circuit Design through Large Language Models and Reinforcement Learning
Terpstra, Irene
Analog Integrated Circuit design consists of several complex steps that are difficult to optimize. Automating the transistor sizing process specifically comes with many challenges. The problem has a large design space, requires complex performance trade-offs, and needs to adjust to rapidly advancing semiconductor technology. As a result, the task of sizing transistors is traditionally performed by experts with years of experience. Various optimization and reinforcement learning methods have been proposed to automate this process. While they have shown great competency, these methods must learn complex circuit dynamics from scratch, resulting in black-box solutions. This thesis proposes that the background knowledge contained in Large Language Models (LLMs) can guide the decisions of circuit designers, and that this guidance can be used to improve the exploration efficiency of both mathematical optimizers and reinforcement learning algorithms. This thesis demonstrates that LLMs possess a foundational understanding of analog circuit design, including circuit calculation and netlist comprehension. It also builds a framework that integrates LLMs as heuristic tools with existing optimization methods. This is a first-of-its-kind exploration into linking LLMs with optimization techniques for analog circuit design. While the current experimental results do not show improvements in design quality or speed, this work establishes the groundwork for further advancements with more sophisticated or fine-tuned LLMs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coherency Loss for Hierarchical Time Series Forecasting</title>
<link href="https://hdl.handle.net/1721.1/156799" rel="alternate"/>
<author>
<name>Hensgen, Michael Lowell</name>
</author>
<id>https://hdl.handle.net/1721.1/156799</id>
<updated>2024-09-17T03:02:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Coherency Loss for Hierarchical Time Series Forecasting
Hensgen, Michael Lowell
In hierarchical time series forecasting, some series are aggregated from others, imposing known coherency constraints between series. We present a new method for enforcing coherency on hierarchical time series forecasts. We propose a new loss function, called Network Coherency Loss, that minimizes the coherency loss of the weight and bias of the final linear layer of a neural network. We compare it against a baseline without coherency and a state-of-the-art method that uses projection to strictly enforce coherency. We find that, by choosing our Network Coherency Loss parameters based on validation data, we produce improved accuracy over our two benchmark models on four datasets of varying sizes. We also find that, when compared to an alternative loss function also designed to produce coherency, our Network Coherency Loss function produces similar accuracies but improves coherency on the test data.
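A sketch of a coherency penalty under assumed notation (PyTorch): S maps the bottom-level series to their aggregates, so coherent forecasts satisfy y_agg = S y_bottom. The thesis applies its penalty to the final layer’s weight and bias; this sketch penalizes the forecasts directly for illustration:

import torch

def coherency_penalty(y_hat_agg, y_hat_bottom, S):
    return torch.mean((y_hat_agg - y_hat_bottom @ S.T) ** 2)

S = torch.tensor([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]])  # 2 aggregates of 3 series
y_bottom = torch.randn(16, 3, requires_grad=True)     # batch of bottom forecasts
y_agg = torch.randn(16, 2, requires_grad=True)        # batch of aggregate forecasts
loss = coherency_penalty(y_agg, y_bottom, S)
loss.backward()  # added to the forecasting loss during training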
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Representation Learning for Extrapolation via Bilinear Transduction</title>
<link href="https://hdl.handle.net/1721.1/156798" rel="alternate"/>
<author>
<name>Spiride, Andrei</name>
</author>
<id>https://hdl.handle.net/1721.1/156798</id>
<updated>2024-09-17T03:01:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Representation Learning for Extrapolation via Bilinear Transduction
Spiride, Andrei
Typical machine learning systems, such as deep neural networks, perform well at predicting on new examples that come from the same distribution as the initial training data. However, these systems are typically not robust to examples that do not come from the training distribution. Such testing samples are characterized as out-of-distribution (OOD). Building on a proven bilinear transduction method [1] for accurately predicting on OOD examples, we propose a method to apply this framework to learned representations instead of hand-designed state representations. This work is geared towards enabling the bilinear transduction approach to generalize to a wider range of data types and tasks when such designed representations are not available. We use deep neural networks to learn representations of certain data types, such as images, and apply bilinear transduction to these learned representations. This has the potential to further expand the out-of-support prediction capabilities of the bilinear transduction framework.
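One common form of bilinear transduction predicts a query’s target from a training anchor through a bilinear interaction between an embedding of the anchor and an embedding of the query-anchor difference; the PyTorch sketch below is illustrative, with assumed dimensions and architecture:

import torch
import torch.nn as nn

class BilinearTransducer(nn.Module):
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.anchor_net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.delta_net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.bilinear = nn.Bilinear(hidden, hidden, 1)

    def forward(self, x_query, x_anchor):
        # Extrapolation reduces to interpolation over query-anchor differences.
        return self.bilinear(self.delta_net(x_query - x_anchor),
                             self.anchor_net(x_anchor))

model = BilinearTransducer(dim=8)
pred = model(torch.randn(4, 8), torch.randn(4, 8))  # batch of query/anchor pairs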
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Connecting Deep Learning Models to the Human Brain</title>
<link href="https://hdl.handle.net/1721.1/156797" rel="alternate"/>
<author>
<name>Subramaniam, Vighnesh</name>
</author>
<id>https://hdl.handle.net/1721.1/156797</id>
<updated>2024-09-17T03:24:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Connecting Deep Learning Models to the Human Brain
Subramaniam, Vighnesh
In this thesis, we introduce innovative methodologies for connecting new deep learning models, particularly models that integrate vision and language, with human brain processing. These models have shown remarkable advancements in tasks such as object recognition, scene classification, and language processing, achieving near-human accuracy in some cases. This raises intriguing questions about how closely the computations and geometric structure of these models mirror those of the human brain. Our method starts with measuring brain activity in response to vision and language stimuli and then exposes these stimuli to deep learning models to collect their internal activations. We analyze the similarity between these activations and brain activity using a specific representational distance metric. We focus on introducing statistical algorithms to assess whether one model is significantly more similar to the brain than another. Through our novel methodology, we assess whether there is a more significant correlation between brain regions and multimodal models compared to unimodal ones. Our investigation reveals brain areas associated with vision-language integration and models of vision-language integration that are potentially most similar to the brain.
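The generic recipe behind such comparisons can be sketched with representational dissimilarity matrices (RDMs); the thesis uses its own representational distance metric and statistical tests, and the data below are random stand-ins:

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
brain_resp = rng.normal(size=(50, 200))  # 50 stimuli x 200 recorded features
model_acts = rng.normal(size=(50, 768))  # same 50 stimuli x model activations

brain_rdm = pdist(brain_resp, metric="correlation")  # condensed RDM per system
model_rdm = pdist(model_acts, metric="correlation")
rho, _ = spearmanr(brain_rdm, model_rdm)
print(rho)  # higher rho = more brain-like; compare across candidate models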
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing and Optimizing the Networking Stack in Databases</title>
<link href="https://hdl.handle.net/1721.1/156796" rel="alternate"/>
<author>
<name>Kafle, Prabhakar</name>
</author>
<id>https://hdl.handle.net/1721.1/156796</id>
<updated>2024-09-17T03:05:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Characterizing and Optimizing the Networking Stack in Databases
Kafle, Prabhakar
Databases are latency-critical applications, and client-database communication is a significant contributor to end-to-end latency. However, the database community has paid little attention to the networking overhead in databases. This thesis focuses on the overhead from the network stack in the server. I characterize the contributions of different components in the database server to the end-to-end latency, focusing on the networking stack. I observe that in transactions involving a single read query, the server network stack accounts for almost 15% of the total end-to-end latency in VoltDB. Most of this overhead comes from TCP packet processing, interrupt handling, context switches, and I/O multiplexing. Additionally, this work explores avenues to optimize the networking stack overhead. I find that moving networking to userspace by bypassing the kernel can significantly reduce the networking stack overhead. This switch in the network stack can help achieve a significant improvement in throughput and lower latency for both benchmarks used. While the thesis focuses on the server networking stack, similar optimizations can be applied to the client side if the necessary hardware (CPU, NIC) is available.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Language Models to Understand Molecular Structures</title>
<link href="https://hdl.handle.net/1721.1/156795" rel="alternate"/>
<author>
<name>Fan, Vincent K.</name>
</author>
<id>https://hdl.handle.net/1721.1/156795</id>
<updated>2024-09-17T03:58:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Using Language Models to Understand Molecular Structures
Fan, Vincent K.
In data rich modalities such as text and images, large foundation models have demonstrated remarkable capabilities. However, in life sciences, datasets of comparable scale are prohibitively costly to assemble, pointing towards the imperative need to leverage advances in language modelling to improve machine learning techniques for life sciences. This thesis details research in two such directions, information extraction and text retrieval. Information extraction from chemistry literature is vital for constructing up-to-date reaction databases. Complete extraction requires combining information across text, tables, and figures, whereas prior work has mainly investigated extracting reactions from single modalities. In this thesis, I present OpenChemIE to address this complex challenge and enable the extraction of reaction data at the document level. OpenChemIE approaches the problem in two steps: extracting relevant information from individual modalities with specialized neural models and then integrating the results via chemistry-informed algorithms to obtain a final list of reactions. I meticulously annotated a challenging dataset of reaction schemes with R-groups to evaluate OpenChemIE, which achieves an F1 score of 69.5%. Additionally, the reaction extraction results of OpenChemIE attain an accuracy score of 64.3% when directly compared against the Reaxys chemical database. OpenChemIE is most suited for information extraction on organic chemistry literature, where molecules are generally depicted as planar graphs or written in text and can be consolidated into a SMILES format. Additionally, I detail preliminary research in developing a tool to retrieve full text documents that are relevant to specific protein sequences. I describe the dataset which is currently in construction, as well as experiments pointing at the promise of this approach.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interpreting and Editing Memory in Large Transformer Language Models</title>
<link href="https://hdl.handle.net/1721.1/156794" rel="alternate"/>
<author>
<name>Meng, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/156794</id>
<updated>2024-09-17T04:02:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Interpreting and Editing Memory in Large Transformer Language Models
Meng, Kevin
This thesis investigates the mechanisms of factual recall in large language models. We first apply causal interventions to identify neuron activations that are decisive in a model’s factual predictions; surprisingly, we find that factual recall corresponds to a sparse, localizable computation in the MLP weights of the GPT models we study. Harnessing this insight, we then develop methods for efficiently and surgically inserting up to 10,000 new memories into a transformer; these methods perform well in terms of both generalization and specificity. We conclude with some directions for future work.
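The causal-intervention idea can be sketched on a toy network (PyTorch): save a hidden activation from a clean run, patch it into a corrupted run, and measure how much of the clean output is restored. The actual analysis does this over MLP activations in GPT models; this two-layer MLP is only a stand-in:

import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))
clean_x, corrupt_x = torch.randn(1, 8), torch.randn(1, 8)

saved = {}
h = model[1].register_forward_hook(lambda m, i, o: saved.update(h1=o.detach()))
clean_out = model(clean_x)
h.remove()

h = model[1].register_forward_hook(lambda m, i, o: saved["h1"])  # patch it in
patched_out = model(corrupt_x)
h.remove()
print((patched_out - clean_out).abs().mean())  # ~0: the patched state restores the output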
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analog Underwater Backscatter: Networked Underwater Sensing at Microwatt Power Levels</title>
<link href="https://hdl.handle.net/1721.1/156793" rel="alternate"/>
<author>
<name>Patnaik, Ritik</name>
</author>
<id>https://hdl.handle.net/1721.1/156793</id>
<updated>2024-09-17T03:32:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Analog Underwater Backscatter: Networked Underwater Sensing at Microwatt Power Levels
Patnaik, Ritik
We present Analog Underwater Backscatter (AUB), the first technology for microwatt-level underwater wireless sensor networks. AUB departs from past underwater backscatter technologies in that it encodes sensor data directly into the physical layer through analog (frequency) modulation. Our design introduces multiple innovations that enable it to address challenges in practical underwater environments arising from mobility (Doppler shift) and the low-frequency carrier, which makes it vulnerable to small hardware imperfections. AUB’s design also introduces the first ultra-low-power wakeup receiver for underwater backscatter, enabling it to operate for a long time on small batteries. We built an end-to-end prototype of AUB and evaluated it in a river. Our evaluation demonstrates that AUB consumes 5.9 µW, 46× lower power than state-of-the-art past underwater backscatter systems. We also demonstrate AUB’s ability to sense two of the most important oceanographic vitals: temperature and depth.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning an Embedding for Vehicle Telematics</title>
<link href="https://hdl.handle.net/1721.1/156792" rel="alternate"/>
<author>
<name>Leonard, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/156792</id>
<updated>2024-09-17T03:59:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Learning an Embedding for Vehicle Telematics
Leonard, Matthew
Vehicular telematics involves the collection and processing of data about driving behavior; however, mining and modeling this data is difficult due to its large volume. We hypothesize that the data follow regular patterns of events that occur during drives, and that we can learn these patterns. With this intuition, we design a neural network that effectively embeds sections of accelerometer data into a lower-dimensional space, with low information loss and good embedding accuracy relative to the degree of dimensionality reduction, as well as several other desirable geometric properties for indexing and analyzing the data. We further develop an accurate summary of the distribution of each drive in this lower-dimensional space, which serves as a proxy for the occurrence of events within the drive. From this system, we develop a method of comparison between different drives that highlights whether particular events occurred in each drive. This could be used to develop a more robust and nuanced risk model, determine which events in a drive are associated with risk, and provide feedback to end users on their driving.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Max 2SAT-3, Net, Euclidea: Techniques and Results in Computational Inapproximability</title>
<link href="https://hdl.handle.net/1721.1/156791" rel="alternate"/>
<author>
<name>Luo, Victor</name>
</author>
<id>https://hdl.handle.net/1721.1/156791</id>
<updated>2024-09-17T03:13:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Max 2SAT-3, Net, Euclidea: Techniques and Results in Computational Inapproximability
Luo, Victor
This Master’s thesis investigates three diverse problem domains through the lens of computational inapproximability: Max 2SAT-3, the Net tile-rotating puzzle family, and the mobile game Euclidea. Max 2SAT-3 is a problem long known to be APX-complete, but finding a clear proof is harder than one might expect. We examine the history of Max 2SAT-3, addressing past misconceptions and clarifying where the reduction chain has been opaque, and present a novel proof of its APX-completeness. Net variants form a wide class of puzzles with lots of potential for future research. We introduce a natural optimization variant of Net and demonstrate its inapproximability, as well as consolidate existing findings and present other new results. Euclidea is a mobile game based on Euclidean straightedge-and-compass constructions. We define the game as an optimization problem and establish its APX-hardness, as well as discuss challenges in upper-bounding its complexity, relating to current knowledge gaps regarding the constructible and algebraic numbers.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling the Rust Compiler to Reason about Fork/Join Parallelism via Tapir</title>
<link href="https://hdl.handle.net/1721.1/156790" rel="alternate"/>
<author>
<name>Hilton, Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/156790</id>
<updated>2024-09-17T03:03:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enabling the Rust Compiler to Reason about Fork/Join Parallelism via Tapir
Hilton, Jay
Rust + Cilk is an extension to the Rust language incorporating Cilk’s keywords for language-level parallelism. The Rust + Cilk compiler leverages the Rust compiler’s static verification of data-race freedom and the OpenCilk parallelism platform’s strong theoretical guarantees for the performance of parallel programs. I compare Rust + Cilk to existing library-based parallelism solutions in Rust such as Rayon, as well as to C programs parallelized with OpenCilk, based on performance and ergonomics. I find that Rust + Cilk exhibits marginally worse performance than Rayon, although I expect these differences can be bridged with further work. Additionally, Rust + Cilk has ergonomic advantages for some kinds of parallel programs. I outline further research that could make Rust + Cilk a more complete and performant system, further taking advantage of the benefits language-based parallelism solutions can offer while statically verifying data-race freedom.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Control Signals for Reconstruction-based Time Series Anomaly Detection</title>
<link href="https://hdl.handle.net/1721.1/156789" rel="alternate"/>
<author>
<name>Song, Grace Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/156789</id>
<updated>2024-09-17T04:07:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling Control Signals for Reconstruction-based Time Series Anomaly Detection
Song, Grace Y.
Automated time series anomaly detection methods can provide insights while reducing the load placed on human experts in a variety of settings. Machine-generated signals, such as those produced by sensors, often contain control signals in addition to the target observation signal. These signals may provide additional insight about the normal vs. abnormal properties of the observation signal. Despite this fact, even recent anomaly detection methods using deep learning give limited consideration to the relationship between observation and control signals, often failing to handle the control signal at all. This work proposes pre-processing, modeling, and evaluation methods for multivariate, heterogeneous time series to examine how using information from the control signal can improve anomaly detection. We develop a deep learning reconstruction-based pipeline and test its performance on datasets from the NASA Soil Moisture Active Passive (SMAP) satellite and the Mars Science Laboratory (MSL) Rover, which contain heterogeneous sensing data from exploratory missions. The pipeline follows the Sintel machine learning framework and is accessible through the Meissa library, which builds on the capabilities of the open-source library Orion for end-to-end unsupervised time series anomaly detection pipelines.
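A minimal sketch of reconstruction-based scoring, with an assumed reconstruct function standing in for the pipeline’s trained network:

import numpy as np

def anomaly_flags(windows, reconstruct, k=3.0):
    recon = reconstruct(windows)
    errors = np.mean((windows - recon) ** 2, axis=(1, 2))  # per-window error
    threshold = errors.mean() + k * errors.std()
    return np.greater(errors, threshold)  # True where a window looks anomalous

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64, 2))  # 100 windows, 64 steps, observation + control channel
flags = anomaly_flags(X, lambda w: w + rng.normal(scale=0.1, size=w.shape))
print(flags.sum(), "windows flagged")  # toy "reconstruction" = input plus noise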
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Mechanistic Interpretability for Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/156787" rel="alternate"/>
<author>
<name>Liao, Isaac C.</name>
</author>
<id>https://hdl.handle.net/1721.1/156787</id>
<updated>2024-09-17T03:36:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automated Mechanistic Interpretability for Neural Networks
Liao, Isaac C.
Mechanistic interpretability research aims to deconstruct the underlying algorithms that neural networks use to perform computations, such that we can modify their components, causing them to change behavior in predictable and positive ways. This thesis details three novel methods for automating the interpretation process for neural networks that are too large to interpret manually. Firstly, we detect inherently multidimensional representations of data; we discover that large language models use circular representations to perform modular addition tasks. Secondly, we introduce methods to penalize complexity in neural circuitry; we discover the automatic emergence of interpretable properties such as sparsity, weight tying, and circuit duplication. Last but not least, we apply neural network symmetries to put networks into a simplified normal form for conversion into human-readable Python; we introduce a program synthesis benchmark alongside this method and successfully convert 32 of its 62 cases.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reed-Relay Switched Tuning Circuit for Stretchable RF Coils in Low Field, Portable MRI</title>
<link href="https://hdl.handle.net/1721.1/156786" rel="alternate"/>
<author>
<name>Nwigwe, Alexandra C.</name>
</author>
<id>https://hdl.handle.net/1721.1/156786</id>
<updated>2024-09-17T03:08:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Reed-Relay Switched Tuning Circuit for Stretchable RF Coils in Low Field, Portable MRI
Nwigwe, Alexandra C.
While MRI (Magnetic Resonance Imaging) technology allows us to get detailed images of the inside of a subject’s body, it most commonly requires very expensive, large-scale machinery, which limits the scenarios in which it can be used. These costly MRI systems are usually high-field MRI, which operates at magnetic fields of 1.5T and above and produces images with short scan times and high resolution. Yet because of the accessibility and affordability drawbacks that high-field MRI poses, there has been an effort to devote more research to portable low-field MRI. Low-field MRI opens doors for low-cost and point-of-care imaging, but it unfortunately comes at the expense of decreased image quality and greater noise interference. An RF head coil that molds to the user’s head would be able to better excite and receive signal from the subject and counteract some of the inherent disadvantages of low-field MRI. My thesis pursues the idea of using flexible, subject-adaptable RF head coils in conjunction with an autotuning circuit as a way to extract better signal from a subject at low magnetic fields.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Algorithms for Mixtures of Linear Dynamical Systems: A Practical Approach</title>
<link href="https://hdl.handle.net/1721.1/156785" rel="alternate"/>
<author>
<name>Kumar, Nitin A.</name>
</author>
<id>https://hdl.handle.net/1721.1/156785</id>
<updated>2024-09-17T03:44:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Learning Algorithms for Mixtures of Linear Dynamical Systems: A Practical Approach
Kumar, Nitin A.
In this work, we give the first implementation of an algorithm to learn a mixture of linear dynamical systems (LDS’s), and an analysis of algorithms to learn a single linear dynamical system. Following the work of Bakshi et al. ([1]), we implement a recent polynomial-time algorithm based on a tensor decomposition with learning guarantees in a general setting, with some simplifications and minor optimizations. Our largest contribution is giving the first expectation-maximization (E-M) algorithm for learning a mixture of LDS’s, and an experimental evaluation against the Tensor Decomposition algorithm. We find that the E-M algorithm performs extremely well, and much better than the Tensor Decomposition algorithm. We analyze performance of these and other algorithms to learn both a single LDS and a mixture of LDS’s under various conditions (such as how much noise is present) and algorithm settings.
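The E-M structure for a mixture of LDS’s can be outlined as follows, with fit_lds and a Kalman-filter-based loglik as assumed helpers rather than the thesis’s actual code:

import numpy as np

def em_mixture(trajs, K, fit_lds, loglik, iters=20):
    rng = np.random.default_rng(0)
    resp = rng.dirichlet(np.ones(K), size=len(trajs))  # soft assignments
    for _ in range(iters):
        pi = resp.mean(axis=0)                         # mixture weights
        # M-step: refit each LDS on trajectories weighted by responsibility
        models = [fit_lds(trajs, resp[:, k]) for k in range(K)]
        # E-step: posterior over models from per-trajectory log-likelihoods
        ll = np.array([[loglik(m, t) for m in models] for t in trajs])
        ll += np.log(pi)
        ll -= ll.max(axis=1, keepdims=True)            # numerical stability
        resp = np.exp(ll)
        resp /= resp.sum(axis=1, keepdims=True)
    return models, resp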
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Guiding Nonconvex Trajectory Optimization with Hierarchical Graphs of Convex Sets</title>
<link href="https://hdl.handle.net/1721.1/156783" rel="alternate"/>
<author>
<name>von Wrangel, David</name>
</author>
<id>https://hdl.handle.net/1721.1/156783</id>
<updated>2024-09-17T03:11:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Guiding Nonconvex Trajectory Optimization with Hierarchical Graphs of Convex Sets
von Wrangel, David
Collision-free motion planning with trajectory optimization is inherently nonconvex. Some of this nonconvexity is fundamental: the robot might need to make a discrete decision to go left around an obstacle or right around an obstacle. Some of this nonconvexity is potentially more benign: we might want to penalize high-order derivatives of our continuous trajectories in order to encourage smoothness. Recently, Graphs of Convex Sets (GCS) have been applied to trajectory optimization, addressing the fundamental nonconvexity with efficient online optimization over a "roadmap" represented by an approximate convex decomposition of the configuration space. In this thesis, we explore some of the most useful nonconvex costs and constraints and introduce a novel hierarchical GCS structure, composing subgraphs that represent different task phases or alternative paths and enabling efficient planning for complex tasks involving both discrete decision-making and continuous trajectory generation. We investigate the suitability of combining convex "global" optimization using GCS with nonconvex trajectory optimization for rounding the local solutions. Through extensive experiments on diverse robotic systems, we demonstrate that this combination can effectively guide a small number of nonconvex optimizations, ultimately finding high-quality solutions to challenging nonconvex motion planning problems.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Test Suite for Saliency Method Evaluation Metrics</title>
<link href="https://hdl.handle.net/1721.1/156781" rel="alternate"/>
<author>
<name>Kaspar, Moulinrouge</name>
</author>
<id>https://hdl.handle.net/1721.1/156781</id>
<updated>2024-09-17T03:34:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Test Suite for Saliency Method Evaluation Metrics
Kaspar, Moulinrouge
This thesis introduces a structured test suite designed to evaluate the input sensitivity of saliency methods, a crucial factor when interpreting machine learning models, particularly in high-stakes environments. Saliency methods, by highlighting essential input features influencing model decisions, serve as a key tool for understanding model behavior. Yet, their effectiveness can vary, often presenting challenges in selection due to their inconsistent reliability and the potential for unfaithful representations of model dynamics. To address these challenges, our work enhances the process of selecting and applying saliency methods by rigorously testing their response to input perturbations, from adversarial modifications to minor variations. This test suite specifically assesses aspects such as completeness, deletion, faithfulness, and robustness across various data types—including textual and image data—and model architectures like convolutional and transformer models. We demonstrate the utility of the test suite by using it to compare how different saliency methods, as well as the same method across different architectures, behave under varied conditions. Our findings reveal significant variations in how these methods respond to changes in input data, providing insights that guide users in choosing more reliable techniques for interpreting model decisions. This facilitates a deeper understanding of which methods are best suited for specific tasks and promotes the selection of techniques that enhance the transparency and accountability of AI systems. Ultimately, this thesis contributes to advancing ethical compliance and fostering trust in automated decision-making processes by providing a comprehensive evaluation platform for saliency methods.
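For example, the deletion test in the suite can be sketched as follows; model_confidence is an assumed scoring function, and the toy image and saliency map are random:

import numpy as np

def deletion_curve(image, saliency, model_confidence, steps=10):
    order = np.argsort(saliency.ravel())[::-1]  # most salient pixels first
    x = image.copy().ravel()
    curve = [model_confidence(x.reshape(image.shape))]
    for chunk in np.array_split(order, steps):
        x[chunk] = 0.0                          # delete the next pixel chunk
        curve.append(model_confidence(x.reshape(image.shape)))
    return np.trapz(curve) / len(curve)         # lower area = more faithful map

rng = np.random.default_rng(0)
img, sal = rng.random((32, 32)), rng.random((32, 32))
print(deletion_curve(img, sal, lambda im: float(im.mean())))  # toy "model"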
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extensible Real-Time Sensor and Test Interface for a System-on-Chip</title>
<link href="https://hdl.handle.net/1721.1/156777" rel="alternate"/>
<author>
<name>Studer, Alexandre S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156777</id>
<updated>2024-09-17T03:32:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Extensible Real-Time Sensor and Test Interface for a System-on-Chip
Studer, Alexandre S.
This thesis describes the development of a printed circuit board (PCB) that enables connecting external sensors and a host computer to a custom Application-Specific Integrated Circuit (ASIC). The ASIC, previously developed by the Low-Energy Autonomy and Navigation research group, is designed for autonomous navigation on microrobots, such as drones. To enable the real-time data processing required for this application, the ASIC includes a custom Sensor-and-Debug IP block that provides Serial Peripheral Interface (SPI) and First-In/First-Out (FIFO) buses. The custom PCB includes a multiplexer circuit that allows multiple sensors to be connected to the ASIC's single SPI bus. It also includes a USB-to-FIFO interface, developed around the RP2040 microcontroller, which enables connecting a host computer to the ASIC's FIFO bus. Ultimately, the PCB simplifies the connection of external sensors, facilitates debugging of the ASIC, and can be miniaturized for mounting on an autonomous microrobot, such as a drone, in the future.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing Control-oriented Meta-learning on Hardware</title>
<link href="https://hdl.handle.net/1721.1/156775" rel="alternate"/>
<author>
<name>Sohn, Joshua C.</name>
</author>
<id>https://hdl.handle.net/1721.1/156775</id>
<updated>2024-09-17T03:33:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Implementing Control-oriented Meta-learning on Hardware
Sohn, Joshua C.
Unpredictable weather conditions pose a daunting challenge for the robust control of unmanned aerial vehicles, also known as drones. The control-oriented meta-learning algorithm aims to solve this problem by learning a controller that can adapt to dynamic environments. This algorithm has already been derived and simulated for a two-dimensional model. This project explores the implementation of the control-oriented meta-learning algorithm on a hardware platform. After extending the algorithm to a three-dimensional model, it was tested in a physics-based simulator and deployed on a hexarotor in the real world. Both in simulation and in real life, the learned controller outperformed a traditional controller in the presence of wind.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Multistage Compilation of Machine Learning Computation Graphs</title>
<link href="https://hdl.handle.net/1721.1/156774" rel="alternate"/>
<author>
<name>Dighe, Kaustubh</name>
</author>
<id>https://hdl.handle.net/1721.1/156774</id>
<updated>2024-09-17T04:07:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Fast Multistage Compilation of Machine Learning&#13;
Computation Graphs
Dighe, Kaustubh
Machine learning applications increasingly demand speed and more computational power. Many applications, like language models, have become so large that they are run in parallel on distributed systems. However, getting into the details of optimally scheduling, or even just running, machine learning models on distributed systems can be a distraction for researchers ideating models. Hence, abstractions have been developed to facilitate running machine learning models in parallel on distributed systems. We present a compiler for the StreamIt language, a language made for abstract signal processing and multicore programming. We use that abstraction as a way to distribute the computation of machine learning models programmed in PyTorch.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Soccer Last Touch and Automatic Event Detection with Skeletal Tracking Data</title>
<link href="https://hdl.handle.net/1721.1/156773" rel="alternate"/>
<author>
<name>Bian, George C.</name>
</author>
<id>https://hdl.handle.net/1721.1/156773</id>
<updated>2024-09-17T03:29:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Soccer Last Touch and Automatic Event Detection with Skeletal Tracking Data
Bian, George C.
With the rapid growth of soccer data collection technology worldwide, there is an increasing need for new, efficient methods to analyze match data. Such methods would help soccer stakeholders more easily and efficiently scrutinize game events for strategy improvement and individual player evaluation. Currently, most existing event data is annotated by hand, which is an extremely time-consuming task. Recent works in automatic event generation leverage decision tree algorithms to partially identify game events from player center-of-mass and ball tracking data, but have proven limited in accuracy in practice. New computer vision models have enabled the extraction of player joint data from video broadcasts, providing a newer, richer dataset for automatic event detection. This thesis seeks to validate brand-new skeletal joint data, determine the last player to touch the ball at any timestamp during a match, and build a decision tree algorithm for classifying duel-like events and goalkeeping outcomes with the additional context of player joint locations.
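A toy sketch of the kind of decision tree classifier involved (scikit-learn); the features and touch rule are illustrative, not the thesis’s:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-frame features for one player: nearest-joint-to-ball
# distance, joint speed, and change in ball direction after the frame.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
y = np.less(X[:, 0], 0.15).astype(int)  # toy label: "touch" when a joint is close

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(clf.predict(X[:5]))               # frame-level touch predictions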
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Patient Outcomes in the EPOCH Clinical Trial</title>
<link href="https://hdl.handle.net/1721.1/156772" rel="alternate"/>
<author>
<name>Parsan, Nithin</name>
</author>
<id>https://hdl.handle.net/1721.1/156772</id>
<updated>2024-09-17T03:27:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Predicting Patient Outcomes in the EPOCH Clinical Trial
Parsan, Nithin
Metastatic colorectal cancer (mCRC) has a poor prognosis and high mortality rate, but innovative therapies such as transarterial radioembolization (TARE) can improve patient outcomes. The EPOCH clinical trial demonstrated that TARE improved hepatic progression-free survival (hPFS) in patients with colorectal liver metastases, and computational methods to analyze the multimodal data collected can identify patient subgroups and predict treatment response for personalized medicine. First, a comprehensive data preprocessing pipeline curated a high-quality dataset of liver-region Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) scans paired with patient biomarkers. Multi-Dimensional Subset Scanning (MDSS) identified a group of patients with shared biomarkers that exhibited poor response to TARE, and Cox Proportional Hazards (CoxPH) modeling revealed hazard ratios for biomarkers aligning with clinical expectations, albeit with a limited C-index. Augmenting CoxPH modeling with embeddings from a deep learning foundation model pre-trained on liver CT and MRI scans and fine-tuned to predict treatment response resulted in a substantially higher C-index. Interestingly, models fine-tuned to predict one clinical feature had improved predictive accuracy for other features they were not specifically trained on, and Class Activation Mapping (CAM) visualizations showed that salient embedding dimensions focus on the liver region, providing interpretability. The ensemble of computational techniques applied to multimodal clinical trial data successfully identified patient subgroups, extracted predictive biomarkers, and enhanced the accuracy of treatment response predictions, contributing to the development of more effective, personalized treatment strategies for mCRC patients undergoing TARE.
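
As a hedged sketch of the survival-modeling step (a Cox model over clinical and embedding features), the following uses the lifelines library on synthetic stand-in data; the column names and effect sizes are invented, not taken from the trial:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(3)
    n = 300
    df = pd.DataFrame({
        "biomarker": rng.normal(size=n),     # stand-in clinical covariate
        "embed_0": rng.normal(size=n),       # stand-in imaging-embedding dims
        "embed_1": rng.normal(size=n),
    })
    risk = np.exp(0.8 * df["biomarker"] - 0.5 * df["embed_0"])
    df["duration"] = rng.exponential(1.0 / risk)   # hPFS-like time-to-event
    df["event"] = 1                                # progression observed

    cph = CoxPHFitter()
    cph.fit(df, duration_col="duration", event_col="event")
    print(cph.concordance_index_)   # the C-index discussed in the abstract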
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extended Evaluation: Unraveling Medicaid Patient Trajectories and Improving Intervention Candidate Identification</title>
<link href="https://hdl.handle.net/1721.1/156771" rel="alternate"/>
<author>
<name>Joglekar, Natasha</name>
</author>
<id>https://hdl.handle.net/1721.1/156771</id>
<updated>2024-09-17T03:22:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Extended Evaluation: Unraveling Medicaid Patient Trajectories and Improving Intervention Candidate Identification
Joglekar, Natasha
We analyze the Camden Coalition’s Health Information Exchange (HIE) data to gain deeper insights into the trajectories of Medicaid patients through the health system. Recognizing the complex challenges of social determinants of health, this study seeks to find patterns and opportunities within the Medicaid population’s healthcare journeys. Through time series analysis, we examine the utilization trajectories of Medicaid patients over time. Combining this insight with predictive modeling, we then develop a methodology for identifying persistent high-cost healthcare utilization and consider how this information may change program implementation.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Methods for Learning Genetic Dependencies</title>
<link href="https://hdl.handle.net/1721.1/156769" rel="alternate"/>
<author>
<name>Cai, Cathy</name>
</author>
<id>https://hdl.handle.net/1721.1/156769</id>
<updated>2024-09-17T03:05:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Machine Learning Methods for Learning Genetic Dependencies
Cai, Cathy
Synthetic lethality refers to a genetic interaction where the simultaneous perturbation of gene pairs leads to cell death. Synthetically lethal gene pairs (SL pairs) provide a potential avenue for selectively targeting cancer cells based on genetic vulnerabilities. The rise of large-scale gene perturbation screens such as the Cancer Dependency Map (DepMap) offers the opportunity to identify SL pairs automatically using machine learning. We build on a recently developed class of feature learning kernel machines known as Recursive Feature Machines (RFMs) to develop a pipeline for identifying SL pairs based on CRISPR viability data from DepMap. In particular, we first train RFMs to predict viability scores for a given CRISPR gene knockout from cell line embeddings consisting of gene expression and mutation features. After training, RFMs use a statistical operator known as the average gradient outer product to provide weights indicating the importance of each feature in predicting cellular viability. We subsequently apply correlation-based filters to re-weight RFM feature importances and identify those features that are most indicative of low cellular viability. Our resulting pipeline is computationally efficient, taking under 3 minutes to analyze all 17,453 knockouts from DepMap for candidate SL pairs. We show that our pipeline more accurately recovers experimentally verified SL pairs than prior approaches. Moreover, our pipeline finds new candidate SL pairs, thereby opening novel avenues for identifying genetic vulnerabilities in cancer.
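
A minimal sketch of the average gradient outer product idea, using a finite-difference surrogate and a toy predictor of our own (not the thesis pipeline):

    import numpy as np

    def average_gradient_outer_product(predict, X, eps=1e-4):
        # Mean over samples of grad f(x) grad f(x)^T, estimated with
        # central finite differences on a black-box predictor.
        n, d = X.shape
        agop = np.zeros((d, d))
        for x in X:
            g = np.zeros(d)
            for j in range(d):
                e = np.zeros(d)
                e[j] = eps
                g[j] = (predict(x + e) - predict(x - e)) / (2 * eps)
            agop += np.outer(g, g)
        return agop / n

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    f = lambda x: 3.0 * x[0] - 2.0 * x[3]       # only features 0 and 3 matter
    importances = np.diag(average_gradient_outer_product(f, X))
    print(importances.round(2))                 # large weights on features 0 and 3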
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aquaculture Basket Detection and Tracking for Autonomous Surface Vehicles</title>
<link href="https://hdl.handle.net/1721.1/156768" rel="alternate"/>
<author>
<name>Gillespie, Fiona J.</name>
</author>
<id>https://hdl.handle.net/1721.1/156768</id>
<updated>2024-09-17T04:05:00Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Aquaculture Basket Detection and Tracking for Autonomous Surface Vehicles
Gillespie, Fiona J.
With the global population on the rise, there is an increased demand for seafood, underscoring the crucial role of aquaculture, the practice of farming aquatic organisms [1]. In the realm of aquaculture, oyster farming is relatively low maintenance, except for the challenge of manually flipping heavy oyster-laden bags. To address this issue, MIT Sea Grant introduced the Oystermaran, an autonomous catamaran specifically designed for this task. This thesis presents contributions to the electronics, controls, and perception systems of the Oystermaran project. In particular, it presents an oyster basket detection and tracking method using the object detector You Only Look Once (YOLO) [2]. In addition, the electronics system has been updated and new manual controllers were created to enable the use of a new flipping mechanism developed this year. This system is evaluated on data from field testing at Ward Aquafarms, a Cape Cod-based oyster farming business. The results show that oyster baskets can be robustly detected in new environments, despite environmental factors. This marks a significant step towards real-time viability for autonomous oyster farming.
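
For flavor, a minimal detection-and-tracking loop with the off-the-shelf ultralytics YOLO API might look as follows; the checkpoint and video path are placeholders, not the Oystermaran’s trained basket weights:

    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")        # placeholder for basket-trained weights
    for frame in model.track("farm_video.mp4", stream=True):
        for box in frame.boxes:       # one box per tracked basket candidate
            print(int(box.cls), float(box.conf), box.xyxy.tolist())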
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Self-Supervised Audio-Visual Speech Diarization and Recognition</title>
<link href="https://hdl.handle.net/1721.1/156767" rel="alternate"/>
<author>
<name>Wongprommoon, Arun</name>
</author>
<id>https://hdl.handle.net/1721.1/156767</id>
<updated>2024-09-17T03:01:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Self-Supervised Audio-Visual Speech Diarization and Recognition
Wongprommoon, Arun
Many real-world use cases of automatic speech recognition (ASR) contain video and multiple speakers, such as TV broadcasts and video conferences. However, state-of-the-art end-to-end multimodal ASR models generally do not support diarization. This thesis extends one such model, AV-HuBERT, to address the diarization problem while maintaining word recognition accuracy. The proposed Audio-Visual Cocktail (AVC) HuBERT model extends video input dimensions, lengthens feature size, and adds projection layers to split outputs into corresponding speakers. A complementary synthesized dataset is constructed by mixing audio and video samples from LRS3 at varying overlap thresholds, resulting in the LRS3Mix dataset. This is used to train the model, whose weights are transferred from AV-HuBERT. Computing several word error rate (WER) metrics to measure recognition and diarization performance of several versions of AVC-HuBERT models demonstrates that the method improves diarization, albeit with a small tradeoff in word recognition. Augmenting the synthesized mixed dataset with the original clean single-speaker dataset boosts recognition ability, and the same effect can be observed when the dataset size increases.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using heterogeneous Graph Neural Networks (hGNN) to predict cell-cell communication</title>
<link href="https://hdl.handle.net/1721.1/156766" rel="alternate"/>
<author>
<name>Yan, Binwei</name>
</author>
<id>https://hdl.handle.net/1721.1/156766</id>
<updated>2024-09-17T03:02:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Using heterogeneous Graph Neural Networks(hGNN) to predict cell-cell communication
Yan, Binwei
This thesis investigates diverse computational methodologies for modeling cellular interactions using single-cell RNA sequencing (scRNA-seq) data. We evaluate the performance of Graph Neural Networks (GNNs) both with and without gene-gene edges, Contrastive Learning, and Variational Autoencoders (VAEs) across multiple datasets. Our study compares these methods and establishes benchmarks for assessing their effectiveness beyond traditional case studies. By integrating extensive signaling pathway data, we aim to unveil complex cell-cell communication patterns and regulatory mechanisms that conventional scRNA-seq analysis methods might overlook. Our approach emphasizes the use of spatial data as a crucial indicator, facilitated by the advanced capabilities of heterogeneous GNNs to model physical proximity. We found that our analysis of the functioning genes aligns with previous findings, supporting our model’s effectiveness as a potential method for further analyzing communication mechanisms.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling Privacy Preserving Payments</title>
<link href="https://hdl.handle.net/1721.1/156765" rel="alternate"/>
<author>
<name>Ali, Ayesha</name>
</author>
<id>https://hdl.handle.net/1721.1/156765</id>
<updated>2024-09-17T03:37:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Scaling Privacy Perserving Payments
Ali, Ayesha
We explore privacy-preserving payments in a centralized setting, such as CBDCs. Specifically, we focus on two classes of designs that hide the transaction graph: Chaumian e-cash and Merkle tree-based systems (e.g., Tornado Cash), which differ both in their security assumptions and scalability. In our work we highlight scalability limitations in Merkle tree-based privacy systems that would be encountered in a network as large as a CBDC, and propose a sharded Merkle tree design to improve scalability while maintaining strong privacy. However, as we analyze, conventional sharding methods pose privacy risks, prompting the introduction of a “tree of sharded trees” design that preserves privacy at a modest increase in latency. We describe, implement, and evaluate all three designs, and find that unmodified Tornado Cash indeed suffers from resource-contention-induced scalability bottlenecks. In contrast, our new design achieves throughput within an order of magnitude of e-cash, despite providing auditability.
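
As a rough illustration of the underlying data structure (not the paper’s actual design), a global Merkle root over per-shard Merkle roots can be sketched as:

    import hashlib

    def h(data):
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        level = [h(x) for x in leaves]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate last node on odd levels
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    # Root over per-shard roots: depositing into one shard touches only that
    # shard's tree, which is the contention-reducing intuition.
    shards = [[b"note%d" % i for i in range(j, j + 4)] for j in range(0, 16, 4)]
    global_root = merkle_root([merkle_root(s) for s in shards])
    print(global_root.hex())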
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SAGE: Segmenting and Grouping Data Effectively using Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/156764" rel="alternate"/>
<author>
<name>Pedraza Pineros, Isabella</name>
</author>
<id>https://hdl.handle.net/1721.1/156764</id>
<updated>2024-09-17T04:02:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">SAGE: Segmenting and Grouping Data Effectively using Large Language Models
Pedraza Pineros, Isabella
Grouping is a technique used to organize data into manageable pieces, reducing cognitive load and enabling users to focus on discovering higher-level insights and generating new questions. However, creating groups remains a challenge, often requiring users to have prior domain knowledge or an understanding of the underlying structure of the data. We introduce SAGE, a novel technique that leverages the knowledge base and pattern recognition abilities of large language models (LLMs) to segment and group data with domain awareness. We instantiate our technique through two structures: bins and highlights; bins are contiguous, non-overlapping ranges that segment a single field into groups; highlights are multi-field intersections of ranges that surface broader groups in the data. We integrate these structures into Olli, an open-source tool that converts data visualizations into accessible, keyboard-navigable textual formats, to facilitate a study with 15 blind and low-vision (BLV) participants, recognizing them as experts in assessing agency. Through this study, we evaluate how SAGE impacts a user’s interpretation of data and visualizations, and find that our technique provides a rich contextual framework for users to independently scaffold their initial sensemaking process.
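
To make the two structures concrete, a small hypothetical sketch in pandas (our own names and values, not SAGE’s internals):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"age": [3, 17, 25, 41, 68, 90],
                       "income": [10_000, 5_000, 40_000, 80_000, 20_000, 15_000]})

    # A bin: contiguous, non-overlapping ranges segmenting one field.
    bins = pd.cut(df["age"], bins=[0, 18, 65, 120],
                  labels=["minor", "adult", "senior"])   # e.g., LLM-proposed cuts
    print(bins.tolist())

    # A highlight: a multi-field intersection of ranges.
    highlight = {"age": (65, 120), "income": (0, 30_000)}
    mask = pd.Series(True, index=df.index)
    for field, (lo, hi) in highlight.items():
        mask = np.logical_and(mask, df[field].between(lo, hi))
    print(df[mask])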
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the Genetic Basis of Sex Differences in Human Height</title>
<link href="https://hdl.handle.net/1721.1/156763" rel="alternate"/>
<author>
<name>Aluru, Amulya S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156763</id>
<updated>2024-09-17T03:51:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Understanding the Genetic Basis of Sex Differences in Human Height
Aluru, Amulya S.
Sex differences are prevalent across health, development and disease. Driven by the sex chromosomes, the largest source of genetic variation in the human population, trait differences between males and females can have important implications in treatment response and disease diagnosis. Genes along the X and Y chromosomes encode broadly-expressed regulators of the transcriptome and epigenome that have diverged in function and expression. These sex chromosome-linked gene pairs enforce differences in regulatory landscapes and autosomal gene expression patterns between biological males (XY) and females (XX), which can have far-reaching consequences. Despite this, the field of population genetics has rarely considered the special role of sex-linked loci and sex-biased genetic effectors in establishing sex-dependent trait variation. In this thesis, I integrate existing tools in statistical genetics for the repurposed goal of understanding the genetic basis of sex differences in complex traits. Through combining genome-wide association study (GWAS) data with gene expression panels and sex-biased gene expression information, previous work in the lab has demonstrated that genes with conserved sex bias contribute to the establishment of sex bias in height. First, to understand the relationship between GWAS power and sex differences, we compared the performance of two differently powered GWAS in their ability to explain sex bias in height, finding a modest increase in genetic insight by the larger GWAS. Second, we assessed functional elements across the genome that may differentially contribute to height between males and females to propose alternative mechanisms alongside gene expression that may establish sex differences in height. Altogether, the work presented in this thesis demonstrates the potential of sex differences research to utilize well-powered studies of sex-biased regulators and variant-trait associations to better understand the genetic mechanisms, including but not limited to gene expression, that cultivate and maintain sex differences in complex traits.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Monte Carlo Tree Search Applications to Neural Theorem Proving</title>
<link href="https://hdl.handle.net/1721.1/156761" rel="alternate"/>
<author>
<name>LaBelle, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/156761</id>
<updated>2024-09-17T03:02:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Monte Carlo Tree Search Applications to Neural Theorem Proving
LaBelle, Ethan
A common problem of LLM inference is hallucination, where models generate false information. Another such problem is the tradeoff between model size and computational cost: larger models use more VRAM, in addition to requiring longer training and inference times. This work explores solutions to these problems, namely search and verification, following Yang et al.’s recent contribution, LeanDojo: Theorem Proving with Retrieval-Augmented Language Models. In their work, Yang et al. introduce LeanDojo, an environment for programmatic interaction with the Lean theorem proving language, alongside ReProver, a ByT5-Small transformer-based ATP fine-tuned using the open-source Lean mathlib. The smaller model requires fewer resources, enabling faster inference, which, when combined with search, improves the effective performance of the model. We use the language model to generate a space of partial proof trees in Lean. As the core model can be interchanged with a larger or more performant one, this work focuses on search algorithms for finding novel proofs given the same computational budget. Three classes of algorithms are explored: best-first search, random walk, and Monte Carlo Tree Search. Search algorithms are evaluated on the random-split test dataset of the LeanDojo Benchmark. Finally, we present common failure modes of various methods, search results of algorithm variants, and novel proofs discovered relative to the baseline. Across our trials, we show that the search space defined by ReProver’s tactic generator contains proofs for approximately 55.0% of theorems in the LeanDojo Benchmark random test split. In Yang et al.’s evaluations, ReProver achieves a 51.2% Pass@1 solve rate on this benchmark.
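
As a hedged sketch of the simplest of the three search classes, a generic best-first loop over proof states might look like the following; the callbacks stand in for LeanDojo interaction and ReProver’s tactic generator, and none of this is the thesis code:

    import heapq
    import itertools

    def best_first_search(root, expand, score, is_proved, budget=1000):
        # expand(state) yields (tactic, next_state); score(state) returns a
        # float (higher is more promising); states must be hashable.
        counter = itertools.count()            # tie-breaker for the heap
        heap = [(-score(root), next(counter), root, [])]
        seen = set()
        while heap and budget > 0:
            _, _, state, proof = heapq.heappop(heap)
            if is_proved(state):
                return proof                   # the tactic sequence found
            if state in seen:
                continue
            seen.add(state)
            budget -= 1
            for tactic, nxt in expand(state):
                heapq.heappush(heap,
                               (-score(nxt), next(counter), nxt, proof + [tactic]))
        return None

    # Toy usage: "prove" 0 from 7 by subtracting 1, 2, or 3.
    proof = best_first_search(
        7,
        expand=lambda s: [("sub%d" % k, s - k) for k in (1, 2, 3) if s - k >= 0],
        score=lambda s: -s,                    # prefer states closer to 0
        is_proved=lambda s: s == 0)
    print(proof)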
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Changes in Individual Wellbeing Scores: Mixed Effects Models using Sleep Data from Wearables</title>
<link href="https://hdl.handle.net/1721.1/156760" rel="alternate"/>
<author>
<name>Choi, Shelley</name>
</author>
<id>https://hdl.handle.net/1721.1/156760</id>
<updated>2024-09-17T03:41:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Predicting Changes in Individual Wellbeing Scores: Mixed Effects Models using Sleep Data from Wearables
Choi, Shelley
Sleep plays a major role in regulating human cognitive function, performance, mood, and well-being. Despite its significance, the intricate relationship between various sleep components (such as duration, quality, and regularity) and wellbeing outcomes remains inadequately explored. The nature of sleep data poses challenges in capturing and interpreting temporal patterns, but the growing popularity of wearable devices capable of collecting vast multi-modal data presents a promising avenue to bridge this gap. In this thesis, the aim is two-fold: first, identify the impact of different combinations and transformations of sleep regularity (Sleep Regularity Index, SRI; Composite Phase Deviation, CPD; Interdaily Stability, IS) and duration calculated from wearable devices across varying time frames on self-reported morning wellbeing scores (alertness, happiness, energy, health, calmness); and second, evaluate both linear and nonlinear associations between different sleep metrics and wellbeing. To address the high user variability arising from the personalized nature of sleep and the subjective nature of wellbeing assessments, we employ mixed effects modeling techniques where each individual is treated as their own cluster, including Linear Mixed Effects models (LMM) and Mixed Effects Random Forest (MERF), where the latter is benchmarked against classic machine learning models. The LMM results were most statistically significant for independent regularity (SRI, IS), combined regularity (SRI and IS), total sleep time as duration (TST), and combined regularity and total sleep time (SRI and TST, IS and TST) for alertness and energy over 2-4 nights. MERF outperformed the other models in Mean Absolute Error (MAE) for all time-split scenarios. This research further emphasizes the importance of addressing data leakage due to the time sensitivity of sleep data and the calculation of regularity spanning multiple days. By establishing correlations between sleep parameters and wellbeing indicators, this study hopes to provide deeper insights into fluctuations in wellbeing and inform the development of wearables that monitor sleep patterns.
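
A minimal sketch of the random-intercept LMM idea with statsmodels, on synthetic stand-in data (variable names and effect sizes are invented; the thesis uses wearable-derived metrics):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    rows = []
    for user in range(30):
        base = rng.normal(60, 8)               # user-specific alertness level
        for day in range(28):
            sri = rng.uniform(40, 95)          # stand-in Sleep Regularity Index
            tst = rng.normal(7.0, 1.0)         # stand-in total sleep time (hours)
            alert = base + 0.2 * sri + 3.0 * tst + rng.normal(0, 5)
            rows.append({"user": user, "sri": sri, "tst": tst, "alertness": alert})
    df = pd.DataFrame(rows)

    # Random intercept per user: each individual is its own cluster.
    model = smf.mixedlm("alertness ~ sri + tst", df, groups=df["user"])
    print(model.fit().summary())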
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Code Summarization and Program Synthesis with Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/156757" rel="alternate"/>
<author>
<name>Lam, Kelly</name>
</author>
<id>https://hdl.handle.net/1721.1/156757</id>
<updated>2024-09-17T03:59:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Code Summarization and Program Synthesis with Large Language Models
Lam, Kelly
Automatic source code summarization and generation are naturally complementary operations because they bridge the gap between natural-language text and executable programs, allowing users to flow between the two modes. Even though large language models have become increasingly popular, it is unclear how effective they are at code summarization and generation, especially as we examine longer source code segments or more complicated prompts for generation. In this thesis, we formalize the automatic code summarization and generation problems, identify cases where large language models can perform poorly, propose techniques to correct poor initial results, and evaluate our results against appropriate baselines using suitable evaluation metrics.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Spatial Transcriptomics Data for Cross-Species Molecular Region Comparison</title>
<link href="https://hdl.handle.net/1721.1/156756" rel="alternate"/>
<author>
<name>Li, Bridget</name>
</author>
<id>https://hdl.handle.net/1721.1/156756</id>
<updated>2024-09-17T03:47:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Integrating Spatial Transcriptomics Data for Cross-Species Molecular Region Comparison
Li, Bridget
Comparative analysis of brain patterns across species can advance understanding of different biological processes and functions. Spatially resolved transcriptomics (SRT) technologies present the ability to measure gene expression of single cells within tissues, enabling the detection of unique spatial molecular patterns in the brain. Several computational methods that rely on cellular neighborhood information have been developed for characterizing molecular tissue regions in SRT data. Here, we show that spatial integration (SPIN) improves the performance of existing methods and enables the clustering of molecular tissue regions. Then, we test SPIN and signal-processing approaches on SRT data from mouse and macaque brains. We integrate the brain atlases of these two species to identify shared and distinct spatial molecular patterns. This work offers new insights into spatial molecular features between mouse and macaque brains and proposes a framework for integrating SRT datasets on a large scale.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Algorithmic Progress in Data Structures and Approximation Algorithms</title>
<link href="https://hdl.handle.net/1721.1/156755" rel="alternate"/>
<author>
<name>Li, Jeffery</name>
</author>
<id>https://hdl.handle.net/1721.1/156755</id>
<updated>2024-09-17T03:29:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">On Algorithmic Progress in Data Structures and Approximation Algorithms
Li, Jeffery
In the big data regime, computer systems and algorithms must process large amounts of data, making many traditional exact algorithms too costly to run. To work around this, researchers have developed approximation algorithms, which trade off some accuracy for asymptotic improvements in runtime, and data structures, which can efficiently store and answer multiple queries about a dataset. This naturally leads to the question: how have approximation algorithms and data structures improved over the years? Here, we provide some insight into this question, looking into trends in algorithmic and data structure progress, tradeoffs between speed and accuracy or between runtimes of specific data structure operations, and specific problems of interest. Our analysis is based on a dataset of around 300 approximation algorithms and around 250 data structures. For both fields, we find that research remains fairly active to the present day, even though significant or asymptotic gains for data structures have been slowly declining. Improvements have also been fairly heterogeneous: some problems see a lot of work and improvement, while others have not seen as much progress. In addition, among the problems that have both exact and approximation algorithms, for around 1/6 the approximation algorithms have seen immensely large average yearly improvement rates compared to exact algorithms, while for around 1/2 the approximation algorithms have shown minimal improvement over exact algorithms. For data structures, we find that only 4 out of the 28 abstract data types in our dataset have ever had a tradeoff between storage requirements and/or runtimes of specific operations, with only 2 still existing in the present, suggesting that improvements generally build off of each other without increasing space usage or time required for other operations. This research helps us understand how approximation algorithms and data structures have progressed through the years and where they stand now.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving LLM Long Context Understanding via Synthetic Data and Adaptive Compression</title>
<link href="https://hdl.handle.net/1721.1/156754" rel="alternate"/>
<author>
<name>Li, Jerry</name>
</author>
<id>https://hdl.handle.net/1721.1/156754</id>
<updated>2024-09-17T03:04:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Improving LLM Long Context Understanding via Synthetic Data and Adaptive Compression
Li, Jerry
Recent innovations in large language models (LLMs) have led to their widespread use, but the long context problem remains a fundamental challenge. Transformer-based LLMs are constrained by the quadratic scaling of the self-attention mechanism, which restricts most popular LLMs to a context length of several thousand tokens. Many methods have been introduced to extend the context of LLMs, including the Activation Beacon approach. In this work, we propose two key advancements to the existing methodology. First, we generate long context synthetic data across a variety of tasks for training context-extended models, which can supplement or even replace expensive human-annotated data. Second, we introduce a novel two-pass, adaptive compression technique for more intelligent compression of long contexts. We find that the two strategies lead to orthogonal performance improvements on real-world long context tasks, resulting in an overall 4.2% increase in accuracy compared to the previous benchmark.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Irreversible Actions in Assistance Games with a Dynamic Goal</title>
<link href="https://hdl.handle.net/1721.1/156753" rel="alternate"/>
<author>
<name>Mayer, Hendrik T.</name>
</author>
<id>https://hdl.handle.net/1721.1/156753</id>
<updated>2024-09-17T03:01:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Irreversible Actions in Assistance Games with a Dynamic Goal
Mayer, Hendrik T.
Reinforcement Learning (RL) agents optimize reward functions to learn desirable policies in a variety of important real-world applications such as self-driving cars and recommender systems. However, in practice, it can be very difficult to specify the correct reward function for a complex problem, in what is known as reward misspecification. Impact measures provide metrics to determine how robust a particular agent’s behavior is to reward misspecification. This thesis analyzes one particular impact measure: the frequency of irreversible actions that an agent takes. We study this impact measure using a time-varying model of the principal’s preferences. This choice was motivated by two primary considerations. First, many real-world scenarios consist of a principal with time-varying preferences. Second, an agent assuming time-varying preferences may be more averse to performing irreversible actions. In this thesis, we examine principal-agent (human-robot) assistance games in toy grid environments inspired by cooperative inverse reinforcement learning [1], where irreversible actions correspond to removing transitions from a POMDP. In these games, we focus on how the frequency of changes in the principal’s preferences and the optimality of the principal influence the agent’s willingness to take irreversible actions. In 2-node and 4-node assistance games, we find two main results. First, in the presence of a random or approximately optimal human, the robot performs more irreversible actions as the goal state changes position more often. Second, in the presence of an optimal human, the robot rarely performs irreversible actions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ExoBLAS: Meta-Programming a High-Performance BLAS via Scheduling Automations</title>
<link href="https://hdl.handle.net/1721.1/156752" rel="alternate"/>
<author>
<name>Droubi, Samir</name>
</author>
<id>https://hdl.handle.net/1721.1/156752</id>
<updated>2024-09-17T03:56:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">ExoBLAS: Meta-Programming a High-Performance BLAS via Scheduling Automations
Droubi, Samir
Kernel libraries are designed to support numerical computations and provide efficient implementations of them. The goal of these libraries is to provide many optimized functionalities, which is a challenge because the implementations are often written in C or assembly. BLAS (Basic Linear Algebra Subprograms) is a well-known example of such a library, where the dimensionality of the interface imposes a huge space of functions to implement, making it particularly challenging to support. Our work tackles the problem of implementing BLAS in the context of meta-programming, particularly user-scheduling in the Exo programming language. We base our solution on three key ideas to achieve reuse at the level of the meta-program. First, there are similarities in the individual optimizations that are performed on these kernels, which we capture as scheduling operations with which we extend the Exo programming language. Second, the end-to-end optimization strategies (or schedules) for groups of these kernels are the same, and we capture them as scheduling automations. Third, more complex BLAS operations from higher levels can be transformed into less complex BLAS-like operations similar to operations from lower levels, so we can use the automation of a lower level to build the automation of a higher level. We evaluated our results against industry and open-source implementations of BLAS and show that we achieve competitive performance with a small implementation in terms of lines of code.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Memorization: Exploring the Dynamics of Grokking in Sparse Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/156751" rel="alternate"/>
<author>
<name>Fuangkawinsombut, Siwakorn</name>
</author>
<id>https://hdl.handle.net/1721.1/156751</id>
<updated>2024-09-17T03:49:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Beyond Memorization: Exploring the Dynamics of Grokking in Sparse Neural Networks
Fuangkawinsombut, Siwakorn
In the domain of machine learning, "grokking" is a phenomenon where neural network models demonstrate a sudden improvement in generalization, distinct from traditional learning phases, long after the initial training appears complete. This behavior was first identified by Power et al. (2022) [5]. This thesis explores grokking within the context of the (&#119899;, &#119896;)-parity problem, aiming to uncover the mechanisms that trigger such transitions. Through extensive empirical research, we examine how different neural network configurations and training conditions influence the onset of grokking. Our methodology integrates advanced visualization techniques, such as t-SNE, and kernel density estimations to track the evolution from memorization to generalization phases. Furthermore, we investigate the roles of weight decay and network robustness against outliers, focusing on optimizing neural network architectures to achieve effective generalization with fewer computational resources. This study advances our understanding of grokking and proposes practical strategies for designing more efficient neural networks.
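
A tiny sketch of the (n, k)-parity task as a dataset generator; the parameters are illustrative, not the thesis configuration:

    import numpy as np

    def nk_parity_dataset(n, k, num_samples, seed=0):
        # Label is the XOR (sum mod 2) of k fixed bits of an n-bit input.
        rng = np.random.default_rng(seed)
        X = rng.integers(0, 2, size=(num_samples, n))
        idx = rng.choice(n, size=k, replace=False)   # the k relevant coordinates
        y = X[:, idx].sum(axis=1) % 2
        return X, y, idx

    X, y, idx = nk_parity_dataset(n=20, k=3, num_samples=1000)
    print("relevant bits:", sorted(idx), "positive rate:", y.mean())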
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automating Accountability Mechanisms in the Judiciary System using Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/156750" rel="alternate"/>
<author>
<name>Shastri, Ishana</name>
</author>
<id>https://hdl.handle.net/1721.1/156750</id>
<updated>2024-09-17T03:02:22Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automating Accountability Mechanisms in the Judiciary System using Large Language Models
Shastri, Ishana
Holding the judicial system accountable often demands extensive effort from auditors who must meticulously sift through numerous disorganized legal case files to detect patterns of bias and systemic errors. For example, the high-profile investigation into the Curtis Flowers case took nine reporters a full year to assemble evidence about the prosecutor’s history of selecting racially-biased juries. Large Language Models (LLMs) have the potential to automate and scale these accountability pipelines, especially given their demonstrated capabilities in both structured and unstructured document retrieval tasks. We present the first work elaborating on the opportunities and challenges of using LLMs to provide accountability in two legal domains: bias in jury selection for criminal trials and housing eviction cases. We find that while LLMs are well-suited for information extraction from eviction forms that have more structure, court transcripts present a unique challenge due to disfluencies in transcribed speech.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Novel Topology for Capacitively Isolated Switched Capacitor Converter</title>
<link href="https://hdl.handle.net/1721.1/156749" rel="alternate"/>
<author>
<name>Jerez, Raiphy</name>
</author>
<id>https://hdl.handle.net/1721.1/156749</id>
<updated>2024-09-17T03:02:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Novel Topology for Capacitively Isolated Switched Capacitor Converter
Jerez, Raiphy
This thesis introduces a novel topology for capacitive isolation in switched-capacitor DC-DC converters, taking inspiration from previous work [1]. The research endeavors to develop a unique switched-capacitor topology that enables isolation between input and output voltages. By integrating elements of the Cockcroft-Walton generator into the Dickson converter framework, the proposed design seeks to leverage the inherent advantages of switched-capacitor converters (such as compactness, lightweight design, and higher efficiency at low to moderate power levels) over traditional magnetic converters. Additionally, the incorporation of isolation in the switched-capacitor converter architecture offers enhanced flexibility, allowing for selective power processing and more precise regulation. This feature is particularly beneficial in applications requiring dynamic power management and improved efficiency in power conversion.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanistic Interpretability for Progress Towards Quantitative AI Safety</title>
<link href="https://hdl.handle.net/1721.1/156748" rel="alternate"/>
<author>
<name>Lad, Vedang K.</name>
</author>
<id>https://hdl.handle.net/1721.1/156748</id>
<updated>2024-09-17T03:32:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mechanistic Interpretability for Progress Towards Quantitative AI Safety
Lad, Vedang K.
In this thesis, we conduct a detailed investigation into the dynamics of neural networks, focusing on two key areas: inference stages in large language models (LLMs) and novel program synthesis methods using mechanistic interpretability. We explore the robustness of LLMs through layer-level interventions such as zero-ablations and layer swapping, revealing that these models maintain high accuracy despite perturbations. As a result, we hypothesize the stages of inference in LLMs. This work suggests implications for LLM dataset curation, model optimization, and quantization. Subsequently, we introduce MIPS, an innovative method for program synthesis that distills the operational logic of neural networks into executable Python code. By transforming an RNN into a finite state machine and applying symbolic regression, MIPS successfully addresses 32 out of 62 algorithmic tasks, outperforming GPT-4 in 13 unique challenges. The work intends to take a step forward in enhancing the interpretability and reliability of AI systems, promising significant advances in our understanding and utilization of current and future AI capabilities. Together, these studies highlight the importance of comprehending the inferential behaviors of neural networks to foster more interpretable and efficient AI.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Steerable Alignment with Conditional Multiobjective Preference Optimization</title>
<link href="https://hdl.handle.net/1721.1/156747" rel="alternate"/>
<author>
<name>Manyika, Julian</name>
</author>
<id>https://hdl.handle.net/1721.1/156747</id>
<updated>2024-09-17T03:03:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Steerable Alignment with Conditional Multiobjective Preference Optimization
Manyika, Julian
As the scale, capabilities, and use-cases of large language models (LLMs) continue to grow, it is imperative that these systems are aligned with human preferences. Current state-of-the-art strategies for alignment such as Reinforcement Learning from Human Feedback (RLHF) have provided useful paradigms for finetuning LLMs to produce outputs that are more consistent with human preferences. These approaches, however, assume that preferences are formed by a single, underlying reward model, which is likely insufficient for representing an individual’s preferences, certainly unable to represent diverse group preferences, and inflexible for users at inference time. To address these limitations, we propose Conditional Multiobjective Preference Optimization (CMPO), a novel alignment strategy that trains a user-steerable LLM along multiple attributes of text, such as helpfulness and humor. CMPO simulates the Pareto front of multiple single-attribute preference-optimized models through structural plurality and finetuning with Direct Preference Optimization (DPO), and allows users to condition outputs on the predefined attributes at inference time. Experiments show that CMPO generates responses that are preferred to those from separate attribute-specific DPO models and from models trained using SteerLM, an alternate model-steering approach. CMPO empirically shows promise as a scalable and flexible finetuning strategy for creating LLMs that are attribute-steerable from parameterized preferences.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Verifying Hardware Security Modules With True Random Number Generators</title>
<link href="https://hdl.handle.net/1721.1/156746" rel="alternate"/>
<author>
<name>Zhao, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/156746</id>
<updated>2024-09-17T03:54:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Verifying Hardware Security Modules With True Random NumberGenerators
Zhao, Katherine
Hardware security modules (HSMs) are powerful tools in building secure computer systems, allowing developers to factor out security-critical code to separate devices. Because HSMs usually work with sensitive data, it is crucial to verify that they are secure. Many HSMs today also include true random number generators (TRNGs) as part of their architecture to seed cryptographic functions for generating keys, creating nonces, padding, and more. This thesis presents a definition of Information-Preserving Refinement with Randomness (IPRR) that captures the idea that an HSM with a TRNG is correct and secure against timing side-channel attacks. We additionally construct a strategy to prove IPRR, and develop Karatroc, a tool for verifying that an HSM satisfies IPRR. Through the creation and evaluation of Karatroc, we demonstrate the ability to verify HSMs with TRNGs without incurring significant added cost in performance and proof length as compared to existing proof methods.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Standardization of Electronic Component Datasheets to Improve Systematic Data Extraction</title>
<link href="https://hdl.handle.net/1721.1/156745" rel="alternate"/>
<author>
<name>Gustafson, Nicholas F.</name>
</author>
<id>https://hdl.handle.net/1721.1/156745</id>
<updated>2024-09-17T04:09:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Standardization of Electronic Component Datasheets to Improve Systematic Data Extraction
Gustafson, Nicholas F.
This thesis addresses the challenge of standardizing electronic component datasheets to improve systematic data extraction. The absence of uniformity in datasheet design complicates the process of systematically extracting critical information, leading to significant manual effort and potential errors. This research explores the current state of datasheet standardization and examines existing systematic data extraction efforts from semi-structured documents. It highlights the limitations of current methods and emphasizes the need for further standardization to facilitate accurate and efficient data extraction. The thesis proposes a detailed methodology for transitioning electronic component datasheets from semistructured to structured formats through standardization. By defining common standards and specific structures for different types of datasheets, this approach aims to enhance both human readability and machine processing. The thesis concludes by discussing the broader implications of these standards and their potential applications in other fields. Through this work, the goal is to streamline the datasheet creation process, reduce manual intervention, and ultimately improve the accuracy and efficiency of systematic data extraction in the electronic components industry.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Hybrid Switched-Capacitor Converter for Capacitive Wireless Power Transfer in Biomedical Applications</title>
<link href="https://hdl.handle.net/1721.1/156744" rel="alternate"/>
<author>
<name>Sund, Jade</name>
</author>
<id>https://hdl.handle.net/1721.1/156744</id>
<updated>2024-09-17T03:40:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Hybrid Switched-Capacitor Converter for Capacitive Wireless Power Transfer in Biomedical Applications
Sund, Jade
Rechargeable pulse generators on the market use inductive wireless power transfer (I-WPT), but capacitive wireless power transfer (C-WPT) has the potential to provide safety and size improvements over I-WPT. Current C-WPT research is focused on resonant capacitive coupling methods, and such works have reported power transfer efficiency of less than 40%. In this thesis, a capacitively isolated Dickson converter, a type of hybrid switched-capacitor converter, is investigated to determine whether it can deliver power to biomedical implants safely, efficiently, and in a small package.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing 3D Scene Graph Generation with Multimodal Embeddings</title>
<link href="https://hdl.handle.net/1721.1/156743" rel="alternate"/>
<author>
<name>Morales, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/156743</id>
<updated>2024-09-17T03:58:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enhancing 3D Scene Graph Generation with Multimodal Embeddings
Morales, Joseph
3D Scene Graphs are expressive map representations for scene understanding in robotics and computer vision. Current approaches for automated zero-shot 3D Scene Graph generation rely on spatial ontologies that relate objects with the semantic locations they are found in (e.g., a fork is found in a kitchen). While conferring impressive zero-shot performance, these approaches are conditioned on the existence of disambiguating objects in a scene, the expressiveness of the generated spatial ontologies, and knowing during data collection that a robot needs to observe specific objects in the environment. This thesis proposes a method for zero-shot scene graph generation by leveraging Vision-Language Models (VLMs) to construct a layer of Viewpoints in the scene graph, which allow for after-the-fact open-vocabulary querying over the scene. Methods for utilizing different VLM features are explored, which result in improvement over the ontological approach on region segmentation tasks.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inductive Biases in Learning Hierarchical Abstractions for Bipedal Locomotion</title>
<link href="https://hdl.handle.net/1721.1/156742" rel="alternate"/>
<author>
<name>Ravichandar, Sanjna</name>
</author>
<id>https://hdl.handle.net/1721.1/156742</id>
<updated>2024-09-17T03:37:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inductive Biases in Learning Hierarchical Abstractions for Bipedal Locomotion
Ravichandar, Sanjna
Bipedal locomotion presents a complex challenge in the field of reinforcement learning (RL) due to the high-dimensional state and action space. Hierarchical abstractions and inductive biases emerge as critical components in navigating this complexity, offering pathways for effective learning and adaptation in bipedal locomotion tasks. By leveraging hierarchical structures and inductive biases, RL controllers can distill the inherent complexity of bipedal locomotion into manageable components, facilitating more efficient learning and adaptation processes. This work explores hierarchical abstractions within the context of RL for bipedal locomotion. We investigate three distinct RL locomotion controllers on velocity tracking tasks: a baseline controller, an action-space abstraction controller, and a novel Hierarchical RL (HRL) controller. We assess the controllers across various RL metrics, including task performance, learning efficiency, stability, and human-likeness metrics derived from human locomotion studies, and quantify the effectiveness of hierarchical abstractions and inductive biases in enhancing locomotion task performance and aligning RL-generated behaviors with human locomotion patterns. The action-space abstraction controller achieves the best performance, and our investigation underscores the potential of HRL approaches to leverage hierarchical structures for optimized locomotion behaviors, highlighting the importance of selecting appropriate, well-designed abstractions. By analyzing the role of hierarchical abstractions and inductive biases in bipedal RL, our study contributes to advancing the understanding and development of RL algorithms for bipedal locomotion, with implications for the design of more efficient and human-like locomotion behaviors in robotic systems.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Twofish: Automatic Edit Cascading for Diagrams</title>
<link href="https://hdl.handle.net/1721.1/156741" rel="alternate"/>
<author>
<name>Huang, Grace</name>
</author>
<id>https://hdl.handle.net/1721.1/156741</id>
<updated>2024-09-17T03:38:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Twofish: Automatic Edit Cascading for Diagrams
Huang, Grace
Creating and editing diagrams, whether for scientific research, education, or otherwise, is tedious and time-consuming. When a user makes a small change to a diagram element, they often have to make additional downstream edits to fully propagate the change through the diagram. This is because relative positioning constraints are often defined through layout commands, such as alignment, which many direct manipulation editors treat as one-time operations. That is, a layout command enforces spatial relationships between objects by mutating them but does not enforce these relationships when the user makes later edits. While viewing these commands as one-time operations improves the editing flexibility of the editor, it makes editing less efficient. To balance the tradeoff between editing flexibility and efficiency, we present Twofish, a graphical editor that persists relations between elements. In this context, relations, such as alignment or an arrow, associate elements with each other by defining relative spacing constraints between them. By persisting these relations, we can reapply them automatically to the diagram when corresponding elements are edited. This allows Twofish to automatically cascade edits downstream to fix any positioning constraints that were broken by a change. The system is built as an extension of an existing graphical editor. In doing so, Twofish makes it easier to create and edit diagrams without sacrificing expressibility. To evaluate Twofish, we compared using Twofish and Figma to edit diagrams in six different scenarios, using three example diagrams. From this comparison, we found that Twofish generally improved editing efficiency but had worse editing flexibility than Figma.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying Grit in MLB Batters</title>
<link href="https://hdl.handle.net/1721.1/156740" rel="alternate"/>
<author>
<name>Yang, Angel</name>
</author>
<id>https://hdl.handle.net/1721.1/156740</id>
<updated>2024-09-17T03:42:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Quantifying Grit in MLB Batters
Yang, Angel
This thesis investigates the quantification of grit in Major League Baseball (MLB) batters, a crucial yet underexplored area in sports analytics traditionally gauged through qualitative assessment. Utilizing 2023 game data from the top 160 most utilized MLB batters, this study develops a Grit Score for each player based on the number of at-bats required to return to average performance after a period of below-average performance. At-bat performance is measured through Delta Runs Expected, and the at-bat group size of the window is selected by testing for correlation and consistency in player grit rankings. Results reveal significant variations in Grit Scores among batters; players identified as the most gritty generally correspond to those with top offensive performance, though grit and performance do not perfectly correlate. Furthermore, gritty batters tend to experience a higher number of hitting slumps but with shorter average lengths, regardless of the at-bat group size used to define the performance window. This research has implications in player valuation and development, team management, and scouting and drafting, suggesting that MLB teams should favor players who recover quickly from poor at-bats due to their more consistent performance and reliable offensive contributions to team success.
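
As a hedged sketch of the recovery-counting idea (the window size and slump definition here are illustrative; the thesis’s exact formula may differ):

    import numpy as np
    import pandas as pd

    def grit_score(delta_re, window=10):
        # After each below-average stretch of at-bats (rolling mean under the
        # season average), count at-bats until the rolling mean recovers.
        # Lower counts mean faster recovery, i.e., more grit.
        s = pd.Series(delta_re)
        avg = s.mean()
        rolling = s.rolling(window).mean().dropna()
        recoveries = []
        in_slump, start = False, None
        for i, v in rolling.items():
            if not in_slump and avg > v:
                in_slump, start = True, i
            elif in_slump and v >= avg:
                recoveries.append(i - start)
                in_slump = False
        return np.mean(recoveries) if recoveries else np.nan

    rng = np.random.default_rng(7)
    print(grit_score(rng.normal(0.0, 0.12, size=600)))   # toy Delta Runs Expected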
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Informing decision-making in single-objective, mixed-variable design problems</title>
<link href="https://hdl.handle.net/1721.1/156739" rel="alternate"/>
<author>
<name>Fang, Demi L.</name>
</author>
<id>https://hdl.handle.net/1721.1/156739</id>
<updated>2024-09-17T04:04:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Informing decision-making in single-objective, mixed-variable design problems
Fang, Demi L.
Data-driven decision-making in mixed-variable design problems presents a variety of challenges and opportunities, especially in the increasingly data-rich field of emissions in architectural and structural design. Designers can benefit from an underlying knowledge about, for example, whether material choice (discrete) or span (continuous) have more important consequences on structural emissions. This intuition need not be built purely through experience nor optimization: data-driven approaches can offer quantitative feedback. However, traditional approaches of sensitivity analysis are limited to continuous variables, while certain types of machine learning models can handle combinations of continuous and discrete variables. In this thesis, a hybrid gradient-based, sampling-based technique determining the directional importance of mixed variables in a design space is benchmarked against state-of-the-art variable importance methods (also known as feature importance or interpretability) from machine learning. The importance evaluations and runtimes are compared across workflows.  First, a concise literature review is presented, clarifying and unifying terminology across fields. Tree-based models are identified as a machine learning model that readily handles mixed-variable design spaces, and the following variable importance metrics are identified: impurity-based importance metrics (also known as Mean Decrease Impurity), permutation feature importance (PFI, also known as Mean Decrease Accuracy), and Shapley values. These existing workflows are applied to varying sample sizes of three different datasets related to the application to low-carbon structural design. The same samples are evaluated using the hybrid technique previously proposed by the author, which trains the data on a conditional variational autoencoder (cVAE), approximates gradients on the model, and summarizes gradients into “influence metrics” using a Gaussian mixture model (GMM) (in contrast to a mean absolute value).  Through this comparison, this thesis establishes several findings, including several advantages to using the hybrid cVAE and GMM-to-influence workflow over typical tree-based feature importance approaches. First, the hybrid method’s evaluation of gradients is consistently faster than the evaluation of importance in all other workflows for all sample sizes and datasets. Secondly, it avoids the known drawback of tree-based models’ tendency to assign higher importance to high-cardinality variables. Third, its definition of performance “gradients” with respect to each category (as opposed to each categorical variable) offers more specific, useful insights. For example, it is more useful to know which structural framing system is associated with large reductions in emissions (gradients by category) than to know that the choice of structural framing system is associated with a range of reductions and increases in emissions (gradients by categorical variable, which is typical in feature importance methods). These advantages come at the expense of more time (in this case, 10-fold) needed to train the model compared to state-of-the-art gradient-boosted tree models and the additional time needed to fit a GMM (as opposed to taking the mean absolute value of importance values across the sample). The hybrid workflow is still 2 to 10 times faster than the random forest workflows. 
Finally, these comparisons highlight the importance of cardinality of categorical variables in mixed-variable design spaces, both in the process of selecting a model and selecting an importance evaluation method.
Key words: variable importance, feature importance, mixed-variable design spaces, gradients, design space exploration, data-driven decision-making
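A minimal sketch of the GMM-to-influence summary described above (synthetic gradients and scikit-learn assumed; illustrative, not the thesis implementation):
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical per-sample gradients of emissions w.r.t. one design category:
grads = np.concatenate([rng.normal(-2.0, 0.3, 500), rng.normal(0.5, 0.2, 500)])

mean_abs = np.abs(grads).mean()  # the usual feature-importance-style summary

# Summarize the gradient distribution with a two-component Gaussian mixture;
# each component's (weight, mean) pair acts as a directional "influence metric".
gmm = GaussianMixture(n_components=2, random_state=0).fit(grads.reshape(-1, 1))
for weight, mean in zip(gmm.weights_, gmm.means_.ravel()):
    print(f"weight {weight:.2f}: directional influence {mean:+.2f}")
print(f"mean |gradient| (for comparison): {mean_abs:.2f}")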
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Hybrid Approach for Key-Value Extraction from Technical Specification Documents</title>
<link href="https://hdl.handle.net/1721.1/156738" rel="alternate"/>
<author>
<name>Lee, Samuel S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156738</id>
<updated>2024-09-17T03:43:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Hybrid Approach for Key-Value Extraction from Technical Specification Documents
Lee, Samuel S.
As the number of documents processed by businesses across the world increases daily, the demand for streamlined and automated document processing methods grows. However, commercial methods for information extraction from documents do not generalize well across different document formats, as each solution is tailored to specific types of documents. This thesis provides an overview of a hybrid document processing pipeline designed to extract key-value pairs from technical specification documents with high accuracy. Two different phases of the pipeline are introduced, both employing rule-based methods and machine learning to cover a variety of document types. The first is an earlier iteration that extracts information from a simpler collection of documents, and the second is the current iteration designed to handle a much larger dataset containing more complex documents. Lastly, the initial stages of a module designed for key-value extraction from a specific type of technical specification document are also proposed.
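A minimal sketch of the rule-based side of such a pipeline (the pattern and example fields are assumptions, not the thesis code): simple specification lines are matched with a regex, leaving harder layouts to a learned model.
import re

# Match simple "Key: value" or "Key = value" specification lines.
KV_PATTERN = re.compile(r"^\s*([A-Za-z][\w /-]*?)\s*[:=]\s*(.+?)\s*$")

def extract_pairs(lines):
    pairs = {}
    for line in lines:
        match = KV_PATTERN.match(line)
        if match:
            pairs[match.group(1)] = match.group(2)  # key mapped to value
    return pairs

# Example: extract_pairs(["Operating Temperature: -40 to 85 C", "Package = SOIC-8"])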
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Post-Quantum Verifiable Oblivious Pseudorandom Functions</title>
<link href="https://hdl.handle.net/1721.1/156650" rel="alternate"/>
<author>
<name>Propson, Helen</name>
</author>
<id>https://hdl.handle.net/1721.1/156650</id>
<updated>2024-09-04T03:41:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Post-Quantum Verifiable Oblivious Pseudorandom Functions
Propson, Helen
This work presents the construction of a post-quantum verifiable oblivious pseudorandom function (VOPRF) with a focus on efficiency and practicality. Leveraging lattice-based cryptographic primitives, particularly the Learning With Errors (LWE) problem, our VOPRF construction aims to address the limitations of existing approaches by reducing proof sizes. The key component in our work is the integration of an efficient zero-knowledge proof of knowledge (ZKPoK) protocol. This ZKPoK is notably more efficient than the proof systems used in prior VOPRF constructions, ensuring the verifiability of PRF outputs while providing smaller proof sizes. Our construction relies on the hardness of the ring-LWE and short integer solution (SIS) problems, and we demonstrate its security in the random oracle model. Overall, our VOPRF construction represents a step towards the development of more practical post-quantum secure cryptographic protocols, highlighting the potential for further improvements in efficiency and real-world applicability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robot Planning in Uncertain, Dynamic Environments</title>
<link href="https://hdl.handle.net/1721.1/156644" rel="alternate"/>
<author>
<name>Cheerla, Anika</name>
</author>
<id>https://hdl.handle.net/1721.1/156644</id>
<updated>2024-09-04T03:08:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Robot Planning in Uncertain, Dynamic Environments
Cheerla, Anika
Many real-world applications require robots to operate in dynamic environments characterized by moving objects or agents whose trajectories are unpredictable. This thesis addresses the challenges posed by such environments by introducing Relative Temporal Probabilistic Roadmaps (Rel-T-PRM), a novel motion planning algorithm that builds upon the Temporal Probabilistic Roadmap (T-PRM) algorithm. Rel-T-PRM allows for variable dynamic obstacle size, enables robustness with respect to minor changes in time and position, and introduces the concept of waiting until obstacles clear. Furthermore, we leverage Rel-T-PRM’s strengths to propose two replanning strategies. The first attempts to rapidly replan on-the-fly by using waiting to modify the trajectory without needing to modify the path. The second proposed replanning strategy identifies and plans to safe locations, where the robot can safely replan under a longer time horizon. We demonstrate Rel-T-PRM through a variety of simulation experiments on a fixed-base robotic manipulator.
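A minimal sketch of the waiting idea (hypothetical names; not the Rel-T-PRM implementation): before traversing a roadmap edge, departure is delayed until the edge is predicted to be obstacle-free.
def earliest_safe_departure(edge, arrival_time, obstacle_free, max_wait, dt=0.1):
    # Scan forward in time from arrival; wait at the node until the edge clears.
    steps = int(max_wait / dt)
    for k in range(steps + 1):
        t = arrival_time + k * dt
        if obstacle_free(edge, t):  # hypothetical time-parameterized collision check
            return t                # safe departure time (possibly after waiting)
    return None                     # edge never clears within the wait horizon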
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ScaleGPS: Scalable Graph Parallel Sampling via Data-centric Performance Engineering</title>
<link href="https://hdl.handle.net/1721.1/156640" rel="alternate"/>
<author>
<name>Cai, Miranda J.</name>
</author>
<id>https://hdl.handle.net/1721.1/156640</id>
<updated>2024-09-04T03:42:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">ScaleGPS: Scalable Graph Parallel Sampling via Data-centric Performance Engineering
Cai, Miranda J.
Graph sampling extracts representative samples of a graph, so that approximate graph algorithms can be used in place of expensive, exact algorithms while still achieving high-quality results. Thus, graph sampling plays an important role in many modern graph-based applications, such as graph machine learning and graph data mining. However, because of unstructured sparsity in the graph data and the randomness in the sampling algorithms, graph sampling is often the computational bottleneck. To accelerate it, parallel graph sampling methods exist for multicore CPUs and GPUs. However, limitations arise on both sides: due to lower throughput, CPU implementations are much slower than GPU ones, while limited GPU memory capacity restricts GPU implementations to small input graphs. We present the idea behind a scalable graph sampling framework, ScaleGPS, to support high-performance graph sampling on huge graphs in a single machine with a CPU and a GPU. The key idea is to cooperatively employ data caching and compression to reduce memory footprint and data movement overhead, and thus achieve high performance and scalability. The challenge in applying caching and compression to graph sampling is two-fold. First, the randomness in sampling leads to redundant computation and memory accesses, and thus low work efficiency. Second, real-world graphs often exhibit skewed degree distributions, where a fixed strategy cannot optimally handle all cases. We propose a hybrid and adaptive strategy to address this challenge. First, we split the vertices in the graph into two groups based on their degrees. For each group, we store the neighbor lists in different formats, to make full use of the scarce GPU memory resources. Based on this hybrid compression method, we use the GPU memory as a cache of the CPU memory, and adaptively cache hot data to minimize the data movement overhead between the CPU and GPU. We implement our strategy in ScaleGPS and evaluate it on a single machine with a 48-core CPU and an A100 GPU. Our experimental results on various sampling algorithms show that ScaleGPS is able to support billion-edge graphs (up to 84 billion edges) in a single machine. While the performance benefits on these largest graphs are still undetermined, ScaleGPS achieves an average of 33.4× (up to 93×) speedups for smaller graphs over state-of-the-art parallel CPU implementations.
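A minimal sketch of the degree-based split (the compression scheme here is a simple stand-in, not ScaleGPS's actual encoding):
def compress_high_degree(neighbors):
    # Stand-in compression: delta-encode the sorted neighbor list.
    s = sorted(neighbors)
    return [s[0]] + [b - a for a, b in zip(s, s[1:])] if s else []

def split_by_degree(adjacency, threshold):
    # Store high-degree vertices compressed and low-degree vertices as-is,
    # so scarce GPU memory is spent where it helps most.
    low, high = {}, {}
    for v, neighbors in adjacency.items():
        if len(neighbors) >= threshold:
            high[v] = compress_high_degree(neighbors)
        else:
            low[v] = tuple(neighbors)
    return low, high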
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MashupMuse: A Web Application for Easier Music Mashup Creation</title>
<link href="https://hdl.handle.net/1721.1/156639" rel="alternate"/>
<author>
<name>Meng, Julie</name>
</author>
<id>https://hdl.handle.net/1721.1/156639</id>
<updated>2024-09-04T03:26:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">MashupMuse: A Web Application for Easier Music Mashup Creation
Meng, Julie
The intersection of music and technology enables a form of musical expression known as a music mashup—a creative work that combines elements from multiple existing songs into a new, cohesive piece. The traditional process for creating a mashup with standard music editing software can be time-consuming for experienced mashup creators and intimidating for new creators. This software has a steep learning curve and more functionality than required for mashup enthusiasts. Over the last fifteen years, researchers have attempted to simplify this process through solutions with user-friendly interfaces for streamlined mashup creation. With the rise of artificial intelligence, some recent tools automate the mashup process entirely, which strips users of creative control and potentially leads to musically unsatisfying results. Current mashup software falls short in either functionality or user-friendliness, leaving a need for a platform that balances technological assistance and creative freedom. In response to this need, we propose MashupMuse, a web application that simplifies music mashup creation by automating certain parts of the process while leaving room for creative freedom. MashupMuse separates each song’s audio into individual tracks, such as vocals, bass, and drums. It allows users to select sections from these tracks and arrange them on a master track while automatically handling beat and key adjustments. This balance of automation and creative freedom offers users a streamlined yet flexible music editing experience. During user testing, we found notable advantages in comparison with a similar mashup creation application. Finally, we outline future work to further improve the user experience.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Motion-Compensated Viewpoint Shift</title>
<link href="https://hdl.handle.net/1721.1/156638" rel="alternate"/>
<author>
<name>Tao, Julius L.</name>
</author>
<id>https://hdl.handle.net/1721.1/156638</id>
<updated>2024-09-04T03:51:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Motion-Compensated Viewpoint Shift
Tao, Julius L.
Eye contact is an essential social cue that conveys our attention to others but is difficult to maintain during video calls. Many existing methods to synthesize a gaze-corrected view involve estimating a 3D face model and projecting it into the desired camera view, which is too computationally expensive for most personal computers. By drawing inspiration from 2D methods of video frame interpolation, we wish to not only correct eye gaze but also better align the face towards the camera without this expensive 3D modeling. Our findings suggest that adding a second webcam opposite the first and interpolating between the two outer camera views can give realistic, gaze-aligned center views. We conclude that the prevailing approach of 3D modeling is surprisingly not necessary for gaze correction. Not only do 2D techniques suffice, but their synthesized frames can appear more natural than prior results. We believe that this work is a crucial step towards true-to-life viewpoint shift for live video conferences.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Representation Learning Associates Patients’ Risks for Metabolic Diseases with Features of Their Lipocytes</title>
<link href="https://hdl.handle.net/1721.1/156626" rel="alternate"/>
<author>
<name>Tan, Zipei</name>
</author>
<id>https://hdl.handle.net/1721.1/156626</id>
<updated>2024-09-04T03:07:23Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Representation Learning Associates Patients’ Risks for Metabolic Diseases with Features of Their Lipocytes
Tan, Zipei
Polygenic risk scores (PRS) estimate an individual’s risk of developing a certain disease, suggesting that differences between cells of individuals with high versus low PRS could give us insight into the cellular disease mechanisms. To study metabolic diseases, we analyze the distribution of cell states of lipocytes of individuals with different PRS for metabolic diseases, thereby associating individual-level genotypes with cell-level features. To accomplish this, we make use of a recent large-scale lipocyte microscopy imaging dataset. By learning a representation of multi-channel lipocyte microscopy images using a convolutional autoencoder, we perform unsupervised clustering on the learnt representations to identify different cell states. We analyze the distribution of these cell states in different individuals and associate their PRS to the observed cell state distributions. Finally, we show that it is possible to generate counterfactual lipocyte images and understand the effect of increased or reduced PRS on cell states through transforming the learnt representations.
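A minimal sketch of the downstream clustering step (assuming precomputed autoencoder embeddings as NumPy arrays and scikit-learn; not the thesis code):
import numpy as np
from sklearn.cluster import KMeans

def cell_state_distributions(embeddings, individual_ids, n_states=10):
    # Cluster per-cell latent vectors into discrete cell states, then compute
    # each individual's distribution over states for association with PRS.
    states = KMeans(n_clusters=n_states, random_state=0).fit_predict(embeddings)
    distributions = {}
    for ind in np.unique(individual_ids):
        counts = np.bincount(states[individual_ids == ind], minlength=n_states)
        distributions[ind] = counts / counts.sum()
    return distributions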
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Photocurrent Spectroscopy Study of Graphene / Hexagonal Boron Nitride Moiré Superlattice In the Far-Infrared Regime</title>
<link href="https://hdl.handle.net/1721.1/156624" rel="alternate"/>
<author>
<name>Yang, Jixiang</name>
</author>
<id>https://hdl.handle.net/1721.1/156624</id>
<updated>2024-09-04T03:31:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Photocurrent Spectroscopy Study of Graphene / Hexagonal Boron Nitride Moiré Superlattice In the Far-Infrared Regime
Yang, Jixiang
Two-dimensional (2D) materials and their heterostructures, especially those with moiré superlattices, have been among the most fascinating topics in physics in recent years. Much interesting physics, for example the correlated insulating state at half- or quarter-fillings of the moiré band, occurs in the far-infrared energy range. However, there are very few optical spectroscopic studies of these 2D materials due to many intrinsic limitations. In this thesis, I will introduce a method named Fourier-transform infrared (FTIR) photocurrent spectroscopy. I will discuss the advantages of this method, and why it is suitable for far-infrared studies of 2D materials. Then I will apply it to the monolayer graphene / hexagonal boron nitride (hBN) moiré superlattices, where I accurately measure the gap ∆ opened at the charge-neutrality point (CNP) by the moiré superlattice. The relationship between the gap size and the moiré wavelength will also be discussed. Finally, I will discuss the possibility of applying this technique to other novel physical phenomena and other 2D systems.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Programmable Liquid-Crystal-on-Silicon Photonic Integrated Circuits with Millions of Degrees of Freedom</title>
<link href="https://hdl.handle.net/1721.1/156622" rel="alternate"/>
<author>
<name>Wang, Archer</name>
</author>
<id>https://hdl.handle.net/1721.1/156622</id>
<updated>2024-09-04T03:55:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Programmable Liquid-Crystal-on-Silicon Photonic Integrated Circuits with Millions of Degrees of Freedom
Wang, Archer
This thesis proposes a novel approach to photonics, wherein waveguides are formed entirely within a homogeneous liquid crystal layer using Liquid-Crystal-on-Silicon (LCoS) technology. Utilizing the electro-optical properties of LCs, we demonstrate the theoretical feasibility of inducing refractive index variations solely within the LC medium to guide light. This method diverges from traditional waveguiding techniques that rely on solid core and cladding structures, offering a new paradigm in reconfigurable photonic devices. Additionally, we develop and explore the idea of a programmable Multi-Mode Interferometer using LCoS technology, enabling the performance of arbitrary unitary transformations. Future work will focus on developing robust simulations of coupled-mode theory with liquid crystals, paving the way for next-generation photonic technologies that perform universal linear optics.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Tropospheric Hydrogen Peroxide Trends from 1950-2014</title>
<link href="https://hdl.handle.net/1721.1/156613" rel="alternate"/>
<author>
<name>Sun, Vanessa</name>
</author>
<id>https://hdl.handle.net/1721.1/156613</id>
<updated>2024-09-04T03:17:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Investigating Tropospheric Hydrogen Peroxide Trends from 1950-2014
Sun, Vanessa
The oxidizing capacity of the atmosphere, or the ability of the atmosphere to clean itself of the pollutants that build up in the troposphere, is determined by oxidants including ozone (O₃), HOx radicals (OH and HO₂), and hydrogen peroxide (H₂O₂). O₃ is the primary source for HOx radicals, while H₂O₂ is a key sink for HOx radicals that terminates the rapid cycling between OH and HO₂. The concentrations of the HOx radicals and H₂O₂ are difficult to measure directly, with scarce long-term data on H₂O₂ primarily available through ice core records. Given the lack of observational data, much of our knowledge of the history of tropospheric oxidants relies on modeling studies. We quantify the global H₂O₂ burden and trends between 1950 and 2014 from the Community Earth System Model - Whole Atmosphere Community Climate Model version 6 (CESM2-WACCM6). This is a global chemistry-climate model, with each of the 13 ensemble members simulating the historical period. Each has a minuscule difference in its initial conditions and subsequently yields a different response to the same external forcing. In this study, we discern where H₂O₂ is increasing in the troposphere, particularly in the Southern Hemisphere and over Antarctica. We quantify a rate of increase for the H₂O₂ annual burden, noting the rise beginning in the 1970s and growing from 14% in the 1970s to 34% in the 2000s, with respect to the burden in the 1950s. We find that changes in globally averaged annual mean H₂O₂ are most strongly correlated with changes in ozone, whereas over Antarctica, the strongest relationships for H₂O₂ trends occur with ozone photolysis rates. This aligns well with previous ice core and modeling studies in the literature. Lastly, we also find evidence that stratospheric ozone depletion has no discernible impact on global H₂O₂ burden changes, using an additional parallel set of simulations holding ozone-depleting substances at 1950 levels.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Characterization of an Open-Source, High-Efficiency, Easily-Reconfigurable Switch-mode Current Driver for Magnetic Resonance Imaging Applications</title>
<link href="https://hdl.handle.net/1721.1/156607" rel="alternate"/>
<author>
<name>Govindarajan, Ishaan</name>
</author>
<id>https://hdl.handle.net/1721.1/156607</id>
<updated>2024-09-04T03:31:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design and Characterization of an Open-Source, High-Efficiency, Easily-Reconfigurable Switch-mode Current Driver for Magnetic Resonance Imaging Applications
Govindarajan, Ishaan
B₀ shimming and local B₀ field control (collectively referred to as “local field control”) are processes employed by current and proposed Magnetic Resonance Imaging (MRI) techniques to yield faster and more detailed scans with greater diagnostic utility. Additional scanner hardware, specifically local field control coils and the power electronic circuits that drive current into these coils (referred to as “current drivers”), is required for these techniques. While current driver designs exist today, they typically trade off efficiency against imaging noise. This work demonstrates a proof-of-concept switch-mode current driver with heatsink-free 10 A DC drive capability, &lt;25 µs step-response rise times with multiple loads, and acceptable disturbance rejection, all while maintaining imaging quality comparable to that of a linear driver. Design and source files for the driver are released under open-source licenses. Further areas for performance improvement have been identified, and work will continue to develop this proof-of-concept device into one with greater research and clinical utility.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geometric Deep Learning for Biomolecules</title>
<link href="https://hdl.handle.net/1721.1/156606" rel="alternate"/>
<author>
<name>Mitnikov, Ilan</name>
</author>
<id>https://hdl.handle.net/1721.1/156606</id>
<updated>2024-09-04T03:56:40Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Geometric Deep Learning for Biomolecules
Mitnikov, Ilan
Recent advancements in machine learning offer a promising pathway to deeper insights into biological phenomena. This manuscript explores the integration of geometric deep learning techniques to model biological structures. By embedding inductive biases based on geometry and physical laws, we aim to enhance our understanding and predictive capabilities in biomolecular systems. We present methods using equivariant neural networks for geometrical protein representation learning, molecular representation learning for electron density prediction, and scalable molecular dynamics simulations using stochastic interpolants.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Enhanced Signal Processing Toolbox for Electrical Energy Monitoring</title>
<link href="https://hdl.handle.net/1721.1/156601" rel="alternate"/>
<author>
<name>Langham, Aaron William</name>
</author>
<id>https://hdl.handle.net/1721.1/156601</id>
<updated>2024-09-04T03:57:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Enhanced Signal Processing Toolbox for Electrical Energy Monitoring
Langham, Aaron William
A nonintrusive load monitor (NILM) aims to perform power system analysis with a minimally invasive sensor profile. A wealth of literature exists for load identification and energy disaggregation under ideal, healthy conditions. However, a significant value proposition of nonintrusive load monitoring comes from fault detection and diagnostics. Early detection of electromechanical faults aids safety, reduces energy waste, and saves money. However, load identification and energy disaggregation are complicated by faulty or time-varying load operation profiles. This thesis extends previous thesis work by the author that addresses this issue. A new, “multistream” feature extraction approach to nonintrusive power monitoring is presented. This approach enables targeted electrical data analysis on non-stationary electrical systems.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Generative Agent Social Dilemmas</title>
<link href="https://hdl.handle.net/1721.1/156591" rel="alternate"/>
<author>
<name>Yocum, Julian R.</name>
</author>
<id>https://hdl.handle.net/1721.1/156591</id>
<updated>2024-09-04T03:52:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mitigating Generative Agent Social Dilemmas
Yocum, Julian R.
In social dilemmas, individuals would be better off cooperating but fail to do so due to conflicting interests that discourage cooperation. Existing work on social dilemmas in AI has focused on standard agent design paradigms, most recently in the context of multi-agent reinforcement learning (MARL). However, with the rise of large language models (LLMs), a new design paradigm for AI systems has started to emerge—generative agents, in which actions performed by agents are chosen by prompting LLMs. This paradigm has seen recent success, such as Voyager, a highly capable Minecraft agent. In this work, we perform an initial study of outcomes that arise when deploying generative agents in social dilemmas. To do this, we build a multi-agent Voyager framework with a contracting and judgement mechanism based on formal contracting, which has been effective in mitigating social dilemmas in MARL. We then construct social dilemmas in Minecraft as the testbed for our open-source framework. Finally, we conduct preliminary experiments using our framework to provide evidence that contracting helps improve outcomes for generative agents in social dilemmas.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ECO-LENS Addressing Urban Biodiversity with Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/156590" rel="alternate"/>
<author>
<name>Montas, Enrique B.</name>
</author>
<id>https://hdl.handle.net/1721.1/156590</id>
<updated>2024-09-04T03:03:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">ECO-LENS Addressing Urban Biodiversity with Machine Learning
Montas, Enrique B.
The link between global climate change and biodiversity is well recognized. Human-driven destruction and degradation of ecosystems amplify the negative and complex impacts of climate change, increasing the strain on remaining ecosystems and wildlife. Therefore, it is essential for climate change mitigation efforts to include strategies that protect and conserve biodiversity, enhancing ecosystem productivity, resilience, adaptability, and sustainability. Identifying and prioritizing ecosystem functions that support key ecosystem services is crucial for targeted conservation actions, particularly in urban areas. Urban regions have doubled in size since 1992 and, relative to 2020, are expected to expand by 30% to 180% by 2100. Most of this growth will occur in the global south, in regions rich in biodiversity, and will impact global ecosystems through resource demands, pollution, and climate effects. Urban biodiversity management is an emerging discipline, with significant gaps in our understanding that are vital for improving biodiversity conservation policies and management in urban areas to support global biodiversity goals. As research on ecosystem services progresses, the importance of urban vegetation in promoting the sustainability of urban ecosystems and environments is increasingly recognized. Recently, remote sensing technology has become a valuable tool for obtaining detailed information and mapping urban vegetation, offering numerous benefits. Leveraging remote sensing tools in the form of satellite imagery and LiDAR enables extensive coverage of urban areas, providing an opportunity to evaluate biodiversity patterns across entire regions without causing disturbance to ecosystems. While remote sensing has significantly improved our capacity to monitor landscape-level biodiversity losses, its application for assessing urban biodiversity has been limited. This research paper offers several ways of leveraging remote sensing and machine learning techniques to close the existing data gap. Through this paper, we showcase the potential use of the Normalized Difference Vegetation Index (NDVI), satellite imagery, and LiDAR point clouds to provide data for urban biodiversity assessment, management, and conservation. By leveraging these technologies and the data they provide, urban planners, policymakers, and conservation practitioners can make more informed decisions to protect and enhance urban biodiversity systematically.
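As a small concrete example, the NDVI mentioned above is computed per pixel from near-infrared (NIR) and red reflectance bands (a standard formula; the code is illustrative, not from this work):
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    # NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1; higher values
    # indicate denser, healthier vegetation.
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero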
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Contextual Annotation Framework for Short Linear Motifs in Proteins</title>
<link href="https://hdl.handle.net/1721.1/156589" rel="alternate"/>
<author>
<name>Nyiam, Nten P.</name>
</author>
<id>https://hdl.handle.net/1721.1/156589</id>
<updated>2024-09-04T03:02:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Developing a Contextual Annotation Framework for Short Linear Motifs in Proteins
Nyiam, Nten P.
Identifying and validating short linear motifs (SLiMs) is challenging due to their low sequence complexity and high prevalence across the proteome. Many false positives—sequences that match the pattern of the SLiM but are not involved in the biological functions typically associated with SLiMs—complicate this task. Distinguishing functional SLiMs from false positives requires an approach that incorporates not just sequence analysis but also biological, structural, and evolutionary context. This thesis presents a framework designed to annotate candidate SLiM motifs and differentiate true binders from false positives. The proposed framework uses several annotation metrics, including sequence conservation, post-translational modifications (PTMs), structural context derived from AlphaFold model scores, and the proximity of neighboring motifs. We evaluate each of these metrics using a test dataset sampled from the Eukaryotic Linear Motif (ELM) protein database. Our results indicate that sequence conservation has a consistent but moderate ability to differentiate true binders from unverified candidate motifs. Additionally, integrating AlphaFold’s structural data may help reduce false positives arising from predictions of disordered regions when sampling the motif data. We show that the tool currently underestimates the number of PTMs, suggesting a need for integrating additional PTM databases or predictive tools to improve motif annotation accuracy. Finally, we find that known functional SLiMs tend to cluster more closely than potential false positives, indicating that spatial proximity may help identify true SLiMs in motifs that serve specific roles. These findings highlight the importance of a context-based approach in SLiM annotation and open routes for future research and development.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Scene and Object Generalization of Neural Policies Trained in Synthetic Environments</title>
<link href="https://hdl.handle.net/1721.1/156571" rel="alternate"/>
<author>
<name>Quach, Alex H.</name>
</author>
<id>https://hdl.handle.net/1721.1/156571</id>
<updated>2024-09-04T04:01:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Robust Scene and Object Generalization of Neural Policies Trained in Synthetic Environments
Quach, Alex H.
Achieving generalization for autonomous robotic systems operating in real-world environments remains a significant challenge. Training robots solely in simulation can be limiting due to the "sim-to-real gap" – discrepancies between simulated and real-world conditions. We present two novel approaches to enhance the generalization capabilities of autonomous quadrotor navigation systems when transferring from simulation to the real world. Our first approach integrates a 3D Gaussian Splatting radiance field with a quadrotor flight dynamics engine to generate high-quality, photorealistic training data. We design imitation learning schemes to train liquid time-constant neural networks on this data. Through rigorous evaluations, we demonstrate successful zero-shot transfer of the learned navigation policies from simulation to real-world flight, exhibiting generalization to complex, multi-step tasks in novel indoor and outdoor environments. Notably, we showcase autonomous quadrotor policies trained entirely in simulation that can be directly deployed in the real world without fine-tuning. Our method leverages the complementary strengths of photorealistic rendering and irregularly time-sampled data augmentation for enhancing generalization with liquid neural networks. Additionally, we compose off-the-shelf vision-and-language models with neural policies, enabling real-world generalization to complex objects and instructions unseen during training. To the best of our knowledge, this is the first report of zero-shot sim-to-real transfer and semantic generalization for autonomous quadrotor navigation using imitation learning. Our key contributions include: (1) a dynamics-augmented Gaussian splatting simulator, (2) implicit closed-loop augmentation via expert trajectory design, (3) robustifying liquid neural networks through irregularly sampled data, (4) extensive simulation and real-world validation, (5) demonstrating zero-shot real-world transfer capabilities, and (6) enabling zero-shot instruction generalization to novel objects using multimodal representations.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Acquiring Expertise and Societal Productivity in a World of Artificial Intelligence</title>
<link href="https://hdl.handle.net/1721.1/156570" rel="alternate"/>
<author>
<name>Gupta, Diptasri</name>
</author>
<id>https://hdl.handle.net/1721.1/156570</id>
<updated>2024-09-04T03:02:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Acquiring Expertise and Societal Productivity in a World of Artificial Intelligence
Gupta, Diptasri
This thesis investigates the impact of automation and advanced technologies, specifically focusing on Large Language Models (LLMs), on traditional employment structures in the modern workplace. Historically, the master-apprentice model has been integral to vocational training across various industries, facilitating the transfer of knowledge, skills, and professional ethics from one generation to the next. However, the rise of AI and machine learning challenges the viability of this model, raising critical questions about the nature and quality of mentorship and skill acquisition in work environments. Part of a broader research initiative led by Professors Atkin, Li, and Beraja, this study explores the hypothesis that apprentices promoted without foundational mentorship may struggle in their advanced roles, potentially reducing long-term productivity gains from AI. Utilizing a comprehensive dataset from Brazilian Social Security records (RAIS) spanning 2003-2015, the research focuses on industries with a clear apprentice-master dynamic, such as finance, legal, and insurance sectors. By analyzing job code changes and pay adjustments, the study aims to correlate technological influx within companies with the productivity of workers promoted to master roles, using pay as a proxy for productivity. Findings indicate that while technological influx does not significantly affect immediate post-promotion wages, it negatively impacts wages one and two years after promotion, suggesting potential wage stagnation or reduction. Additionally, technological influx initially increases promotion likelihood and stabilizes employee retention, though longer-term effects are less clear. These results imply that apprentices are more likely to be promoted and retained in the short term but face reduced wage growth and potentially diminished performance. The study concludes that technological advancements can alter the traditional apprenticeship model, affecting skill acquisition and long-term productivity. Recommendations are provided for educators, industry leaders, and policymakers on optimizing apprenticeship models in an increasingly automated world. Further research will involve AI-focused evaluations to observe the real-world impact of AI integration on team dynamics, productivity, and skill development, aiming to refine our understanding of its effects on employment structures.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ICσOS: σOS for Intercloud Environments</title>
<link href="https://hdl.handle.net/1721.1/156569" rel="alternate"/>
<author>
<name>Chen, Kevin S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156569</id>
<updated>2024-09-04T03:42:22Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">ICσOS: σOS for Intercloud Environments
Chen, Kevin S.
The cloud computing market offers myriad service offerings with diverse performance guarantees, yet tenants who want to explore this diversity are often punished for doing so: vendor lock-in and the lack of cross-cloud compatibility make it difficult for tenants to migrate their workloads to other clouds, or to utilize multiple clouds in an interconnected manner. This thesis presents ICσOS, an intercloud operating system that enables tenants to interact with multiple clouds’ infrastructure as a single interconnected system with minimal additional management overhead. ICσOS extends σOS — a cloud operating system that provides per-tenant namespaces via the novel realm abstraction — with intercloud features, and leverages namespaces to allow tenants to perform intercloud communication, service discovery, workload placement, coordination, and more without regard to cluster-level management details. ICσOS also introduces placement policies, a framework for intercloud workload placement that enables tenants to express fine-grained placement criteria that can be dynamically updated as applications run. An evaluation of ICσOS and placement policies on a distributed image-resizing application demonstrates ICσOS’s capabilities as an intercloud platform, as well as its ability to quickly and effectively respond to situations where intercloud placement behavior changes frequently.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determining Optimal Halftone Angles for CMYK Printing</title>
<link href="https://hdl.handle.net/1721.1/156568" rel="alternate"/>
<author>
<name>Monsalve Rodriguez, Catalina</name>
</author>
<id>https://hdl.handle.net/1721.1/156568</id>
<updated>2024-09-04T03:31:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Determining Optimal Halftone Angles for CMYK Printing
Monsalve Rodriguez, Catalina
CMYK halftone prints are all around us, yet the halftone angles used to generate these prints are traditionally set to specific values without substantial documentation explaining why these angles are optimal. Investigating the optimization of these angles is important for enhancing print quality and minimizing visual artifacts, which can significantly impact the visual appeal and accuracy of printed materials, especially in low-resolution printing such as relief and screen printing techniques. This research investigates optimal halftone angles at low resolutions. The algorithm for this system generates low-resolution images from an input image, aiming to cover the full range of permutations of possible angles for each halftone in discrete increments of 15°. We performed this on a varied range of input images and computed a similarity score between each output image and its original input image to assess a specific angle permutation’s performance. The study led to the formulation and validation of two hypotheses: 1) images with distinct halftone angles for each color channel generally achieve higher similarity scores than those with repeated angles; 2) permutations with the black halftone oriented at 0° benefit images with a high prevalence of black pixels. This thesis contributes to understanding halftone angle optimization in CMYK printing, offering practical guidelines for improving print quality and reducing visual artifacts, thus benefiting the printing industry and its diverse applications.
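A minimal sketch of the angle search described above (render_halftone and similarity are hypothetical placeholders for this work's renderer and similarity score):
from itertools import product

ANGLES = range(0, 180, 15)  # 12 candidate angles per channel, in 15-degree steps

def best_angle_assignment(image, render_halftone, similarity):
    # Exhaustively score every (C, M, Y, K) angle assignment against the input.
    scored = []
    for angles in product(ANGLES, repeat=4):
        output = render_halftone(image, angles)      # hypothetical low-res render
        scored.append((similarity(image, output), angles))
    return max(scored)  # (best score, best angle assignment)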
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Large Language Models (LLMs) for Automated Extraction and Processing of Complex Ordering Forms</title>
<link href="https://hdl.handle.net/1721.1/156567" rel="alternate"/>
<author>
<name>Daqqah, Bilal H.</name>
</author>
<id>https://hdl.handle.net/1721.1/156567</id>
<updated>2024-09-04T03:09:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Leveraging Large Language Models (LLMs) for Automated Extraction and Processing of Complex Ordering Forms
Daqqah, Bilal H.
Data extraction from business documents is a critical but under-exploited area capable of unlocking significant value from vast document archives. Traditional methods relying on manual intervention or outsourcing are inefficient, error-prone, and costly, and commercial Deep Learning-based and OCR solutions still struggle with highly unstructured documents. This thesis explores the use of Large Language Models (LLMs) to automate the extraction and processing of ordering forms and procurement documents in collaboration with SiliconExperts. These documents contain complex codes used in electronic component procurement, which guide the manufacture and specification of parts. We developed an end-to-end pipeline comprising four key modules: Page Classification, OCR and Table Extraction, LLM Inference, and Code Combination Generation. Two approaches for key-value extraction were compared: one-shot prompting with in-context learning using GPT-4 Turbo with Vision (GPT-4V) and a fine-tuned GPT-3.5 model, in which the GPT-4V approach demonstrated superior performance. The pipeline effectively generated correct code combinations with high accuracy, although data quality issues impacted precision and performance. This research highlights the potential of LLMs to transform document processing workflows, bridging the gap between academic advancements and practical business applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Cycle-Level Verification of Constant-Time Cryptography</title>
<link href="https://hdl.handle.net/1721.1/156566" rel="alternate"/>
<author>
<name>Xu, Jessica Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/156566</id>
<updated>2024-09-04T03:22:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Towards Cycle-Level Verification of Constant-Time Cryptography
Xu, Jessica Y.
Cryptographic primitives – hash functions, symmetric key encryption algorithms, asymmetric key exchange algorithms, and more – are used everywhere to achieve security in modern computing. Since these algorithms have complicated, math-heavy implementations, they are typically used through cryptographic library functions. However, many timing side-channel attacks, which leak information when execution time depends on secrets, have been found in popular cryptographic libraries, such as OpenSSL. Formal verification aims to rule out timing side channels in cryptographic software. This thesis presents Quake, a framework for verifying that cryptographic library functions are constant-time for a specific hardware implementation, regardless of where the code is located in memory. Quake represents the location of code in memory using symbolic addresses and introduces a ROM model that retrieves concrete memory data from symbolic addresses. This thesis evaluates Quake and demonstrates that it can detect address-dependent timing behavior and does so in a reasonable amount of time.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hybrid Induction Drive for a Magnetically Levitated Control Moment Gyroscope</title>
<link href="https://hdl.handle.net/1721.1/156565" rel="alternate"/>
<author>
<name>Gershon, Levi</name>
</author>
<id>https://hdl.handle.net/1721.1/156565</id>
<updated>2024-09-04T03:55:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Hybrid Induction Drive for a Magnetically Levitated Control Moment Gyroscope
Gershon, Levi
In order to support future astronautical missions, in light of the rapid growth of miniaturized smaller satellites, lower jitter and higher torque density multi-axis attitude control systems (ACS) will be needed [1]. This thesis aims to create a hybrid-drive reaction sphere with a spherical 1.5” diameter, diametrically magnetized grade N42 NdFeB rotor. A permanent magnet drive is used for vertical translation control, and an induction drive is used to spin the magnet about its axis of magnetization. Future work can then add more axes of permanent magnet drives to enable control about the other translation and rotation axes, such as was done in [2], setting the stage for full six-axis control of a monolithic rotor.&#13;
In this work, analytic models for both magnetic levitation of the rotor and the dipole-field induction are developed, leveraging previously reported models. Additionally, the gyroscopic precession potentially induced by a rotating dipole field is analyzed and determined to be negligible. A benchtop prototype of the system was designed, fabricated, and assembled, where a solenoid is used to magnetically levitate the rotor using its magnetization, and an induction motor is used to spin the rotor about its axis of magnetization. An optical sensor previously developed for position sensing was adapted for spin measurements at high speeds by creating an optical encoder pattern on the rotor. Speeds up to 401 RPM and a torque up to 3.0 μNm were measured, with no significant nutation observed, indicating such a hybrid drive may be a viable architecture for future reaction sphere ACS designs requiring both rotor simplicity and 6 axes of control.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laboratory-Scale Thermal Energy Grid Storage (TEGS) Prototype</title>
<link href="https://hdl.handle.net/1721.1/156562" rel="alternate"/>
<author>
<name>Buznitsky, Kyle Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/156562</id>
<updated>2024-09-04T04:03:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Laboratory-Scale Thermal Energy Grid Storage (TEGS) Prototype
Buznitsky, Kyle Joseph
Grid-scale long duration energy storage will be necessary to maintain grid reliability in the US and beyond as intermittent renewables become the dominant source of electricity generation. An appealing long duration energy storage technology is thermal energy storage due to its low energy-based cost. One embodiment of thermal energy storage is the thermal energy grid storage (TEGS) concept, which is an envisioned graphite-based thermal energy storage system cycling between 1900-2400°C. Such a system would pump molten tin as a heat transfer fluid and use thermophotovoltaics to convert the thermal energy back to electricity. While many of these individual components have been demonstrated in isolation, there has yet to be a system which combines all these technologies into a working prototype. The focus of this work is creating this prototype and operating it at an intermediate temperature to uncover and overcome any system integration challenges that arise. In this work, a laboratory-scale TEGS prototype was designed and tested at temperatures up to 1000°C, uncovering challenges that are applicable to many high-temperature processes. By doing so, this work hopes to identify design criteria for similar high-temperature systems that must overcome some of the same challenges.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Force Feedback and Tactile Sensing for Robotic Teleoperation of Contact Rich Manipulation Tasks</title>
<link href="https://hdl.handle.net/1721.1/156561" rel="alternate"/>
<author>
<name>Karpoor, Shreya S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156561</id>
<updated>2024-09-04T04:05:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Force Feedback and Tactile Sensing for Robotic Teleoperation of Contact Rich Manipulation Tasks
Karpoor, Shreya S.
Imitation learning has shown promising results in teaching robots new skills. We propose augmenting the ALOHA bimanual teleoperation system with haptic feedback to obtain higher quality expert demonstrations. We add two types of haptic feedback: force feedback and cutaneous feedback in both a real and simulation teleoperation system. Additionally, we propose to add tactile sensors to observe the impact of tactile data to imitation learning models in solving fine manipulation tasks.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Volterra System Analysis for an Electrochemical Sensor</title>
<link href="https://hdl.handle.net/1721.1/156552" rel="alternate"/>
<author>
<name>Iqbal, Billal</name>
</author>
<id>https://hdl.handle.net/1721.1/156552</id>
<updated>2024-09-04T03:07:44Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Volterra System Analysis for an Electrochemical Sensor
Iqbal, Billal
Current biological methods for quantifying bacterial and fungal populations are time- and labour-intensive, whilst remaining expensive to automate. A potential solution to this problem is an electrochemical sensor, which applies a stochastic voltage across a liquid medium and measures the resultant current flow. This data can then be used to model the liquid’s electrochemical interactions and monitor it for bacterial growth and spoilage. Linear dynamic impedance models have previously been explored for this. However, the ability to capture the nonlinear effects observed at higher voltages can provide greater insight into the liquid’s properties. This is extremely difficult with neural networks, which offer accurate predictive capabilities without much insight into the system. A different strategy is to model the liquid using a Volterra series representation. This work will document the integration of Volterra system identification capabilities within the sensor and its performance when modelling different liquid media, as well as modifications made to the sensor for the applications tested.
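For reference, a discrete-time Volterra series truncated at second order (the standard form such identification targets; notation assumed here, not quoted from the thesis) writes the measured current y[n] in terms of the applied voltage x[n] as
y[n] = h_0 + \sum_{k_1} h_1[k_1]\, x[n-k_1] + \sum_{k_1}\sum_{k_2} h_2[k_1,k_2]\, x[n-k_1]\, x[n-k_2],
where the kernels h_1 and h_2 capture the linear (impedance-like) and leading nonlinear behaviour of the medium, respectively.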
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The synthesis of ethylene urea</title>
<link href="https://hdl.handle.net/1721.1/156420" rel="alternate"/>
<author>
<name>Hansen, Floyd Allan.</name>
</author>
<id>https://hdl.handle.net/1721.1/156420</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1939-01-01T00:00:00Z</published>
<summary type="text">The synthesis of ethylene urea
Hansen, Floyd Allan.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1939; Includes bibliographical references (leaves 32-33).
</summary>
<dc:date>1939-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vapor formation of benzene in an immersed orifice</title>
<link href="https://hdl.handle.net/1721.1/156419" rel="alternate"/>
<author>
<name>Yeh, Hsuan.</name>
</author>
<author>
<name>Zhao, Yaodong.</name>
</author>
<id>https://hdl.handle.net/1721.1/156419</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1943-01-01T00:00:00Z</published>
<summary type="text">Vapor formation of benzene in an immersed orifice
Yeh, Hsuan.; Zhao, Yaodong.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1943; Includes bibliographical references (leaves 25-26).
</summary>
<dc:date>1943-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement of the characteristics of small waves</title>
<link href="https://hdl.handle.net/1721.1/156362" rel="alternate"/>
<author>
<name>Allen, John U.</name>
</author>
<author>
<name>Michel, John F.</name>
</author>
<id>https://hdl.handle.net/1721.1/156362</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1947-01-01T00:00:00Z</published>
<summary type="text">Measurement of the characteristics of small waves
Allen, John U.; Michel, John F.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1947; Bibliography: leaf 50.
</summary>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adversarial robustness without perturbations</title>
<link href="https://hdl.handle.net/1721.1/156344" rel="alternate"/>
<author>
<name>Rodríguez Muñoz, Adrán</name>
</author>
<id>https://hdl.handle.net/1721.1/156344</id>
<updated>2024-08-22T03:08:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Adversarial robustness without perturbations
Rodríguez Muñoz, Adrán
Models resistant to adversarial perturbations are stable around the neighbourhoods of input images, such that small changes, known as adversarial attacks, cannot dramatically change the prediction. Currently, this stability is obtained with Adversarial Training, which directly teaches models to be robust by training on the perturbed examples themselves. In this work, we show the surprisingly similar performance obtained by instead regularizing the input-gradients of unperturbed examples only. Regularizing the input-gradient norm is commonly believed to be significantly worse than Adversarial Training. Our experiments determine that the performance of gradient-norm regularization critically depends on the smoothness of the model's activation functions, and that it is in fact highly performant on modern vision transformers, which natively use smooth GELUs rather than piecewise-linear ReLUs. On ImageNet-1K, gradient-norm regularization achieves more than 90% of the performance of state-of-the-art Adversarial Training with PGD-3 (52% vs. 56%) with 60% of the training time and without complex inner maximization. Further experiments shed light on additional properties relating model robustness and the input-gradients of unperturbed images, such as asymmetric color statistics. Surprisingly, we also show that significant adversarial robustness may be obtained by simply conditioning gradients to focus on image edges, without explicit regularization of the norm.
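A minimal sketch of input-gradient norm regularization in this spirit (assuming a PyTorch classifier; names and the penalty weight are illustrative, not the paper's code):
import torch
import torch.nn.functional as F

def gradient_norm_loss(model, x, y, lam=1.0):
    # Standard loss plus a penalty on the loss gradient w.r.t. the unperturbed input.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)
    penalty = grad.flatten(1).norm(dim=1).pow(2).mean()  # squared L2 norm per image
    return loss + lam * penalty  # differentiable, so the penalty also trains the model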
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Score Distillation via DDIM Inversion</title>
<link href="https://hdl.handle.net/1721.1/156343" rel="alternate"/>
<author>
<name>Lukoianov, Artem S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156343</id>
<updated>2024-08-22T03:55:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Score Distillation via DDIM Inversion
Lukoianov, Artem S.
While 2D diffusion models generate realistic, high-detail images, 3D shape generation methods like Score Distillation Sampling (SDS) built on these 2D diffusion models produce cartoon-like, over-smoothed shapes. To help explain this discrepancy, in this paper we prove that the image guidance used in Score Distillation can be understood as the velocity field of a 2D denoising generative process, up to the choice of a noise term. In particular, after a change of variables, SDS resembles a high-variance version of Denoising Diffusion Implicit Models (DDIM) with a differently-sampled noise term: SDS introduces noise i.i.d. randomly at each step, while DDIM infers it from the previous noise predictions. This excessive variance can lead to over-smoothing and prevent the algorithm from generating realistic outputs. We show that a better noise approximation can be recovered by inverting DDIM in each SDS update step. This modification makes SDS’s generative process for 2D images identical to DDIM, up to our change of variables. In 3D, it removes over-smoothing, preserves higher-frequency detail, and brings the generation quality closer to that of 2D samplers. Experimentally, our method achieves better or similar 3D generation quality compared to other works that improve SDS, all without training additional neural networks or 3D supervision. Our findings bridge the gap between 2D and 3D asset generation.
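For context, the score distillation gradient as commonly written in the literature (standard notation, assumed here rather than quoted from this thesis) is
\nabla_\theta \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t,\epsilon}\left[ w(t)\,\big(\hat{\epsilon}_\phi(x_t; y, t) - \epsilon\big)\,\tfrac{\partial x}{\partial \theta} \right],
where x = g(\theta) is the rendered image, \hat{\epsilon}_\phi the diffusion model's noise prediction, \epsilon the injected noise, and w(t) a timestep weighting; the analysis above concerns how this noise term is sampled at each update step.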
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep and Dynamic Metabolic and Structural Imaging in Living Tissues</title>
<link href="https://hdl.handle.net/1721.1/156342" rel="alternate"/>
<author>
<name>Liu, Kunzan</name>
</author>
<id>https://hdl.handle.net/1721.1/156342</id>
<updated>2024-08-22T03:36:34Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Deep and Dynamic Metabolic and Structural Imaging in Living Tissues
Liu, Kunzan
Label-free imaging through two-photon autofluorescence (2PAF) of NAD(P)H allows for non-destructive and high-resolution visualization of cellular activities in living systems. However, its application to thick tissues and organoids has been restricted to penetration depths within 300µm, largely due to tissue scattering at the typical excitation wavelength (∼750nm) required for NAD(P)H. Here, we demonstrate that the imaging depth for NAD(P)H can be extended to over 700µm in living engineered human multicellular microtissues by adopting multimode fiber (MMF)-based low-repetition-rate high-peak-power three-photon (3P) excitation of NAD(P)H at 1100nm. This is achieved by having over 0.5MW peak power at the band of 1100±25nm through adaptively modulating multimodal nonlinear pulse propagation with a compact fiber shaper. Moreover, the 8-fold increase in pulse energy at 1100nm enables faster imaging of monocyte behaviors in the living multicellular models. These results represent a significant advance for deep and dynamic metabolic and structural imaging of intact living biosystems. The modular design (MMF with a slip-on fiber shaper) is anticipated to allow wide adoption of this methodology for demanding in vivo and in vitro imaging applications, including cancer research, autoimmune diseases, and tissue engineering.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Players with Bounded Randomness Capabilities</title>
<link href="https://hdl.handle.net/1721.1/156341" rel="alternate"/>
<author>
<name>Orzech, Edan</name>
</author>
<id>https://hdl.handle.net/1721.1/156341</id>
<updated>2024-08-22T04:00:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Players with Bounded Randomness Capabilities
Orzech, Edan
In this thesis I study the effect of bounded randomness capabilities on the outcomes of games, and their payoffs to the players. I study this subject from two perspectives. The first perspective is the ability to share randomness across team members playing against an opposing team. The second perspective is the capability to store the underlying distribution of the mixed strategy a player intends to play.&#13;
&#13;
The first perspective is the ability to share randomness across team members playing against an opposing team. I consider team zero-sum network congestion games played between a team of n agents and a team of k interceptors over a graph G.&#13;
The agents aim to minimize their collective cost of sending traffic over paths, while the interceptors aim to maximize the collective cost by adding tolls or congestion to road segments. I consider two cases: the correlated case, where agents have access to a shared source of randomness, and the uncorrelated case, where each agent has access only to its own source of randomness. I show that the additional cost that the agents have to incur due to being unable to share random bits is bounded by O(min(m_c(G),n)), where m_c(G) is the mincut size of G.&#13;
&#13;
The second perspective is the capability to store the underlying distribution of the mixed strategy a player intends to play. I define a measure of the complexity of finite probability distributions and study the complexity required to play Nash equilibria in finite two-player n × n games with rational payoffs.&#13;
My central results show that there exist games with a unique Nash equilibrium in which there is an exponential vs. linear gap between the complexities of the mixed distributions that the two players play. This gap induces asymmetries in the amounts of space required by the players to represent and sample from the corresponding distributions using known state-of-the-art sampling algorithms. I also establish exponential upper and lower bounds on the complexity of Nash equilibria in normal-form games.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Peer-to-Peer Group Communication for City-Scale Mesh Networks</title>
<link href="https://hdl.handle.net/1721.1/156340" rel="alternate"/>
<author>
<name>Sussman, William A.</name>
</author>
<id>https://hdl.handle.net/1721.1/156340</id>
<updated>2024-08-22T03:22:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Peer-to-Peer Group Communication for City-Scale Mesh Networks
Sussman, William A.
The Internet has become extremely centralized. The benefits of centralization have thus far outweighed the drawbacks, but users today are much more concerned about privacy, and reachability is increasingly threatened by natural disasters, political repression, cyberattacks, and human error. CityMesh provides an answer to this problem, constructing a decentralized mesh network out of wireless access points. To test our unicast routing protocol, we built a discrete-event network simulator using SimPy. However, we make several simplifying assumptions, and unicast is not sufficient for many applications. In this thesis, I show that our simulator nevertheless achieves 67.4% correlation with real data that we collected, and I generalize our simulator for multicast. Specifically, I compose our unicast primitive into multicast trees using three different topologies, and surprisingly find that Steiner trees perform worse than minimum spanning trees on average.
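As a sketch of the tree-construction step, the two best-performing topologies can be built with networkx as below; the grid topology, weights, and group membership are illustrative assumptions, and in the thesis the ranking comes from simulated unicast performance rather than tree weight.

    import networkx as nx
    from networkx.algorithms.approximation import steiner_tree

    G = nx.grid_2d_graph(10, 10)                   # toy stand-in for the mesh of APs
    nx.set_edge_attributes(G, 1.0, "weight")       # hop count as link cost
    group = [(0, 0), (9, 0), (5, 5), (0, 9), (9, 9)]   # multicast group members

    st = steiner_tree(G, group, weight="weight")        # spans only the group
    mst = nx.minimum_spanning_tree(G, weight="weight")  # spans the whole mesh

    print(st.size(weight="weight"), mst.size(weight="weight"))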
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Examining LLMs in Economic Settings</title>
<link href="https://hdl.handle.net/1721.1/156339" rel="alternate"/>
<author>
<name>Ross, Jillian A.</name>
</author>
<id>https://hdl.handle.net/1721.1/156339</id>
<updated>2024-08-22T03:40:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Examining LLMs in Economic Settings
Ross, Jillian A.
Humans are not homo economicus (i.e., rational economic beings). We exhibit systematic behavioral biases such as loss aversion, anchoring, framing, etc., which lead us to make suboptimal economic decisions. Insofar as such biases may be embedded in text data on which large language models (LLMs) are trained, to what extent are LLMs prone to the same behavioral biases? Understanding these biases in LLMs is crucial for deploying LLMs to support human decision-making. To enable the responsible deployment of LLMs, I propose economic alignment. Economic alignment is a specific form of AI alignment that provides a critical perspective to interrogate what human preferences we would like to incorporate into LLM decisions. To illustrate the power of economic alignment, I systematically study the economic decision-making behaviors of LLMs through utility theory, a paradigm at the core of modern economic theory. I apply experimental designs from human studies to LLMs and find that they are neither entirely human-like nor entirely economicus-like. Specifically, I find that LLMs generally exhibit stronger inequity aversion, stronger loss aversion, weaker risk aversion, and stronger time discounting compared to human subjects. I further find that most LLMs struggle to maintain consistent economic behavior across settings. Finally, I present a case study that examines how we can intervene through prompting to better align LLMs with economic goals.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Sleep Assessment from Nocturnal Breathing and Its Applications for Contactless Monitoring</title>
<link href="https://hdl.handle.net/1721.1/156338" rel="alternate"/>
<author>
<name>Li, Chao</name>
</author>
<id>https://hdl.handle.net/1721.1/156338</id>
<updated>2024-08-22T03:21:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automatic Sleep Assessment from Nocturnal Breathing and Its Applications for Contactless Monitoring
Li, Chao
The ability to assess sleep at home, capture sleep stages, and detect the occurrence of apnea (without on-body sensors) simply by analyzing the radio waves bouncing off people’s bodies while they sleep is quite powerful. Such a capability would allow for longitudinal data collection in patients’ homes, informing our understanding of sleep and its interaction with various diseases and their therapeutic responses, both in clinical trials and routine care. In this work, we develop an advanced machine-learning algorithm for passively monitoring sleep and nocturnal breathing from radio waves reflected off people while asleep. Validation results in comparison with the gold standard (i.e., polysomnography) (n=849) demonstrate that the model captures the sleep hypnogram (with an accuracy of 81% for 30-second epochs categorized into Wake, Light Sleep, Deep Sleep, or REM), detects sleep apnea (AUROC = 0.88), and measures the patient’s Apnea-Hypopnea Index (ICC=0.95; 95% CI = [0.93, 0.97]). Notably, the model exhibits equitable performance across race, sex, and age. Moreover, the model uncovers informative interactions between sleep stages and a range of diseases including neurological, psychiatric, cardiovascular, and immunological disorders. These findings not only hold promise for clinical practice and interventional studies but also underscore the significance of sleep as a fundamental component in understanding and managing various diseases.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Offline Reward Learning from Human Demonstrations and Feedback: A Linear Programming Approach</title>
<link href="https://hdl.handle.net/1721.1/156337" rel="alternate"/>
<author>
<name>Kim, Kihyun</name>
</author>
<id>https://hdl.handle.net/1721.1/156337</id>
<updated>2024-08-22T03:30:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Offline Reward Learning from Human Demonstrations and Feedback: A Linear Programming Approach
Kim, Kihyun
In many complex sequential decision-making tasks, there is often no known explicit reward function, and the only information available is human demonstrations and feedback data. To infer and shape the underlying reward function from this data, two key methodologies have emerged: inverse reinforcement learning (IRL) and reinforcement learning from human feedback (RLHF). Despite the successful application of these reward learning techniques across a wide range of tasks, a significant gap between theory and practice persists. This work aims to bridge this gap by introducing a novel linear programming (LP) framework tailored for offline IRL and RLHF. Most previous work in reward learning has employed the maximum likelihood estimation (MLE) approach, relying on prior knowledge or assumptions about decision or preference models. However, such dependencies can lead to robustness issues, particularly when there is a mismatch between the presupposed models and actual human behavior. In response to these challenges, recent research has shifted toward recovering a feasible reward set, a general set of rewards where the expert policy is optimal. In line with this evolving perspective, we focus on estimating the feasible reward set in an offline context. Utilizing pre-collected trajectories without online exploration, our framework estimates a feasible reward set from the primal-dual optimality conditions of a suitably designed LP, and offers an optimality guarantee with provable sample efficiency. One notable feature of our LP framework is the convexity of the resulting solution set, which facilitates the alignment of reward functions with human feedback, such as pairwise trajectory comparison data, while maintaining computational tractability and sample efficiency. Through analytical examples and numerical experiments, we demonstrate that our framework has the potential to outperform the conventional MLE approach.
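On a toy MDP, the feasible-set idea reduces to linear constraints saying the expert action's Q-value is maximal; below is a minimal sketch with scipy. The two-state MDP, and the use of basis vectors to extract the linear constraint rows, are our illustrative assumptions, not the thesis's LP.

    import numpy as np
    from scipy.optimize import linprog

    gamma = 0.9
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # transitions under action 0
                  [[0.5, 0.5], [0.6, 0.4]]])   # transitions under action 1
    expert = np.array([0, 0])                  # observed expert action per state

    def q_diff(r):
        # Q(s, expert) - Q(s, other) for reward vector r indexed as r[2*s + a];
        # linear in r, since V solves a linear system under the expert policy
        P_pi = np.array([P[expert[s], s] for s in range(2)])
        r_pi = np.array([r[2 * s + expert[s]] for s in range(2)])
        V = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
        q = lambda s, a: r[2 * s + a] + gamma * P[a, s] @ V
        return np.array([q(s, expert[s]) - q(s, 1 - expert[s]) for s in range(2)])

    rows = np.array([q_diff(e) for e in np.eye(4)]).T  # constraint matrix by linearity
    res = linprog(c=-np.ones(4), A_ub=-rows, b_ub=np.zeros(2),
                  bounds=[(-1, 1)] * 4)      # any r with rows @ r ≥ 0 is feasible
    print("one reward in the feasible set:", res.x)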
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Control and Information Exchange for Improved Flight Autonomy of Hybrid Powertrain Drones</title>
<link href="https://hdl.handle.net/1721.1/156336" rel="alternate"/>
<author>
<name>Kosanic, Miroslav</name>
</author>
<id>https://hdl.handle.net/1721.1/156336</id>
<updated>2024-08-22T03:06:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Distributed Control and Information Exchange for Improved Flight Autonomy of Hybrid Powertrain Drones
Kosanic, Miroslav
This work addresses the integration of mechanical dynamics and powertrain energy-conversion dynamics in Unmanned Aerial Vehicles (UAVs), focusing on hexacopters with hybrid powertrains. The goal is to maximize fuel savings through powertrain regulation. One of the factors that influence optimal internal combustion engine (ICE) operation is the passively managed battery, which should act as a fast supplementary power source. When the powertrain faces disturbances, ICE efficiency may decrease. The question is whether coordinated information exchange through distributed or decentralized control of the battery can outperform centralized powertrain control, which treats the battery as a disturbance in a component-isolated approach. The core contributions of this thesis include a novel modeling approach that integrates energy-conversion dynamics with the mechanical dynamics of the drone. A second contribution estimates the parameters of the nonlinear dynamics from flight-mission data and establishes theoretical conditions under which the system exhibits time-scale separation. Using an average-parameter model, a composite Linear Quadratic Regulator (LQR) policy with predictive control was implemented and simulated during the cruise phase of flight, achieving 4.5% fuel savings by recognizing battery disturbances. This centralized result is compared to the thesis’s third contribution, distributed and decentralized control of the battery. The two schemes differ in that decentralized control relies only on local information exchange, while distributed components can also obtain needed information from components to which they are not directly connected. Both approaches increase the supplementary power drawn from the battery, reducing the demand on the generator and ICE and saving fuel. Distributed control, however, supplies this power aggressively and without proper coordination, ending up as non-cooperative control, since it has no information about how much power the generator actually needs. The decentralized approach receives the required supplementary power from the generator, and because coordination is embedded in this information, it achieves cooperative control. For a fully charged battery during the cruise phase of flight, distributed control saved approximately 34.56% of the initial fuel, while decentralized control saved 50.05% of the initial fuel in the reservoir.
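For flavor, a composite LQR gain for a hypothetical linearized powertrain model can be computed in a few lines; the matrices below are illustrative stand-ins, not the thesis's identified model.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # states: [engine-speed error, bus-voltage error]; inputs: [throttle, battery current]
    A = np.array([[-0.5, 0.1], [0.0, -0.2]])
    B = np.array([[1.0, 0.0], [0.3, 0.8]])
    Q = np.diag([10.0, 1.0])    # penalize engine-speed deviation most
    R = np.diag([1.0, 0.1])     # cheap battery action encourages supplementary power

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)    # optimal state feedback u = -K x
    print(K)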
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mobile Underwater Backscatter Networking</title>
<link href="https://hdl.handle.net/1721.1/156335" rel="alternate"/>
<author>
<name>Wang, Purui</name>
</author>
<id>https://hdl.handle.net/1721.1/156335</id>
<updated>2024-08-22T04:01:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mobile Underwater Backscatter Networking
Wang, Purui
Underwater backscatter is a recently introduced technology for ultra-low-power underwater networking. Despite advances in this technology, existing systems are limited to static environments and cannot operate reliably under mobility. This thesis presents EchoRider, the first system that enables reliable underwater backscatter networking under mobility. EchoRider’s design introduces three new components. The first is a robust, chirp-based downlink protocol that brings the benefits of LoRa wireless networks to underwater backscatter, while accounting for the ultra-low-power nature of the backscatter sensor nodes. The second is a novel NACK-based backscatter retransmission algorithm, which enables reliable and efficient underwater backscatter. The third is a Doppler-resilient backscatter decoding pipeline on the uplink that features adaptive equalization, polar coding, and an equalizer retraining mechanism. We implemented an end-to-end prototype of EchoRider and compared it to a state-of-the-art baseline. Our evaluation across more than 1,200 experimental trials in real-world environments demonstrates that EchoRider outperforms the state-of-the-art baseline by more than 160× in BER under mobility, and that it can sustain typical underwater goodput (around 0.5kbps) in scenarios where the baseline’s goodput drops to zero at speeds as low as 0.1m/s. Finally, we demonstrate EchoRider in an example application involving an underwater mobile drone and a backscatter sensor node.
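The robustness of a chirp-based downlink can be seen in a few lines; this is a generic LoRa-style modulator and demodulator in Python, where the symbol length, noise level, and baseband form are our assumptions, not EchoRider's parameters.

    import numpy as np

    N = 512                                    # samples per symbol
    k = np.arange(N)
    upchirp = np.exp(1j * np.pi * k**2 / N)    # base chirp sweeping the band once

    def encode(sym):
        # a symbol is the up-chirp rotated by `sym` frequency bins
        return upchirp * np.exp(2j * np.pi * sym * k / N)

    def decode(rx):
        # dechirp, then the FFT peak bin recovers the symbol
        return int(np.abs(np.fft.fft(rx * np.conj(upchirp))).argmax())

    noisy = encode(137) + 0.5 * (np.random.randn(N) + 1j * np.random.randn(N))
    print(decode(noisy))    # 137: correlation gain makes chirps noise-robust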
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MeMo: Meaningful, Modular Controllers via Noise Injection</title>
<link href="https://hdl.handle.net/1721.1/156334" rel="alternate"/>
<author>
<name>Tjandrasuwita, Megan</name>
</author>
<id>https://hdl.handle.net/1721.1/156334</id>
<updated>2024-08-22T03:15:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">MeMo: Meaningful, Modular Controllers via Noise Injection
Tjandrasuwita, Megan
Robots are often built from standardized assemblies (e.g., arms, legs, or fingers), but each robot must be trained from scratch to control all the actuators of all the parts together. In this paper we demonstrate a new approach that takes a single robot and its controller as input and produces a set of modular controllers for each of these assemblies such that when a new robot is built from the same parts, its control can be quickly learned by reusing the modular controllers. We achieve this with a framework called MeMo which learns (Me)aningful, (Mo)dular controllers. Specifically, we propose a novel modularity objective to learn an appropriate division of labor among the modules. We demonstrate that this objective can be optimized simultaneously with standard behavior cloning loss via noise injection. We benchmark our framework in locomotion and grasping environments on simple to complex robot morphology transfer. We also show that the modules help in task transfer. On both structure and task transfer, MeMo achieves improved training efficiency compared to graph neural network and Transformer baselines.
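A minimal sketch of the noise-injection idea follows; the module sizes, the single leg module, and the plain squared-error cloning loss are our simplifications, and MeMo's actual architecture and objectives are richer.

    import torch
    import torch.nn as nn

    boss = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 8))
    leg = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 4))

    def cloning_loss(obs, expert_action, sigma=0.3):
        msg = boss(obs)
        msg = msg + sigma * torch.randn_like(msg)  # noise injected at the interface
        act = leg(msg)                             # forces robust, meaningful messages
        return ((act - expert_action) ** 2).mean() # standard behavior cloning loss

    loss = cloning_loss(torch.randn(64, 16), torch.randn(64, 4))
    loss.backward()    # after training, `leg` can be reused on a new morphology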
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Next generation tools for smart electron microscopy</title>
<link href="https://hdl.handle.net/1721.1/156333" rel="alternate"/>
<author>
<name>Sawmya, Shashata</name>
</author>
<id>https://hdl.handle.net/1721.1/156333</id>
<updated>2024-08-22T03:48:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Next generation tools for smart electron microscopy
Sawmya, Shashata
Smart Electron Microscopy (SmartEM) is a new generation of EM imaging technology that promises to revolutionize microscopy. In this research, we explore the integration of advanced techniques to enhance this technology further. These include alternative characterization of high-resolution rescanning, cutting-edge vision models, the incorporation of 3D information, and vision transformers for improved neuronal segmentation and pipeline speedup. Our goal is to develop tools that improve the existing SmartEM pipeline, making it more versatile and effective for deployment in various practical settings.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Deployment Algorithms for Large Language Models</title>
<link href="https://hdl.handle.net/1721.1/156332" rel="alternate"/>
<author>
<name>Xiao, Guangxuan</name>
</author>
<id>https://hdl.handle.net/1721.1/156332</id>
<updated>2024-08-22T03:30:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Efficient Deployment Algorithms for Large Language Models
Xiao, Guangxuan
Large language models (LLMs) have achieved impressive performance on various natural language tasks. However, their massive computational and memory requirements hinder widespread deployment. Additionally, deploying them on long inputs presents efficiency and accuracy challenges.&#13;
This proposal introduces two techniques to enable efficient and accurate quantization and streaming deployment of LLMs, facilitating their application in real-world systems with limited resources. First, we develop SmoothQuant, an accurate post-training 8-bit quantization method of both weights and activations in LLMs up to 530B parameters. By smoothing outliers in activations, SmoothQuant enables the use of efficient INT8 kernels on all matrix multiplications with negligible accuracy loss. Second, we present StreamingLLM, enabling LLMs to handle arbitrarily long text sequences using a fixed memory budget. It exploits "attention sinks" in LLMs to stably anchor attention computation on lengthy contexts. Experiments show StreamingLLM can model over 4 million tokens with up to 22x speedup compared to recomputation baselines. &#13;
Together, these two techniques can significantly reduce the computational and memory costs of large language models, increasing their accessibility for practical usage.
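The core of SmoothQuant's smoothing step fits in a few lines; this sketch applies the published per-channel scale rule s = max|X|^α / max|W|^(1−α), with made-up calibration numbers standing in for real activation statistics.

    import torch

    def smooth(act_absmax, W, alpha=0.5):
        # per-input-channel scales balancing activation and weight ranges
        w_absmax = W.abs().amax(dim=1)
        s = act_absmax.pow(alpha) / w_absmax.pow(1 - alpha)
        return s, W * s[:, None]    # fold s into the weights: (x/s) @ (s*W) = x @ W

    act_absmax = torch.tensor([30.0, 0.5, 4.0])  # calibration: activation outliers
    W = torch.randn(3, 4)                        # [in_features, out_features]
    s, W_smooth = smooth(act_absmax, W)
    # at inference, quantize (x / s) and W_smooth to INT8; outliers are tamed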
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Objective Bayesian Optimization with Asynchronous Batch Selection</title>
<link href="https://hdl.handle.net/1721.1/156331" rel="alternate"/>
<author>
<name>Zuniga, Ane</name>
</author>
<id>https://hdl.handle.net/1721.1/156331</id>
<updated>2024-08-22T03:25:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Multi-Objective Bayesian Optimization with Asynchronous Batch Selection
Zuniga, Ane
Multi-objective optimization problems are widespread in scientific, engineering, and design fields, necessitating a balance of trade-offs between conflicting objectives. These objectives often represent black-box functions, which are costly and time-consuming to evaluate. Multi-objective Bayesian optimization (MOBO) offers a valuable approach to guide the search for optimal solutions. To enhance efficiency, batch evaluations are employed to test multiple samples simultaneously, aiming to further reduce evaluation times. However, in scenarios involving varying evaluation times, standard batch strategies often lead to suboptimal resource utilization and inefficiencies. Asynchronous evaluations emerge as a promising solution to optimize resource usage under these conditions. Despite their potential, there has been no prior work or method specifically tailored to address asynchronous evaluations within the MOBO framework. To bridge this critical gap, this thesis proposes a comprehensive adaptation and analysis of existing Bayesian optimization methods for asynchronous MOBO scenarios. It also introduces a novel selection strategy, α-HVI, empirically validated through tests on both synthetic and real-world functions.
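For context, the standard building block behind such selection strategies is hypervolume improvement; here is a small 2-D sketch (α-HVI itself is the thesis's contribution and is not reproduced here, and the points and reference are invented).

    import numpy as np

    def pareto(points):
        pts = np.asarray(points, dtype=float)
        # keep points not dominated by any other (minimization of both objectives)
        return np.array([p for p in pts
                         if not any(np.all(p >= q) and np.any(p > q) for q in pts)])

    def hypervolume(front, ref):
        f = front[np.argsort(front[:, 0])]     # ascending first objective
        xs = np.append(f[1:, 0], ref[0])       # right edge of each dominated slab
        return float(np.sum((xs - f[:, 0]) * (ref[1] - f[:, 1])))

    front = pareto([[1.0, 3.0], [2.0, 1.0], [2.5, 2.5]])
    ref = np.array([4.0, 4.0])
    cand = pareto(np.vstack([front, [[1.5, 1.5]]]))
    print(hypervolume(cand, ref) - hypervolume(front, ref))   # HVI of the candidate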
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Causal Structure Learning through Double Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/156327" rel="alternate"/>
<author>
<name>Soleymani, Ashkan</name>
</author>
<id>https://hdl.handle.net/1721.1/156327</id>
<updated>2024-08-22T03:39:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Causal Structure Learning through Double Machine Learning
Soleymani, Ashkan
Learning the causal structure of a system solely from observational data is a fundamental yet intricate task with numerous applications across various fields, including economics, earth sciences, biology, and medicine. This task is challenging for several reasons: i) observational data alone, as opposed to interventional data, do not characterize the properties of the system under interventions on variables, and therefore carry information about correlation rather than cause and effect; ii) unobserved confounders, such as a hidden common cause, may bias the algorithms, leading to false causal inferences instead of revealing the correct causal structure; iii) the number of potential underlying structures increases super-exponentially with the number of variables, posing significant statistical and computational challenges; and iv) the identifiability problem arises because multiple causal models can yield the same observational distribution, making it impossible to conclusively determine the true structure. In this thesis, we focus on the partial identification of the underlying causal structure from observational data under minimal assumptions necessary for causal identification. To this end, inspired by the Debiased/Double machine learning machinery, we introduce efficient, practical, doubly robust algorithms enjoying a fast √n semiparametric convergence rate for three different tasks: (1) finding the direct causes of the target variable under cyclic and unseen confounded high-dimensional data with nonlinear structures, (2) testing Granger causality and therefore causal structure identification from temporal data, and (3) estimation of the counterfactual prediction function in the generalized nonlinear Instrumental Variables regression problem. As a natural use case, we tackle the offline policy evaluation of the confounded contextual bandit problem, where actions, contexts, and rewards have common unobserved confounding. By matching the upper bounds of the unconfounded contextual bandit setting, our algorithm is proven to achieve optimal sample complexity.
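The double/debiased machine learning recipe underlying such estimators is short in its simplest form: cross-fit the nuisance regressions, then regress residuals on residuals. This is a generic partially linear sketch with synthetic data, not the thesis's estimators.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    n = 2_000
    x = rng.normal(size=(n, 5))                            # confounders
    d = np.sin(x[:, 0]) + 0.5 * rng.normal(size=n)         # confounded treatment
    y = 0.7 * d + np.cos(x[:, 0]) + 0.5 * rng.normal(size=n)

    # cross-fitting keeps the final stage √n-consistent despite slow ML nuisances
    y_res = y - cross_val_predict(RandomForestRegressor(), x, y, cv=5)
    d_res = d - cross_val_predict(RandomForestRegressor(), x, d, cv=5)
    theta = LinearRegression(fit_intercept=False).fit(d_res[:, None], y_res).coef_[0]
    print(theta)    # close to the true effect 0.7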
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward In-Context Teaching</title>
<link href="https://hdl.handle.net/1721.1/156326" rel="alternate"/>
<author>
<name>Ross, Alexis J.</name>
</author>
<id>https://hdl.handle.net/1721.1/156326</id>
<updated>2024-08-22T04:07:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Toward In-Context Teaching
Ross, Alexis J.
When a teacher provides examples for a student to study, these examples must be informative, enabling a student to progress from their current state toward a target concept or skill. Good teachers must therefore simultaneously infer what students already know and adapt their teaching to students’ changing state of knowledge. There is increasing interest in using computational models, particularly large language models, as pedagogical tools. As students, language models in particular have shown a remarkable ability to adapt to new tasks given small numbers of examples. But how effectively can these models adapt as teachers to students of different types? To study this question, we introduce a suite of models and evaluation methods we call AdapT. AdapT has two components: (1) a collection of simulated Bayesian student models that can be used for evaluation of automated teaching methods; (2) a platform for evaluation with human students, to characterize the real-world effectiveness of these methods. We additionally introduce (3) AToM, a new probabilistic method for adaptive teaching that jointly infers students’ past beliefs and optimizes for the correctness of future beliefs. In evaluations of simulated students across three learning domains (fraction arithmetic, English morphology, function learning), AToM systematically outperforms LLM-based and standard Bayesian teaching models. In human experiments, both AToM and LLMs outperform non-adaptive random example selection. Our results highlight both the difficulty of the adaptive teaching task and the potential of learned adaptive models for solving it.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dialogue-driven Multi-Agent Activity Planning</title>
<link href="https://hdl.handle.net/1721.1/156325" rel="alternate"/>
<author>
<name>Sonar, Anoopkumar S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156325</id>
<updated>2024-08-22T03:53:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Dialogue-driven Multi-Agent Activity Planning
Sonar, Anoopkumar S.
A fundamental challenge in robotics is to build a general-purpose system with multiple agents that can perform a wide range of tasks based on specifications provided in natural language. This work presents a novel dialogue-driven activity planning framework for multiagent scenarios. We present a method that accepts commands from a user in natural language and translates them to an intermediate form called a state plan by leveraging large language models. We further experiment with chain-of-thought prompting to improve the translation from natural language to state plans. In conjunction with an action model, this state plan is utilized by a constraint-based generative planner called ctBurton, which outputs a full grounded plan in the form of a state and control trajectory. We demonstrate the utility of our method across three different scenarios: a presentation system, search-and-rescue, and multi-agent assembly, along with experiments on its scalability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Sample Complexity of Imitation Learning for Smoothed Model Predictive Control</title>
<link href="https://hdl.handle.net/1721.1/156324" rel="alternate"/>
<author>
<name>Pfrommer, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/156324</id>
<updated>2024-08-22T03:31:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">On the Sample Complexity of Imitation Learning for Smoothed Model Predictive Control
Pfrommer, Daniel
Recent work in imitation learning has shown that having an expert controller that is both suitably smooth and stable enables much stronger guarantees on the performance of the approximating learned controller. Constructing such smoothed expert controllers for arbitrary systems remains challenging, especially in the presence of input and state constraints. We show how such a smoothed expert can be designed for a general class of systems using a log-barrier-based relaxation of a standard Model Predictive Control (MPC) optimization problem. Our principal theoretical contributions include (1) demonstrating that the Jacobian of the barrier MPC controller can be written as a convex combination of pieces arising from the explicit MPC formulation, (2) bounding the Hessian of the barrier MPC as a function of the strength of the barrier function, and (3) presenting new results in both matrix and convex analysis for computing perturbed adjugate matrices and a tight (up to a constant) lower bound on the distance of the solution of a self-concordant-barrier problem to the constraint set. We consider randomized smoothing as a point of comparison and show empirically that, unlike randomized smoothing, barrier MPC yields better performance while guaranteeing constraint satisfaction.
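A one-dimensional toy shows the smoothing effect of the log-barrier relaxation; the dynamics, weights, and barrier strength below are illustrative, not the systems studied in the thesis.

    import numpy as np
    from scipy.optimize import minimize

    A, B, Q, R = 0.9, 0.5, 1.0, 0.1

    def barrier_mpc(state, eta=0.05):
        # one-step MPC with the box constraint |u| ≤ 1 replaced by a log barrier,
        # giving a smooth (hence imitable) expert everywhere in the interior
        def cost(u):
            x1 = A * state + B * u[0]
            return (Q * x1**2 + R * u[0]**2
                    - eta * (np.log(1 - u[0]) + np.log(1 + u[0])))
        return minimize(cost, x0=[0.0], bounds=[(-0.999, 0.999)]).x[0]

    for s in (0.5, 2.0, 10.0):
        print(s, barrier_mpc(s))   # the control saturates smoothly toward -1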
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensitive Multiplexed MicroRNA Spatial Profiling and Data Classification Framework Applied to Murine Breast Tumors</title>
<link href="https://hdl.handle.net/1721.1/156323" rel="alternate"/>
<author>
<name>Mohd, Omar Nazmi</name>
</author>
<id>https://hdl.handle.net/1721.1/156323</id>
<updated>2024-08-22T03:38:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Sensitive Multiplexed MicroRNA Spatial Profiling and Data Classification Framework Applied to Murine Breast Tumors
Mohd, Omar Nazmi
MicroRNAs (miRNAs) are small RNAs that are often dysregulated in many diseases, including cancers. They are highly tissue-specific and stable, thus making them particularly useful as biomarkers. As the spatial transcriptomics field advances, protocols that enable highly sensitive and spatially resolved detection become necessary to maximize the information gained from samples. This is especially true of miRNAs, where the location at which they are expressed within tissue can provide prognostic value with regard to patient outcome. Just as important as detection are ways to assess and visualize the miRNAs’ spatial information in order to leverage the power of spatial transcriptomics over that of traditional non-spatial bulk assays. We present a highly sensitive methodology that simultaneously quantitates and spatially detects seven miRNAs in situ on formalin-fixed paraffin-embedded tissue sections. This method utilizes rolling circle amplification (RCA) in conjunction with a dual scanning approach in nanoliter well arrays with embedded hydrogel posts. The hydrogel posts are functionalized with DNA probes that enable the detection of miRNAs across a large dynamic range (four orders of magnitude) and a limit of detection of 0.17 zeptomoles (1.7×10⁻⁴ attomoles). We applied our methodology coupled with a data analysis pipeline to K14-Cre Brca1^f/f Tp53^f/f murine breast tumors to showcase the information gained from this approach.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Charting EDA: Characterizing Interactive Visualization Use in Computational Notebooks with a Mixed-Methods Formalism</title>
<link href="https://hdl.handle.net/1721.1/156322" rel="alternate"/>
<author>
<name>Wootton, Dylan</name>
</author>
<id>https://hdl.handle.net/1721.1/156322</id>
<updated>2024-08-22T03:06:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Charting EDA: Characterizing Interactive Visualization Use in Computational Notebooks with a Mixed-Methods Formalism
Wootton, Dylan
Interactive visualizations are powerful tools for Exploratory Data Analysis (EDA), but how do they affect the observations analysts make about their data? We conducted a qualitative experiment with 13 professional data scientists analyzing two datasets with Jupyter notebooks, collecting a rich dataset of interaction traces and think-aloud utterances. By qualitatively coding participant utterances, we introduce a formalism that describes EDA as a sequence of analysis states, where each state comprises either a representation an analyst constructed (e.g., the output of a data frame, an interactive visualization, etc.) or an observation the analyst made with a representation (e.g., about missing data, the relationship between variables, etc.). By applying our formalism to our dataset, we are able to identify that interactive visualizations, on average, lead to earlier and more complex insights about relationships between dataset attributes compared to static visualizations. Moreover, by calculating metrics such as revisiting count and representational diversity, we are able to uncover that some representations serve more as "planning aids" during EDA than as tools strictly for hypothesis-answering. &#13;
We show how these measures helped identify other patterns of analysis behavior, such as the "80-20 rule", where a small subset of representations drove the majority of observations. Based on these findings, we offer design guidelines for interactive exploratory analysis tooling and reflect on future directions for studying the role that visualizations play in EDA.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Compact Hydraulic Head Auto-Regulating Module (CHARM) for Long-Term Constant Gravity-Driven Flow Microfluidics</title>
<link href="https://hdl.handle.net/1721.1/156321" rel="alternate"/>
<author>
<name>Xue, Fan</name>
</author>
<id>https://hdl.handle.net/1721.1/156321</id>
<updated>2024-08-22T03:00:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Compact Hydraulic Head Auto-Regulating Module (CHARM) for Long-Term Constant Gravity-Driven Flow Microfluidics
Xue, Fan
Gravity-driven flow is a simple microfluidic flow initiation and maintenance mechanism that requires no external power sources and little expertise to use. However, the driving forces created by hydraulic head differences gradually decrease during operation, resulting in unwanted decreased flow rates in many microfluidic applications. The existing methods to maintain a constant flow for gravity-driven mechanisms either require additional bulky control equipment, involve complex fabrication or operation, or introduce interfaces that lack robustness. To solve those problems, a compact hydraulic head auto-regulating module (CHARM) was designed and tested in this thesis. The module was able to maintain the liquid level at the microfluidic inlet port within a small fluctuation range without human intervention over a long operation time. The design’s compactness and its compatibility with standard 96-well plates enable high-throughput operations, and the chosen material’s bio-compatibility allows the devices’ use in cell-culture-related applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Congestion Control in Machine Learning Clusters</title>
<link href="https://hdl.handle.net/1721.1/156313" rel="alternate"/>
<author>
<name>Rajasekaran, Sudarsanan</name>
</author>
<id>https://hdl.handle.net/1721.1/156313</id>
<updated>2024-08-22T03:58:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Congestion Control in Machine Learning Clusters
Rajasekaran, Sudarsanan
This paper argues that fair-sharing, the holy grail of congestion control algorithms for decades, is not necessarily a desirable property in Machine Learning (ML) training clusters. We demonstrate that for a specific combination of jobs, introducing unfairness improves the training time for all competing jobs. We call this specific combination of jobs compatible and define the compatibility criterion using a novel geometric abstraction. Our abstraction rolls time around a circle and rotates the communication phases of jobs to identify fully compatible jobs. Using this abstraction, we demonstrate up to 1.3× improvement in the average training iteration time of popular ML models. We advocate that resource management algorithms should take job compatibility on network links into account. We then propose three directions to ameliorate the impact of network congestion in ML training clusters: (i) an adaptively unfair congestion control scheme, (ii) priority queues on switches, and (iii) precise flow scheduling.
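The circular abstraction can be prototyped directly: treat each job's communication phase as an arc on a ring whose circumference is the iteration time, and search for rotations that make the arcs disjoint. The durations and period below are made up for illustration.

    from itertools import product

    def compatible(durations, period):
        # return phase offsets making all communication arcs disjoint, if any exist
        def disjoint(s1, d1, s2, d2):
            return (s2 - s1) % period >= d1 and (s1 - s2) % period >= d2
        for starts in product(range(period), repeat=len(durations)):
            if all(disjoint(starts[i], durations[i], starts[j], durations[j])
                   for i in range(len(durations))
                   for j in range(i + 1, len(durations))):
                return starts
        return None

    print(compatible([4, 3, 2], period=10))   # (0, 4, 7): fully compatible jobs
    print(compatible([6, 6], period=10))      # None: sharing the link must hurt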
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploiting Observation Bias to Improve Matrix Completion</title>
<link href="https://hdl.handle.net/1721.1/156312" rel="alternate"/>
<author>
<name>Park, Charlotte</name>
</author>
<id>https://hdl.handle.net/1721.1/156312</id>
<updated>2024-08-22T03:40:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exploiting Observation Bias to Improve Matrix Completion
Park, Charlotte
We consider a variant of matrix completion where entries are revealed in a biased manner, adopting a model akin to that introduced by Ma &amp; Chen (2019) [1]. Instead of treating this observation bias as a disadvantage, as is typically the case, the goal is to exploit the shared information between the bias and the outcome of interest to improve predictions. Towards this, we consider a natural model where the observation pattern and outcome of interest are driven by the same set of underlying latent or unobserved factors. This leads to a two-stage matrix completion algorithm: first, recover (distances between) the latent factors by utilizing matrix completion for the fully observed noisy binary matrix corresponding to the observation pattern; second, utilize the recovered latent factors as features and sparsely observed noisy outcomes as labels to perform non-parametric supervised learning. The finite-sample error-rate analysis suggests that, ignoring logarithmic factors, this approach is competitive with the corresponding supervised learning parametric rates. By exploiting the shared information between the bias and the outcomes, the two-stage method thus performs comparably to having access to the unobserved latent factors. Through empirical evaluation using a real-world dataset, we find that with this two-stage algorithm, the estimates have 30x smaller mean squared error compared to traditional matrix completion methods, suggesting the utility of the model and the method proposed in this work.
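In spirit, the two-stage algorithm looks like the following sketch; the sigmoid observation model, plain SVD features, and k-nearest-neighbor averaging are simplifications of the paper's estimator, not its exact form.

    import numpy as np

    rng = np.random.default_rng(1)
    n, r = 300, 3
    U, V = rng.normal(size=(n, r)), rng.normal(size=(n, r))
    signal = U @ V.T
    prob = 1 / (1 + np.exp(-signal))          # observation bias: same latent factors
    mask = prob > rng.random((n, n))          # which entries are revealed
    obs = signal + 0.1 * rng.normal(size=(n, n))

    # stage 1: the binary observation matrix is fully observed, so an SVD of it
    # recovers (distances between) latent column factors
    _, S, Vh = np.linalg.svd(mask.astype(float), full_matrices=False)
    feat = Vh[:r].T * S[:r]

    # stage 2: predict a missing entry from observed entries in the same row whose
    # column features are nearest (non-parametric supervised learning)
    def predict(i, j, k=20):
        cols = np.flatnonzero(mask[i])
        near = cols[np.argsort(np.linalg.norm(feat[cols] - feat[j], axis=1))[:k]]
        return obs[i, near].mean()

    i, j = np.argwhere(~mask)[0]
    print(predict(i, j), signal[i, j])   # estimate vs. ground truth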
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Fabrication of High Frequency Electromagnetic Coil for Magnetic Particle Imaging</title>
<link href="https://hdl.handle.net/1721.1/156311" rel="alternate"/>
<author>
<name>Whittier, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/156311</id>
<updated>2024-08-22T04:04:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design and Fabrication of High Frequency Electromagnetic Coil for Magnetic Particle Imaging
Whittier, Elizabeth
Magnetic Particle Imaging (MPI) is a promising modality that uses Magnetic Nanoparticles (MNPs) for tracer-based imaging in biomedical applications. Aside from their use in imaging, MNPs are increasingly being utilized for therapeutics, controlled targeted drug delivery, and diagnostics. These techniques depend on the behavior of MNPs when exposed to an alternating magnetic field of a certain frequency and amplitude. However, the frequency typically used for imaging is 25kHz, while the transduction behaviors desired for these biomedical applications are seen at low radio-frequencies and higher-amplitude fields than the ones used for imaging. This work presents a high frequency electromagnetic coil which fulfills the operational, safety, and geometric parameters necessary for incorporation in a custom MPI system and will allow us to simultaneously image and stimulate at specific locations within the body of a mouse. Optimization of the instrument is done through experimentation and electromagnetic theory, with a focus on parasitic elements and metallurgical phenomena. A resonant tank and direct cooling with a water pump allow for increased field strength while maintaining thermal and radio-frequency energy absorption standards for in vivo experiments.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Secure Discovery of Genetic Relatives across Large-Scale and Distributed Genomic Datasets</title>
<link href="https://hdl.handle.net/1721.1/156310" rel="alternate"/>
<author>
<name>Hong, Matthew M.</name>
</author>
<id>https://hdl.handle.net/1721.1/156310</id>
<updated>2024-08-22T03:42:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Secure Discovery of Genetic Relatives across Large-Scale and Distributed Genomic Datasets
Hong, Matthew M.
Finding relatives within a study cohort is a necessary step in many genomic studies. However, when the cohort is distributed across multiple entities subject to data-sharing restrictions, performing this step often becomes infeasible. Developing a privacy-preserving solution for this task is challenging due to the significant burden of estimating kinship between all pairs of individuals across datasets. In this thesis, we introduce SF-Relate, a practical and secure federated algorithm for identifying genetic relatives across data silos. SF-Relate vastly reduces the number of individual pairs to compare while maintaining accurate detection through a novel locality-sensitive hashing approach. We assign individuals who are likely to be related together into buckets and then test relationships only between individuals in matching buckets across parties. To this end, we construct an effective hash function that captures identity-by-descent (IBD) segments in genetic sequences, which, along with a new bucketing strategy, enable accurate and practical private relative detection. To guarantee privacy, we introduce an efficient algorithm based on multiparty homomorphic encryption (MHE) to allow data holders to cooperatively compute the relatedness coefficients between individuals, and to further classify their degrees of relatedness, all without sharing any private data. We demonstrate the accuracy and practical runtimes of SF-Relate on the UK Biobank and All of Us datasets. On a dataset of 200K individuals split between two parties, SF-Relate detects 94.9% of third-degree relatives, and 99.9% of second-degree or closer relatives, within 15 hours of runtime. Our work enables secure identification of relatives across large-scale genomic datasets, and thus a wide range of downstream privacy-preserving collaborative studies.
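The bucketing idea can be miniaturized as follows: hash fixed genomic windows so that individuals sharing a long identical segment collide somewhere, then compare only colliding pairs. The toy binary genotypes and window hashing below are our simplifications; SF-Relate's real hash is built on IBD-aware features and runs under encryption.

    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)
    n, sites, win = 60, 1_200, 100
    geno = rng.integers(0, 2, size=(n, sites))
    geno[1] = geno[0]                                # make 0 and 1 relatives:
    geno[1, 600:] = rng.integers(0, 2, size=600)     # they share the left half

    buckets = defaultdict(set)
    for i in range(n):
        for w in range(0, sites, win):
            buckets[(w, geno[i, w:w + win].tobytes())].add(i)

    pairs = {tuple(sorted((a, b))) for members in buckets.values()
             for a in members for b in members if a != b}
    print(pairs)   # {(0, 1)}: kinship is computed for 1 pair instead of all 1770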
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Broadband single and multimode quantum light generation using optical nonlinearities</title>
<link href="https://hdl.handle.net/1721.1/156309" rel="alternate"/>
<author>
<name>Pontula, Sahil</name>
</author>
<id>https://hdl.handle.net/1721.1/156309</id>
<updated>2024-08-22T04:03:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Broadband single and multimode quantum light generation using optical nonlinearities
Pontula, Sahil
There is a growing effort in many fields of physics to bridge the classical and quantum realms. To the best of our understanding, our world is governed by the laws of quantum mechanics, but some of its most interesting features - such as the ability to morph uncertainty and noise - are washed out when system sizes become too large. Light is the ideal playground to investigate the interplay between the classical and quantum domains, with its well-known particle-wave duality and diverse behaviors at both the classical wave and single photon levels. To this end, there is significant interest in generating quantum states of light that can be harnessed for applications in the classical world we are most familiar with. However, maintaining "quantumness" as the number of photons grows large has proved challenging due to the detrimental effects of loss. In this thesis, I describe two theoretical proposals to make macroscopic quantum light a reality. I focus on bright intensity-squeezed states of light that have intensity noise far below the standard quantum limit. If realized, these states would bring the quantum mechanical phenomenon of squeezing to macroscopic intensities, which in turn could pave the way towards widespread quantum light sources that offer enhanced signal-to-noise ratios. I describe two distinct methods that use tools from nonlinear optics and dissipation engineering to realize broadband squeezing in both single and multiple frequency modes. I show that the squeezing can be tunable across a wide range of the electromagnetic spectrum, spanning frequencies where quantum light has never been generated.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Toward Cell-Specific Nanoparticle Delivery Systems</title>
<link href="https://hdl.handle.net/1721.1/156308" rel="alternate"/>
<author>
<name>Murphy, Sean</name>
</author>
<id>https://hdl.handle.net/1721.1/156308</id>
<updated>2024-08-22T03:00:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Toward Cell-Specific Nanoparticle Delivery Systems
Murphy, Sean
The targetable delivery of therapeutic nanoparticles remains a significant challenge in modern medicine, particularly due to the complexity, time, and expense involved in experimental design and optimization for cell-specific applications. To address this, NOCAP (Nanoparticle Optimization and Cell Affinity Prediction) was developed, a computational framework designed to (i) predict the affinities between nanoparticles and gene expression signatures of cancer cells and (ii) optimize nanoparticle formulations for specific targets. NOCAP successfully predicts cellular affinity for previously unseen cancer cell lines. The findings demonstrate the potential of machine learning to streamline the rational selection of target-specific nanoparticle drug delivery systems, paving the way for more efficient and precise therapeutic interventions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tailors: Accelerating Sparse Tensor Algebra by Overbooking Buffer Capacity</title>
<link href="https://hdl.handle.net/1721.1/156305" rel="alternate"/>
<author>
<name>Xue, Zi Yu</name>
</author>
<id>https://hdl.handle.net/1721.1/156305</id>
<updated>2024-08-22T03:05:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Tailors: Accelerating Sparse Tensor Algebra by Overbooking Buffer Capacity
Xue, Zi Yu
Sparse tensor algebra is a challenging class of workloads to accelerate due to few opportunities for data reuse and varying sparsity patterns. Prior sparse tensor algebra accelerators have explored tiling sparse tensors to increase exploitable data reuse and improve throughput, but typically allocate tile size in a given buffer for the worst-case number of nonzero values in a given tile. This severely limits the utilization of available memory resources and reduces data reuse. Other accelerators employ complex tiling during preprocessing or at runtime to determine the exact tile size based on its occupancy.&#13;
&#13;
This thesis proposes a speculative tensor tiling approach, called overbooking, to improve buffer utilization by taking advantage of the distribution of nonzero elements in sparse tensors to construct larger tiles with greater data reuse at the cost of occasional instances where data overflows the buffer. To ensure correctness, it proposes a low-overhead hardware mechanism, Tailors, that can tolerate data overflow by design with reasonable data reuse and demonstrates that Tailors can be easily integrated into the memory hierarchy of an existing sparse tensor algebra accelerator. To ensure high buffer utilization with minimal cost to find a tile size, this thesis introduces a statistical approach, Swiftiles, to pick a tile size so that tiles usually fit within the buffer’s capacity, but can potentially overflow, i.e., it overbooks the buffers. Across a suite of 22 sparse tensor algebra workloads, the proposed overbooking strategy introduces an average speedup of 52.7× and 2.3× and an average energy reduction of 22.5× and 2.5× over ExTensor without and with optimized tiling, respectively.
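The sizing logic behind overbooking is easy to sketch: size tiles for a high percentile of occupancy rather than the maximum, and let the overflow mechanism absorb the tail. The occupancy distribution and buffer capacity here are invented, and Swiftiles' actual statistics are richer.

    import numpy as np

    rng = np.random.default_rng(0)
    occupancy = rng.poisson(4.0, size=10_000)   # nonzeros per tile slice (sample)
    capacity = 512                              # buffer capacity in nonzeros

    tile_worst = capacity // occupancy.max()                  # conventional sizing
    tile_p99 = capacity // int(np.percentile(occupancy, 99))  # overbooked sizing

    print(tile_worst, tile_p99)   # overbooking admits larger tiles and more reuse,
                                  # at the cost of rare overflows handled in hardware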
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation</title>
<link href="https://hdl.handle.net/1721.1/156303" rel="alternate"/>
<author>
<name>Vendrow, Joshua L.</name>
</author>
<id>https://hdl.handle.net/1721.1/156303</id>
<updated>2024-08-22T03:03:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Dataset Interfaces: Diagnosing Model Failures Using Controllable Counterfactual Generation
Vendrow, Joshua L.
Distribution shift is a major source of failure for machine learning models. However, evaluating model reliability under distribution shift can be challenging, especially since it may be difficult to acquire counterfactual examples that exhibit a specified shift. In this work, we introduce the notion of a dataset interface: a framework that, given an input dataset and a user-specified shift, returns instances from that input distribution that exhibit the desired shift. We study a number of natural implementations for such an interface, and find that they often introduce confounding shifts that complicate model evaluation. Motivated by this, we propose a dataset interface implementation that leverages Textual Inversion to tailor generation to the input distribution.&#13;
We then demonstrate how applying this dataset interface to the ImageNet dataset enables studying model behavior across a diverse array of distribution shifts, including variations in background, lighting, and attributes of the objects.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extensible Platforms for Bosonic Quantum Error Correction</title>
<link href="https://hdl.handle.net/1721.1/156302" rel="alternate"/>
<author>
<name>Jha, Shantanu R.</name>
</author>
<id>https://hdl.handle.net/1721.1/156302</id>
<updated>2024-08-22T03:48:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Extensible Platforms for Bosonic Quantum Error Correction
Jha, Shantanu R.
Bosonic quantum error correction (QEC) encodes information in the phase space of a quantum harmonic oscillator and offers a hardware-efficient path towards fault-tolerant quantum information processing. With superconducting circuits, bosonic QEC using the Gottesman-Kitaev-Preskill (GKP) encoding has been achieved using the high-Q mode of a macroscopic 3D microwave cavity controlled via fixed-frequency transmon qubits [1, 2, 3, 4, 5, 6]. To date, all previous demonstrations have been limited by bit-flips in the transmon control qubit (with typical T1 lifetimes on the order of 100 microseconds), resulting in logical lifetimes that are upper-bounded by approximately 10 T1. In this thesis, we replace the transmon with a heavy-fluxonium control qubit, which has been shown to possess bit-flip lifetimes in excess of 1 millisecond [7, 8, 9, 10]. Furthermore, we propose using the asymmetrically threaded SQUID as a microwave-activated three-wave mixing coupler to yield faster GKP error-correction rates while suppressing inherited nonlinearity in our bosonic mode. As compared to direct dispersive coupling, this parametric coupling enables us to use a heavier, and therefore more bit-flip-protected, fluxonium qubit. Finally, with an accelerated error correction rate, we can use a lower-Q planar resonator to store logical quantum information in an extensible and fully 2D architecture.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Programmable Current Control of Silicon Field Emitter Arrays Using Gate-All-Around MOSFETs</title>
<link href="https://hdl.handle.net/1721.1/156301" rel="alternate"/>
<author>
<name>Sahagun, Alvaro</name>
</author>
<id>https://hdl.handle.net/1721.1/156301</id>
<updated>2024-08-22T03:36:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Programmable Current Control of Silicon Field Emitter Arrays Using Gate-All-Around MOSFETs
Sahagun, Alvaro
Silicon field emitter array (FEA) technology has great potential for applications such as electron microscopy, vacuum electronics, and X-ray sources. However, challenges such as emitter tip burnout and spatial and temporal non-uniformity of the emission current impede the adoption of FEAs in these applications. The current approach to address these challenges involves integrating a resistor, nanowire (NW) current limiter, or metal-oxide-semiconductor field-effect transistor (MOSFET) in series with the emitter tips to regulate current flow. The NW current limiter is preferred for its compact integration, which enables high emitter density in FEAs. However, it restricts the FEA's versatility by constraining the emission current to a fixed maximum value. MOSFETs, in contrast, provide programmable control over the emission current, enabling FEA versatility. Integrating planar MOSFETs into FEAs demands significant space, leading to a notable reduction in emitter density and FEA compactness. This thesis investigates the integration of vertical gate-all-around (GAA) MOSFETs with individual emitter tips as a solution to enable programmable emission control while preserving the compactness, high emitter density, and versatility of FEAs. To achieve this, SILVACO, a device simulation platform, was used to model the GAA MOSFET, field emitter, and combined GAA MOSFET-FEA devices. The simulation results provide insight into each device's current-voltage (I-V) characteristics, identifying performance-limiting challenges such as breakdown, kinks in the I-V characteristics, and quasi-saturation of the current. Various solutions to these challenges are explored through simulations, and the resulting models show the feasibility of using a GAA MOSFET as a voltage-controlled current source in series with individual field emitter tips to program the emission current.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GraphPipe: Improving the Performance and Scalability of DNN Training with Graph Pipeline Parallelism</title>
<link href="https://hdl.handle.net/1721.1/156292" rel="alternate"/>
<author>
<name>Kim, Sunghyun</name>
</author>
<id>https://hdl.handle.net/1721.1/156292</id>
<updated>2024-08-22T03:57:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">GraphPipe: Improving the Performance and Scalability of DNN Training with Graph Pipeline Parallelism
Kim, Sunghyun
Deep neural networks (DNNs) continue to grow rapidly in size, making it infeasible to train them on a single device. To address this challenge, current DNN training systems apply pipeline-parallel techniques. They split a DNN into multiple stages, construct a pipeline of them, and assign to each stage a distinct device. Multiple devices, each storing a partial segment of the DNN, perform their respective operations in sequence to train the whole. Applying pipeline-parallel techniques makes it feasible to train large-scale DNNs, yet there is still room for improvement. Existing approaches only consider sequential pipeline stages and thus ignore the inherent topology of a DNN to train. For example, when the architecture of a DNN has computationally-independent parallel branches, serial execution of them mandated by sequential pipeline stages unnecessarily lengthens the processing time of training data. This shortcoming leaves model-parallel opportunities untapped, resulting in suboptimal training throughput. In this paper, we develop graph pipeline parallelism (GPP), a new pipeline-parallel scheme that partitions a DNN into pipeline stages whose dependencies are identified by a directed acyclic graph. GPP generalizes current sequential pipeline stages. By constructing the pipeline based on the DNN topology, GPP enables concurrent execution of computationally independent DNN segments. GPP then optimizes micro-batch schedules for these stages, and parallelizes large-scale DNN training across multiple devices. We show that GPP achieves reduced memory consumption and improved training throughput. We also develop GraphPipe, a distributed system that leverages GPP strategies to enable performant and scalable DNN training. Evaluation on a variety of DNNs demonstrates that GraphPipe outperforms existing pipeline-parallel systems such as PipeDream and Piper by up to 1.6×. Despite the fact that GPP involves a much larger search space of parallelization strategies, GraphPipe reduces the search time by 9–21× compared to PipeDream and Piper.
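The difference between sequential and graph pipeline stages is visible on a four-block toy network; the block names below are illustrative.

    import networkx as nx

    g = nx.DiGraph([("stem", "branch_a"), ("stem", "branch_b"),
                    ("branch_a", "head"), ("branch_b", "head")])

    # sequential pipelining serializes the independent branches:
    print(list(nx.topological_sort(g)))
    # graph pipeline parallelism keeps only true dependencies, so the two
    # branches can run concurrently on different devices:
    print([sorted(gen) for gen in nx.topological_generations(g)])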
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LLM-Directed Agent Models in Cyberspace</title>
<link href="https://hdl.handle.net/1721.1/156291" rel="alternate"/>
<author>
<name>Laney, Samuel P.</name>
</author>
<id>https://hdl.handle.net/1721.1/156291</id>
<updated>2024-08-22T04:03:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">LLM-Directed Agent Models in Cyberspace
Laney, Samuel P.
Network penetration testing, a proactive method for identifying vulnerabilities in cyberspace, has long been the domain of human experts. However, rapid advancements in machine learning have opened up new possibilities for automating many of these tasks. This thesis aims to explore the application of Large Language Models (LLMs) for automating penetration tests and Cyber Capture the Flag (CTF) challenges, bridging the gap between static tools and dynamic human intuition in cybersecurity.&#13;
This work provides an evaluation framework for assessing the performance of LLMs in autonomously solving CTF challenges, with an emphasis on understanding the capabilities, limitations, and best prompting strategies for LLMs in this domain. Notably, this thesis presents an agent configuration that offers a 102% improvement in challenge completion on a database of PicoCTF challenges compared to the published baseline. By analyzing a variety of agent strategies, response formats, and historical action representations in the context of CTF challenges, this work aims to provide insights into the best practices and limitations in leveraging LLMs for cybersecurity tasks. Additionally, this work proposes a hierarchical architecture to guide an LLM-enabled agent in performing complex, multi-step penetration testing tasks with strategic foresight. This proof-of-concept approach shows success in entry-level challenges. While LLMs exhibit impressive capabilities, they are limited out of the box in their ability to solve complex, multi-step tasks requiring exploration, necessitating approaches such as those described in this work to improve performance in these areas.
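As a rough illustration of the agent loop such a system needs (a hedged sketch; the callable `query_llm`, the prompt format, and the FLAG convention are hypothetical placeholders, not the interface used in the thesis):

```python
import subprocess

def ctf_agent(goal, query_llm, max_steps=10):
    """query_llm: callable(str) -> str, standing in for a real LLM API call."""
    history = [f"Goal: {goal}. Reply with one shell command, or FLAG:<flag>."]
    for _ in range(max_steps):
        action = query_llm("\n".join(history)).strip()
        if action.startswith("FLAG:"):
            return action[len("FLAG:"):].strip()  # agent claims the flag
        out = subprocess.run(action, shell=True, capture_output=True,
                             text=True, timeout=30)
        # Feed a truncated observation back so the model can plan its next
        # step; the history representation is one of the design axes studied.
        history.append(f"$ {action}\n{out.stdout[-2000:]}{out.stderr[-500:]}")
    return None
```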
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Balancing Teacher Following and Reward Maximization in Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/156290" rel="alternate"/>
<author>
<name>Shenfeld Amit, Idan</name>
</author>
<id>https://hdl.handle.net/1721.1/156290</id>
<updated>2024-08-22T03:35:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Balancing Teacher Following and Reward Maximization in Reinforcement Learning
Shenfeld Amit, Idan
Learning from rewards (i.e., reinforcement learning or RL) and learning to imitate a teacher (i.e., teacher-student learning) are two established approaches for solving sequential decision-making problems. To combine the benefits of these different forms of learning, it is common to train a policy to maximize a combination of reinforcement and teacher-student learning objectives. However, without a principled method to balance these objectives, prior work used heuristics and problem-specific hyperparameter searches to balance the two objectives. We present a principled approach, along with an approximate implementation, for dynamically and automatically balancing when to follow the teacher and when to use rewards. The main idea is to adjust the importance of teacher supervision by comparing the agent’s performance to the counterfactual scenario of the agent learning without teacher supervision, from rewards alone. If using teacher supervision improves performance, its importance is increased; otherwise, it is decreased. We investigate the capabilities of this algorithm against strong baselines across diverse domains.
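The balancing rule described above can be sketched in a few lines (illustrative only; the step size, bounds, and performance estimates are assumptions, not the thesis's exact update):

```python
def update_teacher_weight(weight, with_teacher_return, reward_only_return,
                          step=0.05, lo=0.0, hi=1.0):
    """with_teacher_return: performance of the agent trained with teacher
    supervision; reward_only_return: estimated performance of the
    counterfactual agent learning from rewards alone."""
    if with_teacher_return > reward_only_return:
        return min(hi, weight + step)   # teacher is helping: trust it more
    return max(lo, weight - step)       # teacher is holding us back

# Schematic combined objective per update:
#   loss = weight * imitation_loss + (1 - weight) * rl_loss
```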
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Short-visible-wavelength GHz Display</title>
<link href="https://hdl.handle.net/1721.1/156289" rel="alternate"/>
<author>
<name>Propson, Thomas C.</name>
</author>
<id>https://hdl.handle.net/1721.1/156289</id>
<updated>2024-08-22T03:53:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Short-visible-wavelength GHz Display
Propson, Thomas C.
Applications such as quantum control, manufacturing, biology, and sensing require blue and ultraviolet light modulated in space and time. We propose and implement a strategy for fast, individual, coherent control of spatially-multiplexed channels at blue and ultraviolet wavelengths by combining an integrated photonic modulator array with a strong pump beam for sum-frequency generation in a bulk nonlinear crystal. We realize a 4×4 array of amplitude-modulated spots at 420 nm with a 3 dB bandwidth of 2 GHz.
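For orientation, sum-frequency generation conserves photon energy, so the output wavelength satisfies 1/λ_out = 1/λ_a + 1/λ_b. A quick check (the 1550 nm input below is an assumed example, not a value from the thesis):

```python
def sfg_partner_wavelength(lam_out_nm, lam_a_nm):
    # photon-energy conservation: 1/lam_out = 1/lam_a + 1/lam_b
    return 1.0 / (1.0 / lam_out_nm - 1.0 / lam_a_nm)

# A 1550 nm modulated beam would need a ~576 nm pump to reach 420 nm output.
print(round(sfg_partner_wavelength(420.0, 1550.0), 1))  # -> 576.1
```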
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Space-Efficient and Noise-Robust Quantum Factoring</title>
<link href="https://hdl.handle.net/1721.1/156288" rel="alternate"/>
<author>
<name>Ragavan, Seyoon</name>
</author>
<id>https://hdl.handle.net/1721.1/156288</id>
<updated>2024-08-22T03:08:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Space-Efficient and Noise-Robust Quantum Factoring
Ragavan, Seyoon
We provide two improvements to Regev's quantum factoring algorithm (arXiv:2308.06572), addressing its space efficiency and its noise tolerance. &#13;
    &#13;
Our first contribution is to improve the quantum space efficiency of Regev's algorithm while keeping the circuit size the same. Our main result constructs a quantum factoring circuit using O(n log n) qubits and O(n^(3/2) log n) gates. We achieve the best of Shor and Regev (up to a logarithmic factor in the space complexity): on the one hand, Regev's circuit requires O(n^(3/2)) qubits and O(n^(3/2) log n) gates, while Shor's circuit requires O(n^2 log n) gates but only O(n) qubits. As with Regev, to factor an n-bit integer N, we run our circuit independently ≈ sqrt{n} times and apply Regev's classical postprocessing procedure. &#13;
&#13;
Our optimization is achieved by implementing efficient and reversible exponentiation with Fibonacci numbers in the exponent, rather than the usual powers of 2, adapting work by Kaliski (arXiv:1711.02491) from the classical reversible setting to the quantum setting. This technique also allows us to perform quantum modular exponentiation that is efficient in both space and size without requiring significant precomputation, a result that may be useful for other quantum algorithms. A key ingredient of our exponentiation implementation is an efficient circuit for a function resembling in-place quantum-quantum modular multiplication. This implementation works with only black-box access to any quantum circuit for out-of-place modular multiplication, which we believe is yet another result of potentially broader interest. Additionally, we show how to generalize our reversible exponentiation technique beyond the Fibonacci numbers to obtain constant-factor improvements in the number of qubits and/or gates.&#13;
&#13;
Our second contribution is to show that Regev's classical postprocessing procedure can be modified to tolerate a constant fraction of the quantum circuit runs being corrupted by errors. In contrast, Regev's analysis of his classical postprocessing procedure requires all  ≈ sqrt{n} runs to be successful. In a nutshell, we achieve this using lattice reduction techniques to detect and filter out corrupt samples.
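As a purely classical illustration of the exponent structure used here (not the quantum circuit itself): since F(k+1) = F(k) + F(k-1), we have a^F(k+1) = a^F(k) · a^F(k-1) mod N, so each step is a single modular multiplication of two running registers, which is the pattern the thesis makes reversible and quantum:

```python
def fibonacci_power(a, k, N):
    """Return a^F(k) mod N, with F(1) = F(2) = 1."""
    x, y = a % N, a % N            # (a^F(1), a^F(2))
    for _ in range(k - 2):
        x, y = y, (x * y) % N      # (a^F(i), a^F(i+1)) -> (a^F(i+1), a^F(i+2))
    return y if k >= 2 else x

assert fibonacci_power(3, 10, 1000) == pow(3, 55, 1000)  # F(10) = 55
```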
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sparse Expansion and Neuronal Disentanglement</title>
<link href="https://hdl.handle.net/1721.1/156287" rel="alternate"/>
<author>
<name>Kong, Linghao</name>
</author>
<id>https://hdl.handle.net/1721.1/156287</id>
<updated>2024-08-22T03:01:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Sparse Expansion and Neuronal Disentanglement
Kong, Linghao
We show how to improve the inference efficiency of an LLM by expanding it into a mixture of sparse experts, where each expert is a copy of the same weights and one-shot pruned for a specific cluster of input values. We call this approach Sparse Expansion. We show that for models like Llama 2 7B, as we increase the number of experts, Sparse Expansion outperforms all other one-shot sparsification approaches for the same FLOPs budget, and this gap grows as sparsity increases. But why? To answer this, we provide strong evidence that the mixture of sparse experts is effectively disentangling the input-output relationship of every individual neuron. Sparse experts approximate a neuron’s dense output distribution with fewer weights by decomposing the distribution into a collection of simpler ones, each with a separate sparse dot product covering it. Interestingly, we show that the Wasserstein distance between a neuron’s output distribution and a Gaussian distribution is an indicator of its entanglement level and contribution to the accuracy of the model. Every layer of an LLM has highly entangled neurons, and model performance suffers more when these are sparsified as opposed to others. We believe that these neurons may have implications beyond sparsity in understanding the performance of LLMs.
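A toy rendering of the recipe (hedged: routing and one-shot pruning are reduced here to k-means plus input-scaled magnitude pruning, a simplification of the methods compared in the thesis):

```python
import numpy as np
from sklearn.cluster import KMeans

def sparse_expansion(W, X, n_experts=4, sparsity=0.5):
    """W: (out, in) dense weights; X: (n, in) calibration inputs.
    Returns one pruned copy of W per input cluster, plus cluster labels."""
    labels = KMeans(n_clusters=n_experts, n_init=10).fit_predict(X)
    experts = []
    for e in range(n_experts):
        Xe = X[labels == e]
        # Score each weight by |w| scaled by the mean input magnitude seen
        # on this cluster, then zero out the lowest-scoring fraction.
        score = np.abs(W) * np.abs(Xe).mean(axis=0)
        experts.append(np.where(score >= np.quantile(score, sparsity), W, 0.0))
    return experts, labels
```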
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Device Stack Optimization for Protonic Non-Volatile Programmable Resistors</title>
<link href="https://hdl.handle.net/1721.1/156286" rel="alternate"/>
<author>
<name>Shen, Dingyu</name>
</author>
<id>https://hdl.handle.net/1721.1/156286</id>
<updated>2024-08-22T03:11:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Device Stack Optimization for Protonic Non-Volatile Programmable Resistors
Shen, Dingyu
Analog computing could alleviate computational bottlenecks in digital deep learning systems by utilizing local information processing through the physical properties of devices, such as electrochemical ion-intercalation in three-terminal devices where channel resistance is modulated by ionic exchange via an electrolyte. Previous work has demonstrated such ionic programmable resistors featuring WO₃ as the channel, phosphorous-doped SiO₂ (PSG) as the electrolyte, Pd as the gate reservoir, and protons as the ions. This thesis aimed to optimize the device stack in four directions and demonstrated a symmetric WO₃-PSG-WO₃ structure in a CMOS-compatible process, with the help of the circular transfer length model (CTLM), which efficiently examines the resistance properties of WO₃. We have explored: (a) device protonation as part of the fabrication process, (b) encapsulation preventing proton depletion during device fabrication and operation, (c) contact metal optimization to replace gold with a CMOS-compatible material, (d) a PSG evaluation vehicle for device performance optimization. The symmetric device combining all the stack optimizations features non-volatile and repeatable conductance modulation with voltage pulses.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stochastic In-memory Computing Using Magnetic Tunnel Junctions</title>
<link href="https://hdl.handle.net/1721.1/156285" rel="alternate"/>
<author>
<name>Wang, Qiuyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/156285</id>
<updated>2024-08-22T03:21:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Stochastic In-memory Computing Using Magnetic Tunnel Junctions
Wang, Qiuyuan
Current computing hardware based on the von Neumann architecture and digital CMOS circuits faces strong challenges in scaling up further for big AI models and data-centric applications. However, while alternatives are being actively studied, it is still not clear which alternative computing paradigm is the best solution considering fabrication maturity, scalability, operation conditions, cost, power/area efficiency, and so on. In this thesis, we propose a new alternative computing framework – stochastic in-memory computing using magnetic tunnel junctions. By introducing thermally stable and unstable magnetic tunnel junctions as CMOS-compatible circuit building blocks, both general-purpose and application-specific in-memory computing accelerators can be synthesized, providing a versatile and highly efficient hardware design framework for multiple applications. A deep learning accelerator is implemented and benchmarked on an FPGA following the proposed stochastic in-memory computing architecture, with stochastic bitstreams sampled from thermally unstable magnetic tunnel junctions fabricated in the lab. Hardware designs for a Bayesian inference accelerator and an Ising machine are also provided. Our results show that magnetic tunnel junctions could open up a rich design space for future computing hardware.
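To see why stochastic bitstreams are a natural fit, consider the textbook stochastic-computing primitive such hardware builds on: a value p in [0, 1] encoded as a Bernoulli(p) bitstream can be multiplied by another with a single AND gate. Here numpy's RNG stands in for sampling a thermally unstable MTJ:

```python
import numpy as np

rng = np.random.default_rng(0)

def bitstream(p, n):
    return rng.random(n) < p   # software stand-in for an unstable MTJ readout

a, b, n = 0.6, 0.7, 100_000
product = (bitstream(a, n) & bitstream(b, n)).mean()
print(product)  # ~0.42 = 0.6 * 0.7, with O(1/sqrt(n)) sampling error
```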
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Worst-case Performance of Popular Approximate Nearest Neighbor Search Implementations: Guarantees and Limitations</title>
<link href="https://hdl.handle.net/1721.1/156284" rel="alternate"/>
<author>
<name>Xu, Haike</name>
</author>
<id>https://hdl.handle.net/1721.1/156284</id>
<updated>2024-08-22T03:48:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Worst-case Performance of Popular Approximate Nearest Neighbor Search Implementations: Guarantees and Limitations
Xu, Haike
Graph-based approaches to nearest neighbor search are popular and powerful tools for handling large datasets in practice, but they have limited theoretical guarantees. We study the worst-case performance of recent graph-based approximate nearest neighbor search algorithms, such as HNSW, NSG and DiskANN. For DiskANN, we show that its “slow preprocessing” version provably supports approximate nearest neighbor search queries with a constant approximation ratio and poly-logarithmic query time, on data sets with bounded “intrinsic” dimension. For the other data structure variants studied, including DiskANN with “fast preprocessing”, HNSW and NSG, we present a family of instances on which the empirical query time required to achieve a “reasonable” accuracy is linear in instance size. For example, for DiskANN, we show that the query procedure can take at least 0.1n steps on instances of size n before it encounters any of the 5 nearest neighbors of the query.
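For reference, the query procedure these bounds concern is, schematically, a greedy walk with a bounded candidate list (a generic sketch in the spirit of DiskANN/HNSW/NSG, not any one codebase):

```python
import heapq

def greedy_search(graph, dist, start, query, beam=4, limit=10_000):
    """graph: node -> list of neighbors; dist(a, b): metric between points."""
    visited = {start}
    frontier = [(dist(start, query), start)]   # min-heap of unexpanded nodes
    best = list(frontier)
    for _ in range(limit):
        if not frontier:
            break
        _, node = heapq.heappop(frontier)
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                d = dist(nb, query)
                heapq.heappush(frontier, (d, nb))
                best.append((d, nb))
        best = heapq.nsmallest(beam, best)     # keep candidate list bounded
    return best[0][1]                          # closest node found
```

The hard instances in the thesis are built so that this walk must traverse roughly 0.1n edges before any true near neighbor enters the candidate list.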
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Capacity of Scalar Gaussian Channels Subject to State Obfuscation</title>
<link href="https://hdl.handle.net/1721.1/156283" rel="alternate"/>
<author>
<name>Lev, Omri Yaacov</name>
</author>
<id>https://hdl.handle.net/1721.1/156283</id>
<updated>2024-08-22T03:45:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">On the Capacity of Scalar Gaussian Channels Subject to State Obfuscation
Lev, Omri Yaacov
The problem of communication over multiple variants of the scalar Gaussian fading channel, subject to a state-obfuscation constraint imposed in the form of near independence between the channel outputs and the channel coefficients, is studied. Defining the operational capacity as the maximal achievable rate under the state-obfuscation constraint, an informational counterpart is derived, which is then proved to coincide with the operational capacity. Conditions for this capacity to be non-zero, and closed-form solutions for the capacity in the high signal-to-noise ratio (SNR) limit, are derived.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating the Impact of Outlier Channels for Language Model Quantization with Activation Regularization</title>
<link href="https://hdl.handle.net/1721.1/156280" rel="alternate"/>
<author>
<name>Nrusimha, Aniruddha</name>
</author>
<id>https://hdl.handle.net/1721.1/156280</id>
<updated>2024-08-22T03:01:23Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mitigating the Impact of Outlier Channels for Language Model Quantization with Activation Regularization
Nrusimha, Aniruddha
We consider the problem of accurate quantization for language models, where both the weights and activations are quantized to 4 bits per parameter with uniform quantization, the lowest bitwidth format natively supported by existing GPU hardware. In this context, the key challenge is activation quantization: it is known that language models contain outlier channels whose values on average are orders of magnitude higher than those of other channels, which prevents accurate low-bitwidth quantization with known techniques. We systematically study this phenomenon and find that these outlier channels emerge early in training, and that they occur more frequently in layers with residual streams. We then propose a simple strategy which regularizes a layer’s inputs via quantization-aware training (QAT) and its outputs via activation kurtosis regularization. We show that regularizing both the inputs and outputs is crucial for preventing a model from "migrating" the difficulty of input quantization to the weights, which makes post-training quantization (PTQ) of weights more difficult. When combined with weight PTQ, we show that our approach can obtain a W4A4 model with integer quantization that performs competitively with the standard-precision W16A16 baseline.
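A sketch of what kurtosis regularization on a layer's outputs can look like (illustrative assumptions: the Gaussian target of 3 and the squared penalty are one natural choice, not necessarily the thesis's exact loss):

```python
import torch

def kurtosis_penalty(x, target=3.0, eps=1e-6):
    # Penalize deviation of the empirical kurtosis from the Gaussian value,
    # discouraging the heavy-tailed outlier channels that break 4-bit formats.
    mu, var = x.mean(), x.var() + eps
    kurt = ((x - mu) ** 4).mean() / var ** 2
    return (kurt - target) ** 2

# Schematic training objective: total = task_loss + lam * kurtosis_penalty(h)
```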
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Plan and Planning to Learn in Long-Horizon Robotics Tasks</title>
<link href="https://hdl.handle.net/1721.1/156279" rel="alternate"/>
<author>
<name>Kumar, Nishanth Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/156279</id>
<updated>2024-08-22T03:33:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Learning to Plan and Planning to Learn in Long-Horizon Robotics Tasks
Kumar, Nishanth Jay
A longstanding goal of robotics research has been to produce a single agent capable of solving a variety of useful long-horizon tasks, such as making a cup of tea or tidying up a living room, in multiple different environments (i.e., in any household). In recent years, two dominant paradigms have emerged for constructing such a system: end-to-end model-free learning and model-based planning. Approaches from both paradigms have produced impressive isolated results, but both paradigms themselves are known to have significant limitations. Learning-based approaches often require impractical amounts of data, and struggle to generalize beyond the data and tasks they have been trained on. Planning-based approaches depend on models, which often require significant manual engineering to define, especially as the number and complexity of tasks of interest grow. This thesis proposes a set of approaches that attempt to overcome these limitations by combining aspects of both paradigms. Specifically, we leverage learning to automate the process of designing planning models, and leverage planning to efficiently and autonomously collect data needed for learning. Experiments on a variety of simulated and real-robot domains illustrate that this combination of learning to plan and planning to learn could be a promising approach to enabling robots to solve complex, long-horizon tasks at scale.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Classical Commitments to Quantum States</title>
<link href="https://hdl.handle.net/1721.1/156278" rel="alternate"/>
<author>
<name>Villányi, Ágnes</name>
</author>
<id>https://hdl.handle.net/1721.1/156278</id>
<updated>2024-08-22T04:05:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Classical Commitments to Quantum States
Villányi, Ágnes
We define the notion of a classical commitment to quantum state scheme, which allows a quantum prover to compute a classical commitment to a quantum state and later open each qubit of the state in either the standard or Hadamard basis, while limiting communication with the verifier to a classical channel. Our scheme strengthens the notion of a measurement protocol from [Mah18], which is binding only in the standard basis. We construct our commitment scheme from the post-quantum Learning With Errors (LWE) assumption, and rely directly on any noisy trapdoor claw-free function family that satisfies the adaptive hardcore bit property first introduced in [Bra+18].
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Time Warping Constraints for Semiconductor Processing</title>
<link href="https://hdl.handle.net/1721.1/156276" rel="alternate"/>
<author>
<name>Owens, Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/156276</id>
<updated>2024-08-22T03:42:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Dynamic Time Warping Constraints for Semiconductor Processing
Owens, Rachel
Semiconductor manufacturing processes have become increasingly complex with the continued growth of chip manufacturing. Monitoring these processes for anomalies is crucial for maintaining quality and yield. However, a notable challenge for monitoring time series signals is the nonlinear variation in signal timing. These small, but acceptable, temporal variations are typically caused by small run-to-run differences that are inherent to the process. Dynamic time warping (DTW) can be used for temporal alignment of signals, but is computationally expensive and prone to errors.&#13;
&#13;
In this thesis, a new method is presented for preprocessing semiconductor fabrication sensor signals that improves anomaly detection model performance. The new method uses domain knowledge – specifically, process recipe step numbers – to create constraints that better align signals along the time dimension, addressing this problem of nonlinear signal alignment. These constraints are tested on both synthetic and industrial datasets. The new step-constrained DTW is also extended as a distance measure for clustering time series.
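A minimal sketch of the constraint in code (illustrative, not the thesis implementation): the standard O(nm) DTW recursion, except a warping path may only match samples whose recipe step numbers agree:

```python
import numpy as np

def step_constrained_dtw(a, b, steps_a, steps_b):
    """a, b: 1-D signals; steps_a, steps_b: recipe step number per sample."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if steps_a[i - 1] != steps_b[j - 1]:
                continue                # constraint: align only within a step
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```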
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Combining Channel Sounding and Guessing Random Additive Noise Decoding</title>
<link href="https://hdl.handle.net/1721.1/156274" rel="alternate"/>
<author>
<name>Millward, Jane Avril</name>
</author>
<id>https://hdl.handle.net/1721.1/156274</id>
<updated>2024-08-22T04:00:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Combining Channel Sounding and Guessing Random Additive Noise Decoding
Millward, Jane Avril
This thesis investigates how channel estimation can be used to improve the performance of Guessing Random Additive Noise Decoding (GRAND). The trade-off between devoting resources to channel sounding and to data transmission is investigated for pilot symbol assisted modulation schemes. Using a soft-information variant of the GRAND algorithm called Ordered Reliability Bit Guessing Random Additive Noise Decoding-Approximate Independence (ORBGRAND-AI), it is shown that by accounting for the correlation between received symbols, bit and block error rate improvements can be obtained. This thesis also considers the achievable communication rate of ORBGRAND-AI when different estimators are used to provide channel estimates. Finally, this thesis investigates the use of ORBGRAND-AI in channels subjected to inter-symbol interference (ISI).
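For readers unfamiliar with GRAND, the hard-decision core is simple to state (a miniature sketch; ORBGRAND-AI additionally orders guesses by soft reliability and models correlation between symbols): flip putative noise patterns in decreasing order of likelihood, which for a binary symmetric channel means increasing Hamming weight, until a codeword is found:

```python
from itertools import combinations

def grand_decode(y, is_codeword, max_weight=3):
    """y: received bit tuple; is_codeword: membership test for the code."""
    n = len(y)
    for w in range(max_weight + 1):        # weight-0 guess (no noise) first
        for flips in combinations(range(n), w):
            cand = list(y)
            for i in flips:
                cand[i] ^= 1
            if is_codeword(tuple(cand)):
                return tuple(cand)         # first hit is ML for a BSC
    return None

# e.g. for a single parity-check code:
# grand_decode((1, 0, 0, 0), lambda c: sum(c) % 2 == 0) -> (0, 0, 0, 0)
```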
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Formally Verifying a Programmable Network Switch</title>
<link href="https://hdl.handle.net/1721.1/156273" rel="alternate"/>
<author>
<name>Liu, Jiazheng</name>
</author>
<id>https://hdl.handle.net/1721.1/156273</id>
<updated>2024-08-22T03:59:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Formally Verifying a Programmable Network Switch
Liu, Jiazheng
Programmable network switches are complex pieces of hardware that leverage nonobvious optimizations such as pipelining to offer flexible configuration interfaces. In this thesis, we propose a novel formal-verification methodology aimed at establishing strong correctness theorems for synthesizable hardware designs for network functionality, demonstrated through a case-study analysis of a Tofino-like programmable switch that we call VeriSwit. Our approach hinges on modularity, whereby the system is split into interconnected units, each equipped with its specification and proof, oblivious to the internals of other units. We conduct VeriSwit’s modular verification in the Coq theorem prover. Experiments with synthesis for both FPGA and ASIC targets, combined with simulation, show that 100 GB/s line rate is easily achieved.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Classroom model of an information and computing system.</title>
<link href="https://hdl.handle.net/1721.1/156249" rel="alternate"/>
<author>
<name>Schroeder, Michael David.</name>
</author>
<id>https://hdl.handle.net/1721.1/156249</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1969-01-01T00:00:00Z</published>
<summary type="text">Classroom model of an information and computing system.
Schroeder, Michael David.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1969; Bibliography: leaves 215-216.
</summary>
<dc:date>1969-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Transient currents of polyphase induction motors with a single-phase supply</title>
<link href="https://hdl.handle.net/1721.1/156246" rel="alternate"/>
<author>
<name>Lee, Tze-Chang.</name>
</author>
<id>https://hdl.handle.net/1721.1/156246</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1927-01-01T00:00:00Z</published>
<summary type="text">Transient currents of polyphase induction motors with a single-phase supply
Lee, Tze-Chang.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1927; Includes bibliographical references (leaf 80).
</summary>
<dc:date>1927-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation of the effect of steel reinforcement on the moments in a reinforced concrete cellular type bridge</title>
<link href="https://hdl.handle.net/1721.1/156242" rel="alternate"/>
<author>
<name>Cantono, William Paul.</name>
</author>
<id>https://hdl.handle.net/1721.1/156242</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1932-01-01T00:00:00Z</published>
<summary type="text">An investigation of the effect of steel reinforcement on the moments in a reinforced concrete cellular type bridge
Cantono, William Paul.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1932
</summary>
<dc:date>1932-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Prevention of hydrogen cracking in HY-80 welds</title>
<link href="https://hdl.handle.net/1721.1/156238" rel="alternate"/>
<author>
<name>Biederka, John William.</name>
</author>
<id>https://hdl.handle.net/1721.1/156238</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1983-01-01T00:00:00Z</published>
<summary type="text">Prevention of hydrogen cracking in HY-80 welds
Biederka, John William.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1983; Includes bibliographical references.
</summary>
<dc:date>1983-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laboratory investigation of the variable nature of the blue clay layer below Boston</title>
<link href="https://hdl.handle.net/1721.1/156236" rel="alternate"/>
<author>
<name>Albin, Pedro.</name>
</author>
<id>https://hdl.handle.net/1721.1/156236</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1947-01-01T00:00:00Z</published>
<summary type="text">Laboratory investigation of the variable nature of the blue clay layer below Boston
Albin, Pedro.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1947; Bibliography: leaf 64.
</summary>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Second approximations to the solution of laminar boundary layer flow along a flat plate</title>
<link href="https://hdl.handle.net/1721.1/156234" rel="alternate"/>
<author>
<name>Alden, Henry L.</name>
</author>
<id>https://hdl.handle.net/1721.1/156234</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1947-01-01T00:00:00Z</published>
<summary type="text">Second approximations to the solution of laminar boundary layer flow along a flat plate
Alden, Henry L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1947; Bibliography: leaf 29.
</summary>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of translation-rotation coupling on helicopter ground resonance</title>
<link href="https://hdl.handle.net/1721.1/156233" rel="alternate"/>
<author>
<name>Amer, Kenneth B.</name>
</author>
<id>https://hdl.handle.net/1721.1/156233</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1947-01-01T00:00:00Z</published>
<summary type="text">The effect of translation-rotation coupling on helicopter ground resonance
Amer, Kenneth B.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1947; Bibliography: leaf 27.
</summary>
<dc:date>1947-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bosonic Quantum Error Correction with a Heavy Fluxonium Control Qubit</title>
<link href="https://hdl.handle.net/1721.1/156166" rel="alternate"/>
<author>
<name>Chowdhury, Shoumik</name>
</author>
<id>https://hdl.handle.net/1721.1/156166</id>
<updated>2024-08-15T03:01:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Bosonic Quantum Error Correction with a Heavy Fluxonium Control Qubit
Chowdhury, Shoumik
Bosonic codes store information in the phase space of a quantum harmonic oscillator and offer a hardware‐efficient path towards quantum error correction (QEC), requiring only an oscillator and an auxiliary qubit for measurement and universal control. Of the many bosonic codes, the so‐called Gottesman‐Kitaev‐Preskill (GKP) code stands out as one of the most robust to dominant physical decoherence mechanisms, but is severely limited by bit‐flip errors in the control qubit. In this thesis, we develop a new approach for implementing GKP QEC in superconducting circuits based on using a heavy fluxonium as the auxiliary control qubit due to its inherent bit‐flip protection. We demonstrate progress towards this in experiment by using a fluxonium in a 3D superconducting cavity architecture, and also propose novel strategies for moving future experiments to a fully 2D platform.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Motion Planning along Manifolds with Geodesic Convexity and Analytic Inverse Kinematics</title>
<link href="https://hdl.handle.net/1721.1/156165" rel="alternate"/>
<author>
<name>Cohn, Thomas B.</name>
</author>
<id>https://hdl.handle.net/1721.1/156165</id>
<updated>2024-08-15T03:41:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Motion Planning along Manifolds with Geodesic Convexity and Analytic Inverse Kinematics
Cohn, Thomas B.
Collision-free motion planning is a fundamental problem in robotics. Most motion planning algorithms operate in the configuration space of a robot, where each dimension corresponds to an individual degree of freedom. Oftentimes, these configuration spaces can be viewed as Euclidean spaces, and many motion planning algorithms treat them as such. However, many configuration spaces of interest are inherently non-Euclidean, including those of mobile robots, robot arms that have revolute joints without limits or ball joints, and flying robots, as well as the constrained configuration spaces that arise when planning with task-space constraints. In this thesis, we treat the problem of motion planning along Riemannian manifolds, a broader class of spaces that encompasses many of the problems of interest.&#13;
&#13;
In the first chapter, we present a generalization of the graph of convex sets (GCS) planning framework that can handle smooth manifolds. GCS uses convex optimization, and is thus restricted to Euclidean configuration spaces. Our analysis utilizes geodesic convexity to achieve the same guarantees on Riemannian manifolds, and we leverage this to produce motion plans for mobile robots whose arms have unbounded revolute joints.&#13;
&#13;
In the second chapter, we specifically consider the problem of constrained bimanual manipulation, where a robot has to move an object that is being grasped with two hands. The set of kinematically-valid configurations is a union of submanifolds, implicitly defined by nonlinear equality constraints. This presents significant challenges for standard unconstrained planning algorithms. We construct a smooth parametrization of the feasible set, recasting the problem without equality constraints. Our approach is algorithm-agnostic, and we demonstrate that unconstrained planners (working through the parametrization) produce favorable results.
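A toy planar version of the parametrization idea (a hedged sketch, not the thesis construction): when both hands hold one rigid object, each hand pose is the object pose composed with a fixed grasp offset, so planning can search over the object pose alone and the equality constraints vanish from the search space:

```python
import numpy as np

def se2(x, y, th):
    return np.array([[np.cos(th), -np.sin(th), x],
                     [np.sin(th),  np.cos(th), y],
                     [0.0, 0.0, 1.0]])

# Fixed (assumed) grasp offsets of the left and right hands in the object frame.
GRASP_L, GRASP_R = se2(-0.2, 0.0, 0.0), se2(0.2, 0.0, np.pi)

def hand_poses(object_pose):
    # Both hand poses follow from the object pose; no constraint solving needed.
    return object_pose @ GRASP_L, object_pose @ GRASP_R
```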
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Boston Night Owl: A Framework for Introducing Overnight Bus Service That Can Close Significant Spatiotemporal Gaps in Greater Boston's Transit System</title>
<link href="https://hdl.handle.net/1721.1/156164" rel="alternate"/>
<author>
<name>Barrett, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/156164</id>
<updated>2024-08-15T03:31:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Boston Night Owl: A Framework for Introducing Overnight Bus Service That Can Close Significant Spatiotemporal Gaps in Greater Boston's Transit System
Barrett, Gabriel
There are people traveling at every hour of the day. Cities by their nature function throughout the 24-hour day; however, the same is not always true of their transit systems. Just as in the daytime, overnight public transportation exists to provide mobility access to the people who need or choose to travel at night. This thesis explores the first steps in developing an overnight transit service in a region where it does not currently exist, using the Boston area as a case study. This is done through a two-step process: first, identifying where and when the service should be run, and second, learning from existing overnight systems around the world to understand how the service should operate. As part of the method, the thesis proposes a novel approach to identifying areas with acute disparity between transit supply and demand, colloquially known as “transit deserts,” that involves taking into account how these factors change both spatially and temporally. The end result of this thesis is a framework that planners in cities and transit agencies can use when creating a system that can close these gaps. This is an approach that planners will find useful not just in planning nighttime service, but for planning service at all times of the day.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seeding Trust, Sustaining Equity: Funding and Financing Relationships in the Greater Boston Community Land Trust Network</title>
<link href="https://hdl.handle.net/1721.1/156163" rel="alternate"/>
<author>
<name>Aibinder, Sammi</name>
</author>
<id>https://hdl.handle.net/1721.1/156163</id>
<updated>2024-08-15T03:32:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Seeding Trust, Sustaining Equity: Funding and Financing Relationships in the Greater Boston Community Land Trust Network
Aibinder, Sammi
Interest in community land trusts (CLTs) as one tool for stable, affordable housing and local autonomy over urban planning processes is growing rapidly—particularly in the past decade, in the wake of the subprime mortgage crisis and the destruction of wealth and housing security that foreclosure waves wreaked across the United States. This increasing energy for community ownership and stewardship of land and housing spans grassroots organizing networks; local, state, and federal government authorities; and philanthropic and conventional capital. Though such a broad base of interest in CLTs at both local and national levels is encouraging, CLT organizers continue to struggle within dominant affordable housing policies and practices to sustain their work. As CLTs and their advocates push to reshape public budgets and capture private capital in innovative ways, how do funders and lenders relate to their own role in ceding control over land and housing—and the financial wealth they generate—in ways that share power with the residents and organizers at the heart of these housing justice movements? Drawing on interviews with housing and community development finance professionals, ongoing conversations with CLT practitioners and advocates, and policy research, this thesis explores the funding and financing ecosystem surrounding the Greater Boston Community Land Trust Network (GBCLTN) as a descriptive case study.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inference Plans for Hybrid Probabilistic Inference</title>
<link href="https://hdl.handle.net/1721.1/156162" rel="alternate"/>
<author>
<name>Cheng, Ellie Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/156162</id>
<updated>2024-08-15T03:49:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inference Plans for Hybrid Probabilistic Inference
Cheng, Ellie Y.
Advanced probabilistic programming languages (PPLs) use hybrid inference systems to combine symbolic exact inference and Monte Carlo sampling to improve inference performance. These systems use heuristics to partition random variables within the program into variables that are represented symbolically and variables that are represented by sampled values, and in general, they make no guarantee that the partitioning is optimal. In this thesis, I present inference plans, a programming interface that enables developers to choose a specific partitioning of random variables during hybrid inference. I further present Siren, a new PPL that enables developers to use annotations to specify inference plans. To assist developers with statically reasoning about whether an inference plan can be implemented, I present an abstract-interpretation-based static analysis for Siren for determining inference plan satisfiability, and prove the analysis is sound with respect to Siren's semantics. In our evaluation, the results show that custom inference plans can produce up to ~1000x better accuracy compared to the default heuristics. They further show that the static analysis is precise in practice, identifying all satisfiable inference plans in 6 out of 7 benchmarks.
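A toy rendering of what an inference plan chooses between (hypothetical syntax, not Siren's): the same latent mean handled either symbolically, via an exact conjugate Gaussian update, or by Monte Carlo sampling:

```python
import numpy as np

def infer_mean(obs, plan="symbolic", prior=(0.0, 10.0), noise=1.0, n=10_000):
    mu0, var0 = prior
    if plan == "symbolic":
        # exact conjugate Gaussian posterior mean for the latent
        var = 1.0 / (1.0 / var0 + len(obs) / noise)
        return var * (mu0 / var0 + obs.sum() / noise)
    # "sampled": self-normalized importance sampling from the prior
    rng = np.random.default_rng(0)
    draws = rng.normal(mu0, np.sqrt(var0), n)
    logw = np.array([-0.5 * ((obs - d) ** 2).sum() / noise for d in draws])
    w = np.exp(logw - logw.max())
    return float((w * draws).sum() / w.sum())

obs = np.array([2.1, 1.9, 2.3])
print(infer_mean(obs, "symbolic"), infer_mean(obs, "sampled"))  # both ~2.0
```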
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Generative Models for 3D Molecular Structures</title>
<link href="https://hdl.handle.net/1721.1/156161" rel="alternate"/>
<author>
<name>Daigavane, Ameya</name>
</author>
<id>https://hdl.handle.net/1721.1/156161</id>
<updated>2024-08-15T04:03:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Improving Generative Models for 3D Molecular Structures
Daigavane, Ameya
Generative models have recently emerged as a promising avenue for navigating the high-dimensional space of molecular structures. Such models must be designed carefully to respect the rotation and translation symmetries of molecules. In this thesis, we first provide an overview of existing methods and techniques in this rapidly developing field. Next, we present Symphony, an E(3)-equivariant autoregressive generative model for 3D molecular geometries that iteratively builds a molecule from molecular fragments, improving upon existing autoregressive models for molecule generation and approaching the performance of diffusion models. The material in this thesis is primarily sourced from the publication “Symphony: Symmetry-Equivariant Point-Centered Spherical Harmonics for 3D Molecule Generation" [13] authored by Ameya Daigavane, Song Kim, Mario Geiger and Tess Smidt, and published at the International Conference on Learning Representations (ICLR), 2024.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AC Optimal Power Flow for Physically and Economically Informed Grid Decarbonization</title>
<link href="https://hdl.handle.net/1721.1/156160" rel="alternate"/>
<author>
<name>Anton, Laurentiu Lucian</name>
</author>
<id>https://hdl.handle.net/1721.1/156160</id>
<updated>2024-08-15T03:31:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">AC Optimal Power Flow for Physically and Economically Informed Grid Decarbonization
Anton, Laurentiu Lucian
Current practices for power systems operations and planning rely on an approximate optimal power flow formulation known as DC Optimal Power Flow (DC OPF). Results from DC OPF are not implementable, requiring feasibility checks and adjustments from AC power flow analysis. Current methodologies do not guarantee convergence, feasibility, or robustness, and they rely heavily on operator knowledge and intervention. This work uses AC Optimal Power Flow (AC OPF) to directly obtain feasible dispatch signals that guide enhanced grid operations and planning within the context of Puerto Rico’s grid decarbonization efforts. A comprehensive application of AC OPF is used to assess the robustness of the Puerto Rican power grid and explore an array of scenarios involving the retirement of existing generating assets and the integration of solar PV. A public model was assessed by analysing several operational equilibria obtained via economic dispatch and loss minimization. Additionally, a Jacobian-based N-1 screening was performed, identifying critical contingencies requiring corrective actions. These insights, as well as considerations from the Puerto Rico 100 Study and PREPA’s 10-Year Plan, were used to assess the deployment of potential solar assets at various stages of retirement for the San Juan, Palo Seco and Aguirre assets, in that order. Results provided locational, quantitative, and timely insights into optimal deployment strategies that align with Puerto Rico’s decarbonization goals. The findings confirm the ability of Puerto Rico to transition to a high-renewable deployment scenario, and provide guidance on where to strategically incentivize renewable deployment and reactive power support, in what quantities, and in response to which generator retirements.
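For context, the feasibility that AC OPF enforces (and DC OPF approximates away) comes from the nonlinear AC power-flow equations; in per-unit terms, the complex bus injections follow directly from the bus admittance matrix (textbook form, shown for a generic network rather than the Puerto Rico model):

```python
import numpy as np

def injections(Y, V, theta):
    """Y: complex bus admittance matrix; V, theta: voltage magnitudes/angles.
    Returns real (P) and reactive (Q) per-unit injections at every bus."""
    Vc = V * np.exp(1j * theta)    # complex bus voltages
    S = Vc * np.conj(Y @ Vc)       # S_i = V_i * conj(sum_j Y_ij V_j)
    return S.real, S.imag
```

AC OPF minimizes generation cost subject to these equations plus voltage and thermal limits, which is why its dispatch signals are directly implementable.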
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building Blocks of a Just Transition: Green Banks and Residential Building Decarbonization in New York</title>
<link href="https://hdl.handle.net/1721.1/156159" rel="alternate"/>
<author>
<name>Downing, Lia</name>
</author>
<id>https://hdl.handle.net/1721.1/156159</id>
<updated>2024-08-15T03:11:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Building Blocks of a Just Transition: Green Banks and Residential Building Decarbonization in New York
Downing, Lia
The existential threat of climate change has given rise to financial solutions aimed at transitioning global systems away from fossil fuels and towards clean energy. Green banks are one such solution: a specialty finance vehicle aimed at using public funds to induce private investment in climate energy projects such as residential building decarbonization. Given the recent increased investment and policy attention on green banks, we should assess whether the green bank model delivers its professed goals of socially equitable outcomes, market creation, and greenhouse gas emission reductions in line with Net Zero national policy.&#13;
This thesis seeks to understand the political and organizational dynamics of green bank models in the context of the Inflation Reduction Act and identify the existing project deployment gaps remaining for residential building decarbonization projects. Through a case study approach of New York Green Bank and New York Energy Efficiency Corporation, this study investigates green bank 1) additionality; 2) organizational structure; 3) scale; and 4) demand as considerations for green bank formulation to drive building decarbonization investments. These case studies combined with expert interviews provide strategy and programmatic recommendations for policymakers considering whether to create or expand a green bank in the wake of massive federal investment through the Inflation Reduction Act.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shaping the Future Amid Decline: Integrative Strategies for Aging Koreans and Migrant Workers in South Korea’s Shrinking Regions</title>
<link href="https://hdl.handle.net/1721.1/156158" rel="alternate"/>
<author>
<name>Kim, MinJi</name>
</author>
<id>https://hdl.handle.net/1721.1/156158</id>
<updated>2024-08-15T03:41:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Shaping the Future Amid Decline: Integrative Strategies for Aging Koreans and Migrant Workers in South Korea’s Shrinking Regions
Kim, MinJi
This thesis investigates the intricate dynamics between aging Korean populations and foreign migrant workers in South Korea’s shrinking regions. By conducting an in-depth analysis of four cities, each representing a unique aspect of the nation's projected demographic shifts, this study evaluates how urban planning and policy can foster resilient communities amidst significant societal changes. Utilizing a mixed-methods approach, which includes quantitative data alongside interviews and surveys with 81 stakeholders—from local officials to migrants and elderly residents—the research uncovers complex relationships and systemic barriers that impact community cohesion and demographic stability. The findings provide a nuanced perspective on how strategic urban design and innovative policy initiatives can drive transformative growth in these areas, turning demographic challenges into opportunities for development. The analysis highlights the untapped potential within vulnerable populations and recommends a series of interventions, including integrating educational elements into urban infrastructure and promoting cultural inclusivity through diverse partnerships. This approach seeks to reinvigorate shrinking regions, transforming them into vibrant, sustainable communities. Ultimately, the study underscores the critical role of inclusive urban development in revitalizing areas facing demographic and economic decline.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual AI for Sustainable Urban Development Computer Vision and Machine Learning Applications for Climate and Social Impact</title>
<link href="https://hdl.handle.net/1721.1/156157" rel="alternate"/>
<author>
<name>Schrage, Leonard</name>
</author>
<id>https://hdl.handle.net/1721.1/156157</id>
<updated>2024-08-15T03:45:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Visual AI for Sustainable Urban Development Computer Vision and Machine Learning Applications for Climate and Social Impact
Schrage, Leonard
The surge in interest in Artificial Intelligence (AI)—driven by recent advancements—has sparked widespread discourse across various sectors, reflecting mixed reactions of fascination and concern. This thesis focuses on Visual AI, critically analysing the technology’s potential to promote sustainable urban development. Presenting and evaluating three case studies that employ computer vision and machine learning in urban planning contexts, the research highlights the potential of Visual AI in enhancing understanding of urban complexity and decision-making to mitigate the built environment’s immense carbon footprint and social shortcomings, whilst cautioning against the technology's ability to exacerbate current urban development issues. The projects—Urban Ingredients, City Aesthetics, and Million Neighborhoods: Reblocking—demonstrate three different approaches to using Visual AI for climate and social impact. The case study subjects include generating global material stock data, analysing the correlation between facade geometries and urban health, and scaling parcel data generation for informal settlements. The thesis reflects on the limitations, impacts, and risks of the presented projects and offers a vision for future research aimed at achieving circular, regenerative, and equitable urban environments at scale.&#13;
Keywords&#13;
Visual Computing, Artificial Intelligence, Computer Vision, Machine Learning, AI Ethics, Urban Science, Climate Change, Equitable Cities, Urban Mining, Circular Economy, Architectural Neuroaesthetics, Facade Patterns, Parcelization, Reblocking, Informal Settlements
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Confronting Glacial Hazards: A Study of Disaster Impact and Community Adaptation to Glacial Lake Outburst Floods in Hunza, Pakistan</title>
<link href="https://hdl.handle.net/1721.1/156156" rel="alternate"/>
<author>
<name>Shahid, Misha</name>
</author>
<id>https://hdl.handle.net/1721.1/156156</id>
<updated>2024-08-15T03:32:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Confronting Glacial Hazards: A Study of Disaster Impact and Community Adaptation to Glacial Lake Outburst Floods in Hunza, Pakistan
Shahid, Misha
As climate change hastens glacial retreat across the world, high mountain communities face increasing risks from glacial lake outburst floods (GLOFs) with limited capacity to mitigate their impact and recover from repeated cycles of losses. This thesis looks at the Hassanabad settlement in Hunza, Pakistan, which has faced five GLOF occurrences between 2019 and 2022, to study impact from the floods, identify differential vulnerability within the settlement, and evaluate local adaptation strategies. The study uses a combination of spatio-temporal analysis as well as qualitative field research. Findings indicate that areas closest to the Hassanabad ‘nullah’ (ravine) have suffered immensely from land losses through erosion and continue to be vulnerable to potential occurrences in the future. Field research in Hassanabad shows that community-led disaster risk management (CBDRM) efforts have been central to protecting local residents from the impact of these occurrences. In order to find solutions to the risks facing Hassanabad, the thesis presents five approaches to adaptation that link the remote sensing and community-based findings within the region to assess realistic options for the settlement’s future. These include engineering-centric solutions such as lake-level lowering and infrastructural adaptation, non-structural efforts such as the deployment of early warning systems (EWS), and community-centric approaches that emphasize the role of community-based disaster risk management (CBDRM) and the potential relocation of residents to a less risk-prone area.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Impacts of Boston’s Fare Free Bus Route on Urban Mobility Behavior: A Framework for Causal Analysis</title>
<link href="https://hdl.handle.net/1721.1/156155" rel="alternate"/>
<author>
<name>Then, Eva</name>
</author>
<id>https://hdl.handle.net/1721.1/156155</id>
<updated>2024-08-15T03:09:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Investigating the Impacts of Boston’s Fare Free Bus Route on Urban Mobility Behavior: A Framework for Causal Analysis
Then, Eva
Since the onset of the COVID-19 pandemic, public transit ridership in the US has faced significant challenges in returning to pre-pandemic levels. Nearly $70 billion in federal relief funding has been allocated to transit departments nationwide, with major US cities using these funds to implement free transit programs. Boston, the focal point of the thesis, launched a fare free bus pilot in August 2021 which has been extended multiple times and is set to continue until 2026. The Fare Free Program has seen encouraging results, marked by increased ridership, cost savings for passengers, and reduced dwell times. The following research leverages large-scale mobility data in an effort to gain deeper insights into the impact of the fare free policy, centering its analysis on Route 28. Employing the tools of causal inference, it offers a valuable resource for planners, policymakers, and scientists seeking to analyze the effect of policy interventions through the lens of big data. Drawing on anonymized, large-scale GPS data from mobile phone users in the Boston area, the research introduces a comprehensive framework for evaluating the impact of Boston’s Fare Free Program on urban mobility behavior, expanding research beyond the scope of transit data and surveys used by the City.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating Vulnerability: Harmonizing Disaster Risk Reduction and Management with Socio-spatial Construction of Risk in Post-Tsunami Aceh</title>
<link href="https://hdl.handle.net/1721.1/156154" rel="alternate"/>
<author>
<name>Ramadani, Muhammad Rizki Rayani</name>
</author>
<id>https://hdl.handle.net/1721.1/156154</id>
<updated>2024-08-15T03:35:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Navigating Vulnerability: Harmonizing Disaster Risk Reduction and Management with Socio-spatial Construction of Risk in Post-Tsunami Aceh
Ramadani, Muhammad Rizki Rayani
How can a city that once suffered the world's deadliest tsunami prepare for future disasters? This thesis is a collection of stories from those who have historically been considered “unwanted, powerless, and marginalized” due to multi-tiered and differentiated citizenship. It examines the case of Banda Aceh in Indonesia nearly two decades after a devastating earthquake and tsunami that wiped out a third of its population, and a peace agreement that ended three decades of violent conflict. The question posed is: Does the narrative of Build Back “Better” remain relevant in representing the context of long-term development? &#13;
&#13;
This study primarily aims to deconstruct the logic of disaster risk reduction and management (DRRM) and territorial planning, which is rational and techno-scientific, built upon post-colonial relation networks. Through historical comparative analysis, the case of three coastal neighborhoods, also known as “gampong”, reveals the limitations of this approach. It does not necessarily reduce vulnerability. Instead, it intensifies it through a systemic process of “vulnerabilization” (Lamb and Vale, 2024 [forthcoming]), utilizing the logic of sacrifice and necropolitics (Mbembe, 2002), and further reinforcing "quasi-citizenship," where institutions with limited capabilities deny basic rights to marginalized communities. This thesis emphasizes that a disaster is not merely a natural hazard—it is an interaction with vulnerability, a state that is institutionally, historically, politically, ideologically, and spatially produced (Wisner, 2004). &#13;
&#13;
As a result, this study encourages reevaluating disaster risk reduction and management, specifically incorporating post-colonial critiques into theory-building. It proposes shifting away from universal models favoring high modernism or progress and advocates for a balanced approach that genuinely focuses on “the people”. Thus, this thesis advocates for a new methodology for closer relations in addressing affect, lived experience, and historical analysis in planning as legitimate ways of knowing. Acknowledging trauma, collective memory, and spatial expressions of belonging as valid forms of capabilities for disaster risk reduction and management is a crucial step to actualize equitably resilient cities.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Deep Learning Models of Metabolism</title>
<link href="https://hdl.handle.net/1721.1/156153" rel="alternate"/>
<author>
<name>Chinn, Itamar</name>
</author>
<id>https://hdl.handle.net/1721.1/156153</id>
<updated>2024-08-15T03:27:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Towards Deep Learning Models of Metabolism
Chinn, Itamar
Enzymes play a critical role in catalyzing the chemical reactions that underpin metabolic processes in living organisms. Despite their importance, a vast majority of enzymes remain uncharacterized, limiting our understanding of their potential roles in metabolism and disease. This thesis aims to address this gap by leveraging recent advancements in protein and molecular modeling to predict the outcomes of enzymatic reactions and identify functions of unannotated enzymes. Two key contributions are highlighted. Firstly, a graph-based forward synthesis prediction model is introduced, which relies only on the molecular structure of the substrates and the enzyme’s primary sequence. By capturing the biochemical interaction between enzyme residues and substrate atoms, the model achieves better generalization to new chemistry, demonstrating significant improvements in predicting unseen products and showcasing its potential for drug metabolism prediction. The second contribution is CLIPZyme, a contrastive learning method for virtual enzyme screening that frames the task of identifying enzymes catalyzing a reaction of interest as a retrieval problem. CLIPZyme outperforms the baseline approach of screening enzymes via their enzyme commission (EC) number. The combination of CLIPZyme with EC prediction consistently yields improved results over either method alone. Both of these contributions aim to provide the initial building blocks to model entire complex metabolic networks with downstream applications including metabolic engineering and drug discovery.
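Schematically, the retrieval step of CLIPZyme reduces to ranking in a shared embedding space (a hedged sketch; the encoders producing the embeddings are placeholders for the models trained in the thesis):

```python
import numpy as np

def rank_enzymes(reaction_emb, enzyme_embs):
    """reaction_emb: (d,); enzyme_embs: (n, d); both assumed L2-normalized."""
    scores = enzyme_embs @ reaction_emb   # cosine similarity after normalization
    return np.argsort(-scores)            # best-matching enzyme indices first

# During training, matched (reaction, enzyme) pairs are pulled together and
# mismatched pairs pushed apart with an InfoNCE-style contrastive loss.
```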
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large-Signal Characterization of Piezoelectric Resonators for Power Conversion</title>
<link href="https://hdl.handle.net/1721.1/156152" rel="alternate"/>
<author>
<name>Jackson, Amanda</name>
</author>
<id>https://hdl.handle.net/1721.1/156152</id>
<updated>2024-08-15T03:33:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Large-Signal Characterization of Piezoelectric Resonators for Power Conversion
Jackson, Amanda
Magnetics are key components of conventional power converters, but they are often the bottleneck to achieving high power density due to their size, weight, and poor performance at small sizes. Piezoelectric devices, when operated in their inductive regime, can serve a purpose similar to that of magnetic components and offer favorable scaling properties as components are miniaturized. Several sources have demonstrated the viability of piezoelectric-based power converters, but selection of the optimal material and component size is limited by a lack of data on the performance of these materials at high drive levels. This work aims to fill that gap by collecting data to more completely characterize the losses in piezoelectric resonators owing to both mechanical and dielectric effects. To account for mechanical losses, the variation in resonator quality factor is examined across a range of drive levels for multiple resonator sizes, frequencies, and materials. By normalizing the collected data, material trends are derived that can predict mechanical losses under high drive levels, offering more insight into realistic converter operation than the currently available small-signal data sheet values. Additionally, a method for measuring high-power dielectric loss is presented, with results showing that the small-signal loss tangent provides a good approximation of losses even at higher drive levels. Based on these trends, implications for converter efficiency and selection of material and dimensions are discussed.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Perspectives on Power: Characterizing Public Perceptions Towards Large-Scale Renewable Energy Development in the United States</title>
<link href="https://hdl.handle.net/1721.1/156151" rel="alternate"/>
<author>
<name>Chaudhuri, Anushree</name>
</author>
<id>https://hdl.handle.net/1721.1/156151</id>
<updated>2024-08-15T03:51:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Perspectives on Power: Characterizing Public Perceptions Towards Large-Scale Renewable Energy Development in the United States
Chaudhuri, Anushree
With rapid growth in renewable energy development expected in the coming decades, it is crucial to ensure the clean energy transition does not perpetuate past injustices in infrastructure siting. In this thesis, I first contextualize historical renewable energy siting policies and patterns in the U.S., as well as current siting policies and debates. Then, I use a mixed-methods approach to characterize community sentiment towards large-scale renewable energy projects. First, I create a database of online narratives surrounding local renewable energy siting disputes using a large language model (LLM). This method analyzes online media, e.g., newspaper articles, public and legal proceedings, and social media, to quantify types of opposition sentiment. My analysis reveals that both wind and higher capacity projects are correlated with greater quantified measures of opposition, as scored by an LLM. To contextualize these national-level quantitative findings, I also conduct case studies of two ongoing siting disputes in California. I use interviews, focus groups, and participatory methods to better understand local context and analyze how recent state preemption of local siting authority affects public perceptions. Stakeholders focus on place-based factors overlooked in national analysis and express a desire for neutral joint fact-finding processes. Finally, I evaluate a university-based clinical model piloted at MIT for proactive stakeholder assessment and joint problem-solving to improve energy justice outcomes in renewable energy siting. Preliminary findings show the clinical approach increases participation of previously underrepresented groups, builds trust between stakeholders compared to a typical siting process, and expands experiential learning opportunities. Ultimately, this thesis suggests that a combination of large-scale empirical research paired with a site-specific clinical approach could enable a more equitable and efficient energy transition.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Salvemos Barranco: Contested visions for the city and transportation in Barranco, Lima, Peru</title>
<link href="https://hdl.handle.net/1721.1/156150" rel="alternate"/>
<author>
<name>Herndon, Marco Leonardo</name>
</author>
<id>https://hdl.handle.net/1721.1/156150</id>
<updated>2024-08-15T03:30:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Salvemos Barranco: Contested visions for the city and transportation in Barranco, Lima, Peru
Herndon, Marco Leonardo
Densely populated cities like Lima, Peru, face a complex challenge: integrating mass transit into established urban fabrics. This thesis explores this tension through the case of a World Bank-funded Bus Rapid Transit (BRT) system implemented in Lima in 2010. The BRT, built mostly on an exclusive highway corridor, traversed only three neighborhoods, including Barranco, a historic district. Despite promising citywide mobility improvements, the project sparked protests in Barranco due to concerns about reduced pedestrian access, historic preservation, and potential neighborhood segregation. Through historical and spatial analysis, this thesis examines the claims of both residents and stakeholders to understand the root cause of the conflict and propose improved planning processes. The research reveals significant gaps between the planning process and resident concerns, resulting in reduced pedestrian space and unintended traffic impacts. In response, the thesis proposes a three-pronged approach for future World Bank BRT projects: 1) prioritizing local capacity building for meaningful public participation, 2) achieving a balance between city-wide accessibility and neighborhood concerns, and 3) implementing a community-based BRT evaluation framework. The study concludes by offering an opportunity for the World Bank to facilitate a reparative planning process in Barranco, centering residents as decision-makers in shaping their transportation future.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Redignifying LaVilla: Visualizing and Recentering Black Epistemologies in the Revitalization of LaVilla, Jacksonville, Florida</title>
<link href="https://hdl.handle.net/1721.1/156149" rel="alternate"/>
<author>
<name>Harris, Journee</name>
</author>
<id>https://hdl.handle.net/1721.1/156149</id>
<updated>2024-08-15T03:35:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Redignifying LaVilla: Visualizing and Recentering Black Epistemologies in the Revitalization of LaVilla, Jacksonville, Florida
Harris, Journee
There is a need and desire for planners and designers to atone for racism and white supremacy in the field, and Reparative Planning as a theory and practice is a start. This thesis looks at recent revitalization efforts in LaVilla, a historic African-American neighborhood situated in Downtown Jacksonville, Florida, as an example of reparative planning, with specific interest in the upcoming Lift Ev’ry Voice and Sing Park. The creation of Lift Ev’ry Voice and Sing Park signals a pivotal moment for Black Landscapes in the US South in which the City of Jacksonville is looking to use public space to acknowledge and preserve local Black history. As the downtown area transforms, there is a need for grounding revitalization in a reparative process that is informed by lived experience and local expertise. Drawing upon methods such as unstructured interviews, archival research, and visual inquiry, this thesis proposes scrapbooking as an innovative approach to activating archives and visualizing Black Epistemologies within the urban planning context. At the core of this project lies the argument that Black Epistemologies represent a legitimate expertise that is missing from revitalization efforts. Planners and other practitioners engaged in anti-racist, reparative work should embrace these epistemologies as a valuable resource to inform their understanding of the built environment from distinct cultural and historical perspectives.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Story of a Towel: A Comprehensive Approach to Disaster Preparedness: Enhancing Inclusivity and Sustainability in Chile's Emergency Disaster Kits</title>
<link href="https://hdl.handle.net/1721.1/156148" rel="alternate"/>
<author>
<name>Letelier, Ana A.</name>
</author>
<id>https://hdl.handle.net/1721.1/156148</id>
<updated>2024-08-15T03:57:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Story of a Towel: A Comprehensive Approach to Disaster Preparedness: Enhancing Inclusivity and Sustainability in Chile's Emergency Disaster Kits
Letelier, Ana A.
This thesis explores the redesign of disaster relief kits in Chile using a methodology known as the Comprehensive Initiative on Technology Evaluation (CITE). Last updated in 2017, Chile's disaster kits are set to be revised in 2024 under a new agreement within the government. This presents an opportunity to redesign the kits using the CITE methodology to better meet the needs of the end-users. This thesis collaborates with the Chilean government to demonstrate how the kits should be redesigned to be more gender-inclusive and sustainable, reflecting the views of communities who participated in focus groups and surveys conducted for this study. The thesis underscores the importance of consulting with communities to understand their real needs and challenges, which is crucial for designing kits that truly serve those most in need after a disaster. It also highlights the significance of incorporating a gender perspective into disaster management methodologies and research. Ultimately, the redesigned kits include products that are more sustainable and gender-inclusive, and recommendations are provided on how the government can enhance its inclusivity and waste management practices.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Seeking Relief in the City: An Examination of Planning in Karachi to Support Internally Displaced People after the 2022 Floods in Pakistan</title>
<link href="https://hdl.handle.net/1721.1/156147" rel="alternate"/>
<author>
<name>Shad, Daud</name>
</author>
<id>https://hdl.handle.net/1721.1/156147</id>
<updated>2024-08-15T03:04:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Seeking Relief in the City: An Examination of Planning in Karachi to Support Internally Displaced People after the 2022 Floods in Pakistan
Shad, Daud
The 2022 monsoon-season floods in Pakistan caused widespread devastation, resulting in millions of internally displaced people (IDPs) facing difficult choices. Karachi, the country’s largest city and capital of Sindh province, was the destination for tens of thousands of evacuees. Many of these rural-urban migrants ended up in relief camps lacking basic facilities and services. Although the government aimed to address the acute crisis of IDPs entering the city and promote rural rehabilitation, there was minimal accounting for those seeking longer-term support such as resettlement. Still, thousands of IDP households have chosen to stay in Karachi as return has seemed neither safe nor economically feasible. My research – based on key stakeholder interviews and site visits – examines the planning process to accommodate the short- and long-term shelter needs of IDPs who arrived in the city after the floods. It considers the impact of uncertainty on the affected population as well as the critical role of civil society in addressing the crisis. As climate change is exacerbating forced migration, how can the humanitarian response to support IDPs in a megacity like Karachi be more equitable and sustainable? This research recommends that key actors in Karachi plan for a comprehensive and flexible array of shelter and settlements programming to meet the various needs of people after disaster displacement. Additionally, IDPs in 2022 could have been better served through more accessible information on housing and coordination across relief sites. Adopting such measures may decrease the uncertainty inherent in humanitarian response and advance urban planning in assisting populations devastated by circumstances beyond their control.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Photonic probabilistic machine learning using quantum vacuum noise</title>
<link href="https://hdl.handle.net/1721.1/156146" rel="alternate"/>
<author>
<name>Choi, Seou</name>
</author>
<id>https://hdl.handle.net/1721.1/156146</id>
<updated>2024-08-15T03:47:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Photonic probabilistic machine learning using quantum vacuum noise
Choi, Seou
Probabilistic machine learning is an emerging paradigm which harnesses controllable random sources to encode uncertainty and enable statistical modeling. The pure randomness of quantum vacuum noise, the fluctuation of electromagnetic fields even in the absence of a photon, has been utilized for high-speed, energy-efficient stochastic photonic elements. Nevertheless, the experimental demonstration of photonic probabilistic computing hardware has remained elusive so far, due to the lack of programmable stochastic optical elements which can implement probabilistic machine learning algorithms. Here, we implement a photonic probabilistic computer consisting of a programmable stochastic photonic element, which we refer to as a photonic probabilistic neuron (PPN). We implement this PPN using a biased optical parametric oscillator, which utilizes quantum vacuum noise to generate a tunable probability distribution controlled by a bias field. We then implement a measurement-and-feedback scheme for time-multiplexed PPNs in electronic processors (FPGA or GPU) to solve certain probabilistic machine learning tasks. We showcase how probabilistic behavior can be encoded in two representative classes of machine learning models, discriminative and generative, by demonstrating probabilistic inference and image generation of MNIST handwritten digits. While solving these probabilistic machine learning tasks, quantum vacuum noise works as a random source which can encode classification uncertainty in inference and enable probabilistic generation of samples. Furthermore, we propose a path toward an all-optical probabilistic computing platform. We estimate the sampling rate of the PPN as ∼ 1 Gbps and its energy consumption as ∼ 5 fJ/MAC. Our work paves the way for scalable, ultrafast, and energy-efficient probabilistic machine learning hardware.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Black Collective Memory as Economic Development Practice: Resistance and Renaissance in Louisiana’s River Parishes</title>
<link href="https://hdl.handle.net/1721.1/156145" rel="alternate"/>
<author>
<name>Allen, Trace</name>
</author>
<id>https://hdl.handle.net/1721.1/156145</id>
<updated>2024-08-15T04:00:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Black Collective Memory as Economic Development Practice: Resistance and Renaissance in Louisiana’s River Parishes
Allen, Trace
Incentivized by federal industrial policy, regional economies around the United States are entertaining transitions to sustainable economies. This thesis investigates the role of a Black collective memory in shaping the past, present, and future of these economies. Utilizing case studies, it profiles two visionary, trailblazing environmental justice organizations, Rise St. James and The Descendants Project. These organizations are situated in two rural, Black towns (St. James and Wallace, respectively) in Louisiana’s River Parishes, known infamously as Cancer Alley because the corridor possesses the highest density of petrochemical infrastructure in the Western Hemisphere, marking Black residents as sacrificial for the sake of “economic development.” These current economic development practices are descended from what Clyde Woods described as “plantation epistemologies” rooted in “...monopoly of land, resources, and capital…and the immobility of Black labor” (Woods, 2017, p. 215). An economic transition rooted in this plantation logic may soon produce heirs promoting “false solutions” to the intertwined environmental justice and climate crises.&#13;
&#13;
Moving beyond standard deficit narratives, these cases assert the agency of these Black descendant organizations (and their ancestors) in leveraging a Black collective memory both to “stop the bad” and to “build the good”. This agency is denoted by the Black collective memory of the nation’s largest slave rebellion, which occurred in the River Parishes, and by these organizations leading and embodying development rooted in honoring those ancestors. As we embark on this seismic economic transition, what lessons can be learned from these environmental justice leaders to embody Dr. David Pellow’s claim that “these threatened bodies, populations, and spaces are indispensable to building socially and environmentally just and resilient futures for us all” (Pellow, 2016, p. 227)?
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Decarbonization of California’s Transportation Sector: A Comparative Analysis of Aviation, Electric Vehicles, and High-Speed Rail using the ASIF Framework</title>
<link href="https://hdl.handle.net/1721.1/156144" rel="alternate"/>
<author>
<name>Becerril, Kimberly</name>
</author>
<id>https://hdl.handle.net/1721.1/156144</id>
<updated>2024-08-15T03:04:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Deep Decarbonization of California’s Transportation Sector: A Comparative Analysis of Aviation, Electric Vehicles, and High-Speed Rail using the ASIF Framework
Becerril, Kimberly
This thesis explores California's journey towards deep decarbonization in the transportation sector, focusing on the pivotal roles of aviation, electric vehicles, and high-speed rail. Using the ASIF framework, it analyzes the Activities, Share, Intensity, and Fuel of each mode, aiming to illustrate the intricate ways that different transportation options contribute to greenhouse gas emissions. By examining the challenges and opportunities presented by these transportation modes, the thesis underscores the need for comprehensive strategies that transcend incremental technological improvements. Bearing California's ambitious climate goals in mind, this report explores the complex interplay between transportation, urban development, and land-use patterns, highlighting the importance of systemic changes for achieving sustainable mobility. Through a comparative analysis and case study, the thesis offers valuable insights into the impact of different transportation systems on California’s transition towards a carbon-neutral future.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wildly Inaccessible: Reaching Public Lands via Public Transit</title>
<link href="https://hdl.handle.net/1721.1/156143" rel="alternate"/>
<author>
<name>O'Connell, Nineveh</name>
</author>
<id>https://hdl.handle.net/1721.1/156143</id>
<updated>2024-08-15T03:27:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Wildly Inaccessible: Reaching Public Lands via Public Transit
O'Connell, Nineveh
Public transit offers a valuable method to sustainably connect people to highly sought amenities, including outdoor spaces. Outdoor recreation has grown in popularity, with a particular uptick in outdoor activity in response to the Covid-19 pandemic. Additionally, a growing body of research has demonstrated the public health benefits of access to open space. Auto-dependency in the United States often requires visitors to arrive at outdoor spaces via personal vehicle, generating carbon emissions and limiting outdoor access for communities without reliable access to cars. Further, high demand for outdoor access has resulted in visitors parking in unauthorized spots along the shoulder of roads when trailhead parking lots reach capacity, creating congestion and unsafe road conditions as people walk between their cars and trailheads alongside moving traffic. City dwellers and the environment would benefit from public transportation services connecting densely populated areas to beloved outdoor spaces. This paper explores how fixed-route public transit has brought urban communities closer to nearby trailheads with two examples in the American West: Trailhead Direct in King County, WA, and the Muir Woods Shuttle in Marin County, CA. Both programs were implemented in the twenty-first century in response to unsafe conditions at trailhead parking lots, yet they have grown to operate under very distinct models. Sequencing the evolution of these transit-to-trails programs relative to stated program goals provides insight into the degree to which they have been successful, and what further work could be done to improve visitor experience, prioritize ecosystem protection, and increase equitable access to the outdoors. Adaptation to unforeseen circumstances, creative marketing and routing tailored to a clear customer group, and securing funding from relevant stakeholders have continually influenced both programs. These case studies showcase the value of partnerships between land managers and transit agencies, and analysis of their history highlights key components to consider when designing sustainable, reliable transit-to-trails service.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Home, Again: Recommendations for strengthening social and financial post-buyout outcomes of the New Jersey Blue Acres Program</title>
<link href="https://hdl.handle.net/1721.1/156142" rel="alternate"/>
<author>
<name>Zhao, Elisha Rose</name>
</author>
<id>https://hdl.handle.net/1721.1/156142</id>
<updated>2024-08-15T03:33:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Home, Again: Recommendations for strengthening social and financial post-buyout outcomes of the New Jersey Blue Acres Program
Zhao, Elisha Rose
The buyout of homes that have undergone significant cumulative flood damage, or are poised to do so, is an increasingly relied-upon tool used by government agencies, including the New Jersey Department of Environmental Protection (NJDEP) Blue Acres Program, to adapt within the arena of accelerating climate change. Buyouts act primarily as a form of climate adaptation; residents voluntarily move away from areas of high flood risk and are generally equipped with the market value of their former homes to find safer housing, while the former homes are demolished and the land made into open space. The post-buyout process, which is where social and financial consequences crystallize materially, forms the focus of this thesis study. In the case of Blue Acres, much effort is made to guide participants towards eligible incentives or supplemental relocation assistance on top of their appraisal value, which requires relocating outside of a flood zone and/or within the same community. Additionally, Blue Acres has established itself within a larger network of community organizations and other state agencies that it can point participants to for disaster recovery relief and housing counseling.&#13;
&#13;
Nevertheless, its post-buyout process has potential to make concrete many of the improvements that buyout scholars across the U.S. advise, further strengthening its role as a national pioneer in managed retreat. I propose five recommendations based on this literature: establishing a tracking system of outcomes, creating a low-income homeowners relocation incentive, expanding on the Smart Move pilot program, involving former and remaining residents to decide how bought-out land in their neighborhood is used, and collaborating with municipalities to bring buyouts into their long-range adaptation planning. These form the basis of my question: How can Blue Acres strengthen the post-buyout branch of its services to ensure better long-term social and financial outcomes for its participating homeowners?&#13;
&#13;
Keywords: Flood, buyouts, climate adaptation, climate resilience, municipal finance, local government, housing, land use, community engagement
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Factories to Classrooms: The Influence of FDI-Led Industrialization on Educational and Vocational Training Infrastructure in Binh Duong Province, Vietnam</title>
<link href="https://hdl.handle.net/1721.1/156141" rel="alternate"/>
<author>
<name>Trinh, Linh</name>
</author>
<id>https://hdl.handle.net/1721.1/156141</id>
<updated>2024-08-15T03:01:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Factories to Classrooms: The Influence of FDI-Led Industrialization on Educational and Vocational Training Infrastructure in Binh Duong Province, Vietnam
Trinh, Linh
This thesis addresses the gap in the literature concerning the impact of Foreign Direct Investment (FDI)-led industrialisation on educational and vocational training infrastructure in Binh Duong Province, Vietnam. The gap highlights disparities between urban and rural areas and the misalignment between skills provided by local educational and vocational training institutions and those demanded by FDI-driven industries.&#13;
&#13;
The research question guiding this study is: How has Binh Duong Province developed its human resources to meet the demands of its economic and industrial development over the past two and a half decades? This study explores the direct impacts of FDI on the province's economic and industrial landscape, how educational and vocational training systems have responded to industrial demands, as exhibited by investments in physical infrastructure, the alignment between schooling and training outputs and industrial requirements, and the challenges and gaps in current human capital development strategies.&#13;
&#13;
Employing a mixed-methods approach, the research combines quantitative data analysis with qualitative insights. Quantitatively, it uses Pearson correlation analysis to examine the relationship between industrial development and educational infrastructure development, alongside geospatial mapping for spatial insights. Qualitative methods include an extensive review of human capital development strategies, legal frameworks, and global educational and vocational training models.&#13;
&#13;
Key findings indicate significant gaps in Binh Duong's educational and vocational training systems. Despite substantial FDI inflows transforming the province into an industrial hub, there is a misalignment between educational and vocational training outputs, and the skills required by industries, especially in high-tech sectors. The study underscores the need for reforms in educational and vocational training programmes, advocating for tailored vocational training, a shift towards a market-driven human development strategy, and stronger partnerships between public and private sectors in both education and industry.&#13;
&#13;
This research concludes that bridging the gap between industrial needs and educational outputs is crucial for sustainable economic growth and enhancing Binh Duong’s competitiveness. It provides actionable insights for policymakers and industry stakeholders to develop integrated strategies ensuring a skilled and adaptable workforce for a modern economy.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Random Access Satellite Communication in the Presence of Interference</title>
<link href="https://hdl.handle.net/1721.1/156140" rel="alternate"/>
<author>
<name>Copley, Jonathon H.</name>
</author>
<id>https://hdl.handle.net/1721.1/156140</id>
<updated>2024-08-15T03:01:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Random Access Satellite Communication in the Presence of Interference
Copley, Jonathon H.
This thesis explores a random access service for satellites operating amid the possibility of unintentional or intentional interference. Previous work does not jointly address large bandwidth-delay-product systems and interference. This thesis combines the two challenges, developing a methodology for modeling and stabilizing the random access protocol, accommodating long delays, and mitigating interference.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nonlinear Microscopy for Materials Analysis and Clinical Pathology</title>
<link href="https://hdl.handle.net/1721.1/156139" rel="alternate"/>
<author>
<name>Doshi, Sagar P.</name>
</author>
<id>https://hdl.handle.net/1721.1/156139</id>
<updated>2024-08-15T03:44:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Nonlinear Microscopy for Materials Analysis and Clinical Pathology
Doshi, Sagar P.
From understanding biological systems to characterizing materials, microscopy has facilitated the analysis of micro- and nanoscale systems across scientific disciplines. The optical transparency of different biological features allows pathologists to relate what they see on a microscope slide to fundamental mechanisms of disease. The same notions of micro- and nano-sized features and optical transparency make microscopy an extremely effective technique for analyzing material properties. Nonlinear microscopy (two-photon absorption fluorescence) was used to image surgical specimens in a clinical pathology practice. The optical system design of the instrument is explained, and its performance in terms of diagnostic accuracy (sensitivity/specificity) and speed is presented. Exploratory, qualitative studies of imaging histopathologies beyond breast and prostate tissue are also provided. Towards the development of high-efficiency frequency converters for visible-near-infrared light, periodic poling of thin-film lithium niobate (TFLN) was conducted. State-of-the-art poling for quasi-phase matching was achieved via an iterative process. Devices were poled in a custom-built high-voltage probing setup and imaged with a second harmonic generation (SHG) microscope to provide feedback on the poling parameters. A select number of samples were also imaged with piezo force microscopy. The effect of poling parameters on grating quality is analyzed, and the effect of the SHG microscope system design on image quality is quantified. Finally, a successful demonstration of SHG in a TFLN device is shown.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing misspecification in contextual optimization</title>
<link href="https://hdl.handle.net/1721.1/156138" rel="alternate"/>
<author>
<name>Bennouna, Omar</name>
</author>
<id>https://hdl.handle.net/1721.1/156138</id>
<updated>2024-08-15T03:22:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Addressing misspecification in contextual optimization
Bennouna, Omar
We study the predict-then-optimize framework, which combines machine learning with a downstream optimization task. This approach entails forecasting unknown parameters of an optimization problem and then solving the optimization task based on these predictions. For example, consider an energy allocation problem in which the energy cost in different areas is uncertain. Despite the absence of precise energy cost values at the time of problem-solving, machine learning models are employed to predict these costs, and the resulting optimization problem, which might consist, for example, of minimizing energy costs while meeting some minimal requirements, is solved using state-of-the-art optimization algorithms. When the chosen hypothesis set is well-specified (i.e., it contains the ground-truth predictor), the SLO (Sequential Learning and Optimization) approach performs best among state-of-the-art methods and has provable performance guarantees. In the misspecified setting (i.e., the hypothesis set does not contain the ground-truth predictor), the ILO (Integrated Learning and Optimization) approach seems to behave better in practice, but does not enjoy theoretical optimality guarantees. We focus on the misspecified setting. In this case, there is no known algorithm that rigorously solves this prediction problem. We provide a tractable ILO algorithm which successfully finds an optimal solution in this setting. Our approach consists of minimizing a surrogate loss which enjoys theoretical optimality guarantees as well as good behavior in practice. In particular, we show that our approach experimentally outperforms SLO and previous ILO methods in the misspecified setting.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the Complex Dynamics of Regenerating Urban Vacancy: A Case Study of Songkhla, Thailand</title>
<link href="https://hdl.handle.net/1721.1/156137" rel="alternate"/>
<author>
<name>Sahacharoenwat, Ponpat</name>
</author>
<id>https://hdl.handle.net/1721.1/156137</id>
<updated>2024-08-15T03:12:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Understanding the Complex Dynamics of Regenerating Urban Vacancy: A Case Study of Songkhla, Thailand
Sahacharoenwat, Ponpat
This thesis investigates the pervasive issue of urban vacancy in Songkhla City, Thailand, characterized by the prevalence of vacant, abandoned, or underutilized properties. Such urban vacancies arise from a complex interplay of factors, including economic downturns, demographic shifts due to urban depopulation or migration, speculative real estate practices, and disparities in urban development and public infrastructure. These vacancies contribute to urban decay, affecting the vitality and functionality of city centers and leading to economic and social issues.&#13;
The thesis employs causal loop analysis to illustrate the complex interactions involved in regenerating urban vacancies. The thesis begins with a comprehensive overview of the urban vacancy crises in Songkhla City. Following this, the study delves into an analysis of the dynamics involved in regenerating these urban vacancies. It particularly emphasizes the role of private investment and evaluates the impact of existing urban planning tools and policies, as illustrated through causal loop diagrams. Subsequently, the thesis proposes specific strategies and strategic actions aimed at revitalizing these vacant spaces. These proposed measures are integrated into another causal loop diagram to assess their potential impacts on the urban dynamic. Finally, the thesis concludes with a discussion of broader policy implications, reflecting on how the insights gained from Songkhla City could inform and influence national-level policies aimed at revitalizing secondary cities across Thailand.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Agroecological Response to the Militarized Urban in Vieques, Puerto Rico</title>
<link href="https://hdl.handle.net/1721.1/156136" rel="alternate"/>
<author>
<name>Ouadani, Oussama</name>
</author>
<id>https://hdl.handle.net/1721.1/156136</id>
<updated>2024-08-15T03:05:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Agroecological Response to the Militarized Urban in Vieques, Puerto Rico
Ouadani, Oussama
Between 1942 and 1950, the United States Navy forcefully occupied the Puerto Rican island-municipality of Vieques and constructed three military facilities there, known collectively as the Atlantic Fleet Weapons Training Facility. In the process, the Navy dispossessed 70% of the land and displaced 50% of the population, artificially precipitating Vieques’s shift from a rural to an urban society. After an errant bomb killed a local resident, intense grassroots mobilizations succeeded in ousting the Navy from Vieques in 2003, but the extensive ecological and social harm it generated was devastating and enduring. This thesis contextualizes within Vieques the production of what Palestinian urban scholar Abreek-Zubiedat (2023) terms “militarized urbanism(s)” and highlights the island’s contemporary agroecological movement in response to it. The thesis then traces how the militarized urban emerged and operated in Vieques vis-à-vis displacement-resettlement logics, the imposition of spatial prohibitions and ecocide, and the gamification of land and society. Finally, I offer possibilities for reimagining our ecological and urban spaces in Vieques and beyond. Complementing my embodied, archival, and theoretical research methodology is an affective treatment of the island’s militarized history through Pedro Juan Soto’s novel Usmaíl, published and set in mid-20th-century Vieques.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Step By Step: Suburban Active Transportation Planning in Spring Hill, Tennessee</title>
<link href="https://hdl.handle.net/1721.1/156135" rel="alternate"/>
<author>
<name>Tucker, Keili A.</name>
</author>
<id>https://hdl.handle.net/1721.1/156135</id>
<updated>2024-08-15T03:03:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Step By Step: Suburban Active Transportation Planning in Spring Hill, Tennessee
Tucker, Keili A.
Suburban form produces car dependency with its circuitous routes, segregated land uses, and sprawling development. Active Transportation (AT), defined as non-motorized travel modes such as walking and cycling, has the potential to provide suburban residents with alternative mobility options. In 2015, Spring Hill, Tennessee, a city with suburban form and no dense urban core, adopted a Bicycle and Greenway Plan (BGP) to develop an AT network. This thesis seeks to understand how AT network plans are institutionalized, maintained, and expanded through policy and other implementation tools in order to accelerate the build-out of AT infrastructure in Spring Hill. The thesis begins with four case studies: Spring Hill, Tennessee; Jefferson County, Alabama; Apex, North Carolina; and Mississippi Mills, Ontario, Canada. The case studies revealed that infrastructure, policy-making, and social programs must go hand in hand for a successful network. The thesis continues with sixteen one-on-one interviews with municipal staff, elected officials, and local developers in Spring Hill. The interviews addressed perspectives on walkability, experiences with AT implementation, and ideas for improving citywide pedestrian accessibility. The interviews reinforced that separated land uses and sprawling development limit the potential for walkability. Additionally, they revealed that greenfield development has been responsible for the majority of the BGP build-out thus far. BGP implementation would benefit from more buy-in from the city through dedicated funding streams and better use of existing programs that target pedestrian infrastructure. This work contributes to Active Transportation research by investigating the unique challenges of establishing walkability in rapidly growing suburban places.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Making Place for Arts &amp; Culture: how arts &amp; culture play into interdisciplinary strategies for community development without displacement</title>
<link href="https://hdl.handle.net/1721.1/156134" rel="alternate"/>
<author>
<name>Tolani, Yuvika</name>
</author>
<id>https://hdl.handle.net/1721.1/156134</id>
<updated>2024-08-15T03:35:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Making Place for Arts &amp; Culture: how arts &amp; culture play into interdisciplinary strategies for community development without displacement
Tolani, Yuvika
This thesis seeks to deepen our understanding of arts &amp; culture in the context of community development at a neighborhood scale. Specifically, it asks: how might arts and culture interventions be used as part of interdisciplinary strategies to bolster marginalized communities dealing with systemic disinvestment without exacerbating development-induced displacement? Focusing on el Punto in Salem, MA, it surfaces tools used by a Community Development Corporation (CDC) working in a majority-immigrant community in the heart of a city. In doing so, it contemplates key tensions inherent in attempting to align a development strategy with community interests. Intersecting with the work of the Metropolitan Area Planning Council Department of Arts &amp; Culture, it then turns to Boston’s Chinatown—a deeply different context, with certain shared characteristics—as a site of further inquiry.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Rural to Urban: Examining Urbanization and Quality of Life in Hawassa, Ethiopia</title>
<link href="https://hdl.handle.net/1721.1/156133" rel="alternate"/>
<author>
<name>Tesfaye, Bethlehem Fisseha</name>
</author>
<id>https://hdl.handle.net/1721.1/156133</id>
<updated>2024-08-15T03:41:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Beyond Rural to Urban: Examining Urbanization and Quality of Life in Hawassa, Ethiopia
Tesfaye, Bethlehem Fisseha
This thesis examines the rapidly urbanizing secondary city of Hawassa, Ethiopia, studying patterns of urban growth to determine how different models of planning for urban development can influence quality of life for urban residents in Ethiopia. To do so, it proposes its own operational definition of ‘quality of life’, comprising (1) increased commercial activity, (2) access to affordable housing, (3) general health &amp; well-being, (4) healthy environments, (5) access to affordable transportation, and (6) community involvement &amp; sense of belonging. Hawassa is one of many secondary cities in Ethiopia and across Sub-Saharan Africa that are growing at a pace faster than the city can plan for, leading to unsustainable informal settlements that exacerbate inequity. While recent planning models have been put in place to avoid such settlements in Hawassa, such as the Urban Expansion Initiative, the Urban Local Government Development Project, and Special Economic Zones, the progress of these models and their effect on the newly urbanized population has yet to be evaluated. Furthermore, a successful model of urban planning that is self-sufficient, localized to its community, and accountable to the welfare of its population has yet to be defined. This project aims to determine a new standard for the evaluation of future urban development projects in secondary cities that incorporates equitable frameworks for decision-making in the formation of local planning policy and urban design by (1) quantitatively assessing the correlation between urban living and an inherited index for individual wealth, used as a proxy for ‘quality of life’, at a national level; (2) compiling and analyzing existing information and data on Hawassa’s recent urban development; and (3) constructing a narrative of Hawassa’s city development through new data gathered from the affected population of Hawassa on attitudes towards urbanization in three key study areas of the city: BahilAdarash sub-city, Tabor sub-city, and the area around Hawassa Industrial Park.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Pilots to Stable Services: Documenting the Rise and Diversity of Microtransit in the U.S.</title>
<link href="https://hdl.handle.net/1721.1/156130" rel="alternate"/>
<author>
<name>Humann, McKenzie</name>
</author>
<id>https://hdl.handle.net/1721.1/156130</id>
<updated>2024-08-15T03:40:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Pilots to Stable Services: Documenting the Rise and Diversity of Microtransit in the U.S.
Humann, McKenzie
In 2014, the emergence of public on-demand, ride-sharing services, known as microtransit, (re)captured the attention of techno-positive urbanists. Echoing the same arguments made for demand-response transit in the 1970s, new transit technology startups like Via, Chariot, and Bridj touted microtransit as a more affordable alternative to private ride-hailing services, while promising greater efficiency and improved customer experiences compared to traditional bus services. Proponents believed this "disruptive transportation innovation" could alleviate traffic congestion and reduce vehicle emissions if scaled successfully.&#13;
&#13;
Following five years of mixed results from early pilot programs, it took the truly disruptive Covid-19 pandemic to launch microtransit into an accelerated phase of adoption. Many transit agencies replaced underperforming bus routes with microtransit, while others used federal funding to launch new pilots designed to connect riders to existing transit nodes. Yet the sparsity of public data on microtransit services prevents researchers unaffiliated with any major technology providers from establishing baseline service metrics or comprehensively evaluating the performance of these new programs in relation to each other, let alone assessing any broader effect on travel patterns.&#13;
&#13;
This thesis provides the first comprehensive documentation of microtransit's growth and trends in service design in the U.S. as a first step toward assessing its current state. A newly compiled dataset reveals the diversity and variability of microtransit programs in their service goals, types, and designs. Finally, this thesis proposes a new assessment framework to help microtransit administrators balance competing trade-offs like cost-efficiency, reliability, and flexibility based on their service goals and transit needs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Balancing Purpose and Growth: Evaluating Community Land Trusts (CLTs) as an Organizational Model and the Imperative for Strategic Management</title>
<link href="https://hdl.handle.net/1721.1/156129" rel="alternate"/>
<author>
<name>Rosario, Eduardo</name>
</author>
<id>https://hdl.handle.net/1721.1/156129</id>
<updated>2024-08-15T03:26:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Balancing Purpose and Growth: Evaluating Community Land Trusts (CLTs) as an Organizational Model and the Imperative for Strategic Management
Rosario, Eduardo
The U.S. is experiencing a severe housing affordability crisis affecting both homeownership and rental markets across income levels. Community land trusts (CLTs) have gained popularity as a promising model to help preserve long-term affordable housing. However, while CLTs have been extensively studied conceptually, relatively little research has examined the strategic management choices and internal practices which ultimately impact the CLT's ability to scale its impact within communities.&#13;
&#13;
This thesis explores pivotal strategic considerations faced by CLT leaders as their organizations evolve. Through a review of the origins and philosophies underlying the CLT model, examples of CLTs across the U.S., and in-depth case studies, the research identifies three key areas where management choices are critical: 1) Clearly defining the CLT's vision, mission, and goals to maintain focus; 2) Navigating tradeoffs in organizational setup, housing types, scale, and speed of development; and 3) Aligning leadership capabilities with the CLT's growth stage.&#13;
&#13;
The findings highlight that while CLTs share the singular purpose of providing permanently affordable housing, their management priorities and pathways to impact can diverge significantly based on contextual factors and strategic decisions. This analysis provides a framework for CLT leaders to intentionally guide the trajectory of their organizations based on their specific missions, needs, market conditions, and aspirations for scale. The research aims to inform both emerging and established CLTs to maximize their impact on the housing affordability crisis.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Doing Good By Approaching Well: Enhancing a System Thinking Mindset and Ecosystem for Student Entrepreneurs Tackling Systemic Issues in Growth Markets.</title>
<link href="https://hdl.handle.net/1721.1/156128" rel="alternate"/>
<author>
<name>Briceno Brignole, Raul</name>
</author>
<id>https://hdl.handle.net/1721.1/156128</id>
<updated>2024-08-15T03:24:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Doing Good By Approaching Well: Enhancing a System Thinking Mindset and Ecosystem for Student Entrepreneurs Tackling Systemic Issues in Growth Markets.
Briceno Brignole, Raul
According to the United Nations, if current trends persist, the Sustainable Development Goals (SDGs) will not be met until 2082. The social, environmental, and economic issues behind these goals are inherently systemic and have often been inadequately addressed by governments, companies, and nonprofits. A systemic lens is essential for tackling these root causes and fostering strategic collaboration among stakeholders to achieve a systemic shift. Combining entrepreneurship with systems thinking emerges as a powerful approach to drive this change. The Legatum Center at MIT has been at the forefront of empowering aspiring student entrepreneurs to address pressing issues in growth markets, fostering innovation and prosperity. This thesis delves into the concepts of systems thinking and system-change entrepreneurship, and proposes tailored frameworks and recommendations for the Legatum Center. These proposals aim to cultivate a system-change entrepreneurial environment, equip aspiring student system-change entrepreneurs, and further position Legatum as a central force at MIT promoting prosperity and change through a systemic approach to purpose-driven entrepreneurship.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Collective Bargaining to Community Benefits: Leveraging Organized Labor to Advance an Equitable Clean Energy Transition</title>
<link href="https://hdl.handle.net/1721.1/156127" rel="alternate"/>
<author>
<name>Oh, Sung Eun Sally</name>
</author>
<id>https://hdl.handle.net/1721.1/156127</id>
<updated>2024-08-15T03:56:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Collective Bargaining to Community Benefits: Leveraging Organized Labor to Advance an Equitable Clean Energy Transition
Oh, Sung Eun Sally
This research aims to bridge the gap in understanding how community benefits in the clean energy transition can expand opportunities for workers and communities of color, particularly within the context of Community Benefits Programs (CBP), with a focus on the role of organized labor. Federal climate legislation such as the Infrastructure Investment and Jobs Act (IIJA) and the Inflation Reduction Act (IRA) is expected to propel the growth of the clean energy sector, and it is imperative to ensure that the resulting job creation and wealth-building opportunities are equitably distributed to historically disadvantaged communities. This paper analyzes the position of organized labor within the federal framework for addressing equity in the energy transition and its potential to bolster labor-climate movements. Positioned in the discourse on the political economy of energy transition and organized labor's historical role in advancing or impeding environmental justice and racial equity goals, this research examines traditional tools of labor and new directions posed by the community benefits movement. The research conducts a comparative case study using qualitative data to analyze key stakeholder priorities, labor-community engagement, and enforcement mechanisms of community benefits agreements (CBAs) within the auto manufacturing sector in Los Angeles, CA, and Detroit, MI. Findings suggest that organized labor possesses significant leverage in negotiating community benefits but lacks influence in shaping the overall infrastructure for implementation and enforcement. The paper recommends that federal CBP guidelines or other funding conditionalities could help fill this gap by providing the coordination and resource allocation needed to shape the legal, political, and civic infrastructure that guides community benefits negotiations, implementation, and enforcement.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building Coastal City Resilience and Extreme Heat Action in Zanzibar, Tanzania through Multi-Hazard Risk Assessment (MHRA)</title>
<link href="https://hdl.handle.net/1721.1/156126" rel="alternate"/>
<author>
<name>Shahdadpuri, Anushka</name>
</author>
<id>https://hdl.handle.net/1721.1/156126</id>
<updated>2024-08-15T03:48:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Building Coastal City Resilience and Extreme Heat Action in Zanzibar, Tanzania through Multi-Hazard Risk Assessment (MHRA)
Shahdadpuri, Anushka
The Coastal City Resilience and Extreme Heat Action Project (CoCHAP) is an ongoing initiative of the Red Cross Red Crescent Climate Center that aims to build climate resilience in urban areas, particularly addressing extreme heat and coastal threats in Southeast Asia, Latin America, and East Africa. This project is conducted in collaboration with the International Federation of Red Cross and Red Crescent Societies (IFRC), the American Red Cross, the Global Disaster Preparedness Center, and the National Red Cross Societies. As part of CoCHAP, this thesis investigates the spatial vulnerabilities of compound risks related to heatwaves and flooding in Zanzibar, East Africa, in partnership with the Tanzania Red Cross Society (TRCS). Recent increases in temperature and precipitation have heightened Zanzibar's vulnerability. With one of the highest population densities in Africa, the region's economy relies heavily on climate-sensitive activities such as agriculture, tourism, and fishing, making it the most climate-vulnerable small island region. To understand the region's dichotomous predicament, I analyze the location-dependent climatic, socio-economic, physiological, and environmental parameters using a Multi-Hazard Risk Assessment (MHRA). The assessment evaluates three latent variables — exposure, vulnerability, and hazard — derived from remote sensing and household census survey (HCS) data. Principal component analysis and spatial analysis techniques were employed to assess the weighted vulnerability of over 100 wards (the smallest administrative zones) to both heat and flood risk. I find that while the hazard factor itself does not pose a major risk in Zanzibar, socio-economic conditions, coupled with inflexible planning under neoliberal frameworks, exacerbate risks, particularly in urban wards. This is evident in the distribution of flood and heat risk, which appears randomly distributed throughout the island city, even though high land surface temperatures and precipitation are concentrated around existing built-up coastal areas. Twenty wards were identified as highly vulnerable to heatwaves and coastal flooding, revealing nuanced variations in multi-risk distribution across urban, suburban, and agrarian areas, influenced by gradients from coastal low-elevation to high-elevation inland zones. Notably, tourism-dependent wards emerge as potential areas for synergistic ecological and economic gains. These findings offer crucial insights for the TRCS, informing tailored adaptation plans as part of the Zanzibar Climate Change Alliance: City Wide Risk Assessment (CWRA).
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pa’ashi National Park: Resiliency, Restoration, and Reparative Planning for California’s Tulare Lake Basin</title>
<link href="https://hdl.handle.net/1721.1/156125" rel="alternate"/>
<author>
<name>O'Neil, Hazel</name>
</author>
<id>https://hdl.handle.net/1721.1/156125</id>
<updated>2024-08-15T03:53:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Pa’ashi National Park: Resiliency, Restoration, and Reparative Planning for California’s Tulare Lake Basin
O'Neil, Hazel
In 2023, a series of atmospheric rivers reawakened California's largest sleeping lake, Pa'ashi, in the Tulare Lake Basin. This thesis supports the proposal of the Tachi Yokut Tribe, one of the watershed's Indigenous communities, to preserve Pa'ashi in the form of a new National Park. I present historical and environmental context that explains how the lake was put to sleep by Manifest Destiny-era agricultural settlement and subsequent consolidation of political control over water. I argue that the Tachi Yokut Tribe's proposal for a National Park is a pragmatic, feasible, and desirable planning response to the region's interwoven challenges of climate change, ecological imbalance, and pervasive environmental injustice. I demonstrate how the community might develop the ideas of the park further through a sample visioning process and landscape design framework for the watershed. This thesis advances a theory of "two-eyed seeing" (Bartlett et al., 2012) planning practice by centering Indigenous values and planning scholarship to articulate how planners and designers might foster stronger connections between people, place, and nature when undertaking landscape-scale climate adaptation projects.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pathways to Equity: Mapping the Impacts of Nairobi’s Urban Form on Pedestrian Mobility</title>
<link href="https://hdl.handle.net/1721.1/156124" rel="alternate"/>
<author>
<name>Kifetew, Yabework Abebe</name>
</author>
<id>https://hdl.handle.net/1721.1/156124</id>
<updated>2024-08-15T03:44:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Pathways to Equity: Mapping the Impacts of Nairobi’s Urban Form on Pedestrian Mobility
Kifetew, Yabework Abebe
As urban growth continues to surge in Nairobi, Kenya, many development projects focus on highway and road improvement, with little to no investment channeled into better pedestrian infrastructure. The lack of proper sidewalks and crossings in Nairobi makes walking in the city a risk that residents take every day – often affecting low-income residents who rely on walking as their main mode of transportation. Although there have been improvements to pedestrian infrastructure in recent years, pedestrian crash rates remain high, particularly along highways. By using various statistical and spatial analysis models, this study explores Nairobi’s built environment and how it may impact the patterns and behaviors of pedestrians in order to better understand where and why crashes occur. This work is grounded in an exploration of the social history of Nairobi’s built forms, and how its colonial past has influenced the current policies that favor car-centric mega-infrastructure. It challenges the city’s pursuit of “global” status through these policies at the cost of its residents and uses data analysis as a tool to advocate for a shift in development priorities.&#13;
&#13;
The goal of this study is to create a framework in which the built environment can be studied to identify risk factors for pedestrian safety and to provide insights on how urban design policies can improve infrastructure for pedestrians and marginalized populations. Although focused on Nairobi, the framework is designed to be applicable to other Global Majority cities that face similar urban infrastructure challenges and data scarcity. In a context where cars and highways are prioritized, this work can be leveraged for more equitable design practices in these cities and make them safer and more accessible for captive walkers.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Klondike Memory Project: Race, Counter-Memory, and Planning Processes</title>
<link href="https://hdl.handle.net/1721.1/156123" rel="alternate"/>
<author>
<name>Thompson-Smith, Diamond</name>
</author>
<id>https://hdl.handle.net/1721.1/156123</id>
<updated>2024-08-15T03:24:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Klondike Memory Project: Klondike Memory Project: Race, Counter-Memory, and Planning Processes
Thompson-Smith, Diamond
This thesis expands on existing community-driven archiving work started by the Klondike-Smokey City Community Development Corporation (KSCCDC) in 2016 to share untold and lesser-known collective histories from the Klondike and Smokey City neighborhoods in Memphis, Tennessee. Using a photo essay and forthcoming cartographic tools as dissemination methods, I aim to support communal healing and reconciliation following long histories of structural racialized disinvestment in these neighborhoods. In this project, we amplify challenges to state narratives that attempt to decontextualize Black history from racist regimes and legacies to subjugate Black and Brown epistemologies. In this thesis, I propose that memory work and acts of truth-telling offer communities that have experienced racial planning and state erasure a pathway toward acquiring justice and repairing structural harm by helping them reaffirm their identities, assert their humanity, hold perpetrators of harm accountable, and envision liberatory futures. I also claim that memory is a tool planners can employ within the reparative framework to help disrupt “rational” planning logic that attempts to discredit embodied experience and epistemologies of Black people as invalid data or “non-data.” Lastly, I insist that using critical cartographic practices such as counter-mapping further disrupts White supremacy and erasure practices embedded within rational planning logic and archival practices by situating the validity of collective memory in place and landscapes.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Jail Is Not Social Housing: making new grounds for Chinatown</title>
<link href="https://hdl.handle.net/1721.1/156122" rel="alternate"/>
<author>
<name>Zhong, Calvin</name>
</author>
<id>https://hdl.handle.net/1721.1/156122</id>
<updated>2024-08-15T03:07:44Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Jail Is Not Social Housing: making new grounds for Chinatown
Zhong, Calvin
This story begins at the site of The Tombs, a jail in Chinatown that is currently being doubled in size as a part of a distributed alternative to the Rikers Island Jail. The new megajail will have capacity to house up to 886 people in detention and will include space for on-site services and programming, staff facilities, and publicly accessible commercial and community space on the ground floor. This expansion exposes how architecture behaves as a mode of cultural production and acts in service of capitalist and carceral systems. Nowhere is this more evident than in New York City’s Chinatown - often called the final frontier for development in Lower Manhattan. Immigrants, who’ve long come in search of land, green pastures, and single-family homes, find themselves Downtown and within ethnic enclaves, where homeownership is historically and canonically low. At this site, generations of indigenous tribes, freed African communities, and various immigrant communities endure a cycle of settlement, disenfranchisement, and eventually, destruction. The city, rather than invest in its communities, responds each time with a new jail. Under this urban mode, architecture provides few forms of accessible inhabitation beyond the neo-feudal rental system and racialized prison industrial complex. It exists to extend exploitation by selling the dream of homeownership, yet only makes room to support a select few. This thesis is interested in the limited means of shelter that are encapsulated within the architectural imagination - it asks us to reconsider value systems beyond ownership and incarceration. If architecture were to reimagine how it produces - culturally, tectonically, morally - how could it act in service of the people of Chinatown, and in earnest support of the Dream that the profession has helped to proliferate? Or better yet, this thesis will reject and reverse the pattern of the site to wholly reimagine Chinatown and its dreams: first, to destroy the jail, then, to facilitate reconstruction, re-enfranchisement, and resettlement of communities lost.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing a Home Share Program for Asylum Seeking Migrants in New York City</title>
<link href="https://hdl.handle.net/1721.1/156121" rel="alternate"/>
<author>
<name>Mackin-Plankey, Francisco "Pancho"</name>
</author>
<id>https://hdl.handle.net/1721.1/156121</id>
<updated>2024-08-15T03:27:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Designing a Home Share Program for Asylum Seeking Migrants in New York City
Mackin-Plankey, Francisco "Pancho"
Throughout 2022, 2023, and 2024, increasing numbers of asylum-seeking migrants have sought shelter in New York City through the city’s shelter system. To provide shelter to asylum seekers, New York City expanded shelter capacity by adding services at congregate shelters, contracting with hotels to provide emergency shelter, and opening Humanitarian Emergency Response and Relief Centers. In late 2022, Enterprise Community Partners was contracted to evaluate the feasibility of operating a home share program for asylum-seeking migrants as an alternative to New York City’s shelter system. By early 2024, Enterprise completed the design of a home share program for asylum-seeking migrants. In its analysis, Enterprise found that difficulty recruiting hosts was one of the biggest challenges to operating the home share program. Enterprise’s program design focuses on minimizing the level of engagement and effort required of hosts, to lower the barrier to entry for participation in the program. This thesis explores Enterprise’s research process and proposes a program structure oriented toward making hosting a more substantive experience and building on the strengths of a potential program operator. During the earlier phases of its research, Enterprise considered focusing the program on specific neighborhoods in New York City, but ultimately moved away from a neighborhood-specific strategy. This thesis identifies which neighborhoods would be most suitable for a home share program, based on Enterprise’s initial criteria.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cooking Together: Form &amp; Function of Community Kitchen as Urban ‘Third Place’ Promoting Community Wellbeing</title>
<link href="https://hdl.handle.net/1721.1/156120" rel="alternate"/>
<author>
<name>Heneine, Emma M.</name>
</author>
<id>https://hdl.handle.net/1721.1/156120</id>
<updated>2024-08-15T03:01:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Cooking Together: Form &amp; Function of Community Kitchen as Urban ‘Third Place’ Promoting Community Wellbeing
Heneine, Emma M.
Global food insecurity has surged in recent years, with nearly one-third of the world population experiencing food insecurity between 2020 and 2022. Malnutrition remains a leading cause of death globally, making food access a major determinant of health. Cities are increasingly grappling with challenges as urban populations expand and urban food systems are affected by various systemic factors including geopolitical conflicts, economic crises, environmental anomalies, and epidemics. Urban planning and physical characteristics of the urban built environment also affect food access; single-use zoning, suburbanization, rising food costs, proliferation of processed foods, and food deserts contribute to urban food insecurity, disproportionately affecting low-income communities. As a result, urban populations have seen a rise in the prevalence of both undernourishment and obesity. Dating back centuries and found globally, community kitchens are places where food is prepared en masse by community members to address local food insecurity. During the COVID-19 pandemic, community kitchens (re)gained prominence, offering essential nourishment as well as solace and community amidst widespread hardship and isolation. Research indicates the success of community kitchens in improving nutrition, as well as a number of other benefits, including improved mental health, individual and collective empowerment, environmental sustainability, and social cohesion. Despite their effectiveness, reliance on community kitchens to address food insecurity reveals a tension over whether such responsibility should fall on communities, rather than being addressed structurally. Nonetheless, community kitchens represent vital interventions in the absence of adequate public services, showcasing the collective power of communities to address food insecurity and broader social challenges. Drawing from a sample of nine contemporary community kitchens around the world, this thesis explores how community kitchens’ form and function can evolve into critical urban infrastructures, offering benefits beyond food relief to promote community wellbeing in the aftermath of a community shock. In so doing, community kitchens represent urban ‘third places’ – becoming essential informal gathering spaces for communities through their promotion of the arts and culture, education and skills building, economic development, ecological stewardship, and community development.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolution of a Useful Place: The Gas Station in America</title>
<link href="https://hdl.handle.net/1721.1/156119" rel="alternate"/>
<author>
<name>Capozzi, Bennett</name>
</author>
<id>https://hdl.handle.net/1721.1/156119</id>
<updated>2024-08-15T03:25:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evolution of a Useful Place: The Gas Station in America
Capozzi, Bennett
The accelerating transition to EVs in the United States raises questions about the economic viability of fuel retailing in the coming decades; therefore it is necessary to think more deeply about the future of the 150,000 gas stations in the United States, most of which are polluted petroleum brownfields. This thesis uses three research methods to understand the past and present state of gas stations in America and the impact they have had on the built environment: historical research, site visits, and the case study method. First, the thesis explores the way that gas stations in America have adapted their form and program to changes in their political, economic, and technological environments throughout the twentieth century. Then, turning to existing sites, the thesis generates four gas station typologies based on location. These types differ based on key formal and programmatic characteristics, and they are likely to have different reuse futures in a post-gas station world. Photography and site visits capture the way that this process of reuse has already begun; the thesis documents how many former gas stations in the contemporary landscape have been redeveloped, converted to new uses, or abandoned over the past several decades. These adaptations reveal the way that context influences these sites beyond the lifespan of fuel retailing. With the understanding that the transition away from combustion-engine vehicles is likely to continue, the thesis presents a policy framework focused on three scenarios: continued fuel retailing, conversion to EV charging, and industry exit. The framework is designed to help policymakers and planners make informed decisions about how to adapt these sites as the number of gas stations in the United States steadily decreases, leaving a trail of polluted brownfields in its wake.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Claiming Identity through Space: LGBTQ+ Community Building via Commercial Development in West Hollywood and Palm Springs</title>
<link href="https://hdl.handle.net/1721.1/156118" rel="alternate"/>
<author>
<name>Ng, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/156118</id>
<updated>2024-08-15T03:21:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Claiming Identity through Space: LGBTQ+ Community Building via Commercial Development in West Hollywood and Palm Springs
Ng, Jason
Examining the relationship between queer identity and urban space, this thesis focuses on LGBT+ commercial real estate and its role in community building. Through the cities of West Hollywood and Palm Springs in California, it explores historic, contemporary, and forward-looking narratives of LGBT+-oriented commercial development, with an emphasis on retail, hospitality, and multifamily housing. Key questions address how LGBT+ communities claim and shape space (socially, economically, and physically) within “gayborhoods”, as well as strategies for navigating urban change. By analyzing these narratives with qualitative and quantitative methods, this thesis offers insights for developers, planners, and other stakeholders invested in creating vibrant, inclusive communities.&#13;
&#13;
This interdisciplinary mixed-methods approach includes original GIS and data analysis of historic LGBT+ establishments, demographic study, literature review, site observation, interviews with stakeholders ranging from economic development professionals to mayors, and case studies of a queer women-owned small business and LGBT+ senior living community. The findings underscore the subversive and politically charged origins of gayborhoods, characterized by authenticity, entrepreneurship, and community-centric values. The analysis also reveals challenges to gayborhood identity as West Hollywood and Palm Springs grapple with questions of gentrification vs. preservation, commercialization, and shifting demographics (aging populations, increasing affluence, mainstream audiences, etc.). &#13;
&#13;
Given increased LGBT+ acceptance in the US since the mid-century (generally speaking) and the advent of social media and dating apps, some question whether the gayborhood is dying or even necessary anymore. I argue that the gayborhood as a framework, though evolving, persists in its relevance due to its core commitment to LGBT+ community building. And its resilience is reflective of the historic legacy of the LGBT+ community itself.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Segment Unseen Tasks In-Context</title>
<link href="https://hdl.handle.net/1721.1/156117" rel="alternate"/>
<author>
<name>Butoi, Victor Ion</name>
</author>
<id>https://hdl.handle.net/1721.1/156117</id>
<updated>2024-08-15T03:52:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Learning to Segment Unseen Tasks In-Context
Butoi, Victor Ion
While deep learning models have become the predominant method for medical image segmentation, they are typically incapable of generalizing to new segmentation tasks that involve new anatomies, image modalities, or labels. For a new segmentation task, researchers will often have to prepare new task-specific models. This process is time-consuming and poses a substantial barrier for clinical researchers who often lack the resources and expertise to train neural networks. &#13;
&#13;
We present UniverSeg, an in-context learning method for solving unseen medical segmentation tasks. Given a new image to segment, and a set of image-label pairs that define the task, UniverSeg can produce accurate segmentation predictions with no additional training. We demonstrate that UniverSeg substantially outperforms existing methods in solving unseen segmentation tasks, and thoroughly analyze important aspects of our proposed data, training, and inference paradigms.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Imperfect Question of Stadium Development: A Typology of Contemporary Development and Strategies for a Sustainable Future</title>
<link href="https://hdl.handle.net/1721.1/156116" rel="alternate"/>
<author>
<name>Hill, Melissa</name>
</author>
<id>https://hdl.handle.net/1721.1/156116</id>
<updated>2024-08-15T03:01:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Imperfect Question of Stadium Development: A Typology of Contemporary Development and Strategies for a Sustainable Future
Hill, Melissa
This thesis explores the paths forward at the intersection of economic development, public space, and parking. Drawing on OpenStreetMap data and American Community Survey estimates, this project uses GIS analysis to develop a typology of contemporary NFL stadium developments. Using illustrative case studies informed by this analysis, site visits, and pre-existing literature, the thesis evaluates the tradeoffs presented by various approaches to stadium development. Rather than recommend a single path forward, this thesis provides suggestions for working within the constraints of local landscapes to develop strategies to best support the public good in each context.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Fiber Source for Label-free Nonlinear Microscopy</title>
<link href="https://hdl.handle.net/1721.1/156115" rel="alternate"/>
<author>
<name>Cao, Honghao</name>
</author>
<id>https://hdl.handle.net/1721.1/156115</id>
<updated>2024-08-15T03:16:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Adaptive Fiber Source for Label-free Nonlinear Microscopy
Cao, Honghao
Nonlinear microscopy enables label-free visualization of biological processes in live samples at sub-cellular spatial resolution and sub-millimeter penetration depth, allowing in-vivo study of the mechanisms underlying several cellular functions. Due to the low absorption cross-section of the two-photon and three-photon excitation processes, especially for endogenous fluorophores, high-peak-power broadband laser sources are important for improving signal generation efficiency in nonlinear microscopy. Multimode fibers (MMFs) are regaining interest as light sources due to their high-dimensional spatiotemporal nonlinear dynamics and scalability for high power. MMF sources with effective control of nonlinear processes would enable new possibilities in many areas, such as high-power fiber lasers, biomedical imaging, and chemical sensing, as well as a platform for investigation of intriguing physics phenomena. In this thesis, we present a simple yet effective way of controlling nonlinear effects at high peak power levels in MMFs. This is achieved by leveraging not only the spatial but also the temporal degrees of freedom during multimodal nonlinear pulse propagation, using a programmable fiber shaper that introduces time-dependent disorders. We achieve high spectral-temporal-spatial tunability in the output laser pulses of the MMF, resulting in a broadband high-peak-power source. We further demonstrate its potential as a laser source for nonlinear microscopy through widely tunable two-photon and three-photon excitation. This approach provides possibilities for technological advances in a wide range of fields, such as nonlinear optics, biomedical imaging, and spectroscopy.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Encouraging reuse in rural Italy: A case study implementing new frameworks to collect local data and understand feasible reprogramming strategies in Guadagnolo</title>
<link href="https://hdl.handle.net/1721.1/156114" rel="alternate"/>
<author>
<name>Consilvio, Annabel</name>
</author>
<id>https://hdl.handle.net/1721.1/156114</id>
<updated>2024-08-15T03:27:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Encouraging reuse in rural Italy: A case study implementing new frameworks to collect local data and understand feasible reprogramming strategies in Guadagnolo
Consilvio, Annabel
This thesis presents a new survey methodology for collecting data on occupancy, building typologies, and building conditions in small, depopulating towns in rural Italy. The survey methodology is split into two phases: one in which granular data is gathered through a series of visual surveys and a second in which this data is analyzed through a series of assessments aimed at identifying the most strategic buildings for reuse to support economic development. With one in three Italian municipalities losing population since 1951, this new framework aims to equip municipalities with critical data that can inform strategic reprogramming efforts and strengthen funding applications (Serico Gruppo Cresme, 2008). The research is built on the prior efforts and knowledge of Liminal, the thesis client and an organization in Italy working to build capacity within these rural communities. By providing tools like this framework, Liminal empowers residents to envision new futures and supports municipalities in realizing these visions. &#13;
&#13;
This approach was tested in Guadagnolo, a rapidly depopulating town in the Monti Prenestini region of Lazio, which witnessed a 50% population decline in just two decades (Progetto - Campo Base Guadagnolo, 2022). Through this methodology, a robust and granular spatial database model of Guadagnolo’s built fabric was constructed, permitting analysis of possible sites of reuse to support a university satellite campus and develop a long-term tourism destination. The assessment methodology identified several key buildings for the town to consider adapting to support these two reuse scenarios, while also generating extensive data that the town can utilize in a variety of future initiatives and funding applications. Ultimately, this thesis endeavors to support rural Italian communities by providing a data-driven framework that can unlock funding opportunities and initiate strategic planning efforts, providing a path forward that protects the cultural and ecological richness of these small towns.&#13;
&#13;
Keywords: rural development, strategic reuse, economic revitalization, survey methodology
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Retrofitting Affordable Multifamily Housing: A Survey of Landlords in Cincinnati, Ohio</title>
<link href="https://hdl.handle.net/1721.1/156113" rel="alternate"/>
<author>
<name>Fang, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/156113</id>
<updated>2024-08-15T03:03:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Retrofitting Affordable Multifamily Housing: A Survey of Landlords in Cincinnati, Ohio
Fang, Emily
Building energy efficiency retrofits are a crucial part of decarbonizing the building sector and decreasing residential energy burden—low-income households, renters, and residents of multifamily buildings disproportionately bear this burden. This study serves as a case study of WarmUp Cincy (2020-2022), a local government-led pilot program that provided grants to landlords of affordable multifamily housing to help implement energy efficiency retrofits. In partnership with the City of Cincinnati Office of Environment &amp; Sustainability, I assess results from the pilot program, develop and analyze a survey of affordable housing landlords in Cincinnati, and conduct interviews with key energy stakeholders in the region to answer: 1) what are landlords’ current priorities and understandings of the cost and energy savings of specific upgrades, and 2) what energy efficiency program elements will be most effective in serving these buildings? As the City transitions towards a second phase of WarmUp Cincy to better address its climate and energy equity goals, this study seeks to provide insight into how to approach key program design questions, such as selecting a program administrator and determining a list of eligible technologies. In addition, this study explores WarmUp Cincy’s synergies with other federal and state funding programs, WarmUp Cincy’s continuing role in addressing local planning challenges of outreach and workforce development, and the importance of program evaluation as building technologies, funding opportunities, and community education change over time.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Civic Design Room: Conversations on What It Looks Like To Operationalize Design in Government? With Community, Within Government, and Your Team</title>
<link href="https://hdl.handle.net/1721.1/156112" rel="alternate"/>
<author>
<name>N'Diaye, Mariama</name>
</author>
<id>https://hdl.handle.net/1721.1/156112</id>
<updated>2024-08-15T03:06:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Civic Design Room: Conversations on What It Looks Like To Operationalize Design in Government? With Community, Within Government, and Your Team
N'Diaye, Mariama
The Civic Design Room is a podcast and media thesis project that engages designers, primarily in the US, on how they have operationalized design methodologies in the public sector. This podcast is a series of thirteen forty-five-minute to one-hour episodes, each featuring a different guest. These guests range from current or former US federal and local government employees to urban planners and designers working in local US governments and researchers based internationally in Colombia, the United Kingdom, and Finland.&#13;
Each episode covers similar topics of design, politics, and the management skills needed to foster an innovative team in government. This thesis calls for a new mode of design - Caring Systems Design, which seeks to infuse principles of care ethics - attentiveness, responsiveness, competence, and responsibility - throughout the multiple, nested levels of government work - from the individual and team level to cross-departmental collaboration, to engaging with external communities and stakeholders. The project will live on Spotify, and the notes of each episode include supportive materials for those listening. The written thesis represents the breadth of my research, including the methods and processes used to create the podcast, the findings from each podcast, and the implications of my findings and strategies in urban planning and the public sector.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A New Era for the Old Dominion: Strategies for the Virginia State Government to Lead an Equitable &amp; Ambitious Energy Transition</title>
<link href="https://hdl.handle.net/1721.1/156111" rel="alternate"/>
<author>
<name>Jaye, Dyanna</name>
</author>
<id>https://hdl.handle.net/1721.1/156111</id>
<updated>2024-08-15T03:39:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A New Era for the Old Dominion: Strategies for the Virginia State Government to Lead an Equitable &amp; Ambitious Energy Transition
Jaye, Dyanna
In August 2022, President Biden signed the Inflation Reduction Act, which mobilizes nearly $1 trillion in federal investment, primarily for decarbonization and a clean energy transition. Alongside other federal legislation, this marks a transformation in the approach to economic policy in the United States: it is a move away from neoclassical economic policy and a move towards mission-driven industrial policy. In this era of transition, the emerging clean energy industry faces a particular legal regime where much of the authority over and regulation of our energy system happens at the state level. Recognizing this dynamic, this thesis is a case study on the Virginia state government and aims to analyze and identify effective policy tools to reduce GHG emissions at the state level, including transitioning away from fossil fuel power generation, increasing energy efficiency and load flexibility, and stimulating clean energy generation. This case study is structured in three parts: (1) an institutional analysis and energy profile of Virginia, (2) a history and analysis of energy regulation in Virginia, and (3) a climate and energy policy analysis. I conclude with five recommendations for state leadership to support the emerging clean energy industry and a climate transition that prioritizes health, wellbeing, and economic gains for Virginia communities.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Community Transportation Acts Archive</title>
<link href="https://hdl.handle.net/1721.1/156110" rel="alternate"/>
<author>
<name>Oliver, Elyse</name>
</author>
<id>https://hdl.handle.net/1721.1/156110</id>
<updated>2024-08-15T03:28:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Community Transportation Acts Archive
Oliver, Elyse
The Community Transportation Acts Archive is a planning and design approach for advancing transportation planning grounded in the experiences of individuals with mobility challenges and/or transit reliance. “Community transportation acts” are actions taken by residents—whether within systems or informally—to address mobility needs not met by existing policy and service. In the pages that follow, I introduce the process for building a place-specific community transportation acts archive for the Greater Portland region of Maine and outline the value that such an archive provides to the public and transportation planners. &#13;
&#13;
The Greater Portland Community Transportation Acts Archive (the Archive) draws attention to residents’ challenges in transportation, their impact and influence on transportation planning, and their visions for transportation in the Greater Portland region of Maine. The Archive comes together through reparative archiving, an archival approach grounded in critical studies that focuses on the records and stories of individuals and groups with underrepresented perspectives in existing historical narratives. Reparative archiving draws from Black studies, Indigenous studies, and queer studies, among other fields, and encourages expansive and inclusive record collection and interpretation practices. I hypothesize that engagement with the Greater Portland Community Transportation Acts Archive—by the public and planners—will contribute to novel transportation initiatives in and around Portland, ME that better support mobility for those with the greatest transportation barriers. This thesis documents the first test of this hypothesis—my own engagement, as a planner, with the Archive—and presents a prototype archival product ready for further testing as part of upcoming Greater Portland planning efforts.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Message From The Grassroots: Exploring Black liberation in grassroots economic practice and planning in the Americas</title>
<link href="https://hdl.handle.net/1721.1/156109" rel="alternate"/>
<author>
<name>Cole, Austin K.</name>
</author>
<id>https://hdl.handle.net/1721.1/156109</id>
<updated>2024-08-15T03:33:40Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Message From The Grassroots: Exploring Black liberation in grassroots economic practice and planning in the Americas
Cole, Austin K.
Building from theories of underdevelopment and economic warfare on Black peoples (Africans and Afrodescendants) globally, this study brings into the fields of urban planning and local community &amp; economic development the analytic and urgency of the Black Radical Peace Tradition. This involves an exploration of alternatives to traditional paradigms of economic development and planning that might help reclaim and reconstitute “the economy” towards practices and efforts that serve human life and dignity, popular sovereignty, connection to the Earth, and self-determinative capacities of African peoples throughout the Americas. Intent on contributing toward an anti-colonial praxis in this field, the following study is in part an application of the lens of Black political economy to geographic and urban challenges. It is also an exploration of grassroots people-centered efforts, both operating within the spatial-political confines of empire and those revolutionary programs outside of its physical bounds. And finally, it is a reflection on the possible purposes and roles of the “intellectual” and “planner” in supporting the liberation of Black peoples in the Americas, as part of the program of the liberation of all peoples globally.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Mirror Project: A Portrait of Urban Inequality</title>
<link href="https://hdl.handle.net/1721.1/156108" rel="alternate"/>
<author>
<name>Phya, Nolen</name>
</author>
<id>https://hdl.handle.net/1721.1/156108</id>
<updated>2024-08-15T03:46:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Mirror Project: A Portrait of Urban Inequality
Phya, Nolen
Photography has historically played a vital role in highlighting urban inequality, as seen in the work of Jacob Riis documenting late 19th-century New York City. Today, amidst ongoing gentrification, traditional mapping methods often fall short in capturing the lived experiences of communities. To address this, my thesis proposes using photography to document contemporary urban inequality in New York City. By engaging native or local New Yorker photographers and providing them with free black-and-white film rolls, the project aims to create an authentic archive of images reflecting the realities of gentrification. This approach not only offers a nuanced understanding of the phenomenon but also serves as a catalyst for empathy, dialogue, and action among policymakers, activists, and the broader public. Ultimately, the project seeks to empower communities and contribute to more equitable urban development.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On Power: How Colombia’s National Oil Company Can Support the Country’s Energy Transition</title>
<link href="https://hdl.handle.net/1721.1/156107" rel="alternate"/>
<author>
<name>Beron, David</name>
</author>
<id>https://hdl.handle.net/1721.1/156107</id>
<updated>2024-08-15T03:39:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">On Power: How Colombia’s National Oil Company Can Support the Country’s Energy Transition
Beron, David
This thesis is organized in two parts. Part I argues that national oil companies, which now own and produce most of the world’s oil, will be protagonists in the transition to low-carbon energy sources. The pathways that these companies take will be distinct from country to country and will define how the transition plays out globally. Part II sites my analysis in Colombia. It is an exercise in memory, reflection, and imagination based on a series of conversations with current and former decisionmakers in the country’s energy sector. I show how the power supply crisis of 1992 revealed inseparable links between climate, energy, capital, and policy. I argue that growing and greening the power sector will require stronger central planning and favoring power purchase agreements over spot transactions. And I envision a country in which Colombia’s state-owned Ecopetrol is no longer an oil company. It contributes to a sovereign wealth fund for the country’s transition, leads R&amp;D efforts, and has become an important player in power transmission and generation. Ecopetrol sells green hydrogen — instead of fossil fuels — to Europe and Asia. It has shifted from geology to geography, from offshore drilling to offshore wind. Is this country inherently different from twenty-first century Colombia?
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How post-pandemic public transit journeys can inform employers’ return to office strategies in Boston, MA and Washington, DC</title>
<link href="https://hdl.handle.net/1721.1/156106" rel="alternate"/>
<author>
<name>Uzoh, Nwakaego</name>
</author>
<id>https://hdl.handle.net/1721.1/156106</id>
<updated>2024-08-15T03:25:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How post-pandemic public transit journeys can inform employers’ return to office strategies in Boston, MA and Washington, DC
Uzoh, Nwakaego
This research focuses on the changes in public transit in Boston, Massachusetts, and Washington, D.C., against the backdrop of companies facing challenges in bringing employees back to the office. These challenges include rolling back official in-office dates due to resistance from remote-capable employees, amid significant shifts catalyzed by the pandemic and layered upon decades of transit disinvestment in the United States. The study builds on previous research on work-from-home trends among white-collar workers, leading to the central question of how employers in dense urban areas can manage a return to the office amidst fluctuating public transit service levels and changes in job accessibility.&#13;
&#13;
To address this question, the research analyzes housing affordability and public transit service levels in Boston and D.C. for three design and development companies. It aims to determine the potential success rates for returning to the office for two specific job roles. The findings suggest that an income-informed approach to returning to the office, coupled with strategies to align employee preferences with best practices, can be beneficial.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reflective Planning and Design for Community Resilience: A Case Study in a Vulnerable and Shrinking Japanese Village</title>
<link href="https://hdl.handle.net/1721.1/156104" rel="alternate"/>
<author>
<name>Okai-Yabe, Keiko</name>
</author>
<id>https://hdl.handle.net/1721.1/156104</id>
<updated>2024-08-15T03:29:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Reflective Planning and Design for Community Resilience: A Case Study in a Vulnerable and Shrinking Japanese Village
Okai-Yabe, Keiko
This study investigates resilience strategies in rural Japanese areas characterized by population decline, demographic aging, and heightened disaster risk. I particularly examine the approach of relocating communities to safer, higher ground in regions prone to tsunamis. The focus is on Omosu Village in Numazu City, Japan, which was the first community to attempt relocation through the Disaster Prevention Collective Relocation Promotion Project (DPCRPP) in preparation for the anticipated Nankai Trough earthquake and tsunami, expected to occur within the next 20 years with a high probability. The methodology involved developing planning and design proposals, presenting these to officials in Numazu City for feedback, and revising the proposals accordingly, embodying a reflective practice approach. Due to the sensitivity of the subject, direct discussions with residents were not possible; instead, I analyzed recorded materials from a 2012-2013 workshop on hill relocation and responses from 106 residents to a post-workshop questionnaire to gather insights and integrate them into my planning and design.&#13;
&#13;
The findings highlight a disconnect between areas supported by Japan’s Location Optimization Plan (LOP) and Small Hub Development (SHD), which complicates relocation efforts for villages like Omosu, situated in these policy gaps. This study offers policy-related recommendations for addressing the challenges faced by shrinking settlements caught in these gaps and demonstrates the potential of village design to incorporate long-term planning over the next two decades, addressing both disaster prevention and everyday livelihood sustainability. The results underscore the viability of relocations to higher ground previously considered impossible and outline the necessary steps to accomplish them. Furthermore, the study emphasizes the significance of a holistic planning and design approach that safeguards residents’ lives and invigorates community spirit in rural villages enriched with natural resources and cultural heritage.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Jaywalking Index: Visual and Socio-demographic Patterns in London</title>
<link href="https://hdl.handle.net/1721.1/156103" rel="alternate"/>
<author>
<name>de Castro Filho, Fabio Marcel</name>
</author>
<id>https://hdl.handle.net/1721.1/156103</id>
<updated>2024-08-15T03:56:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Jaywalking Index: Visual and Socio-demographic Patterns in London
de Castro Filho, Fabio Marcel
This quantitative research delves into the intricate dynamics of pedestrian safety, urban design, and behavioral analysis within the overarching framework of Vision Zero principles in London, UK. With a specific emphasis on comprehending jaywalking behavior, this study investigates the sociodemographic characteristics of jaywalkers and examines the correlation between urban design features and jaywalking crashes. Employing GIS, the research analyzes 25,732 pedestrian crashes and utilizes Visual Artificial Intelligence to segment 280,000 images obtained from Google Street View. Key findings encompass the sociodemographic profiles of jaywalkers and the formulation of a jaywalking index, which serves as an initial tool for identifying areas warranting further investigation in urban design. This index aids in pinpointing regions with a heightened probability of pedestrian crashes, offering valuable insights for proactive urban planning and safety enhancement measures.&#13;
&#13;
Keywords: Urban Design; Urban Science; Mobility; Visual Artificial Intelligence; Computer Vision.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing In-Garage Charging Schedules to Maximize Electrified Mileage for Electric Bus Fleets</title>
<link href="https://hdl.handle.net/1721.1/156102" rel="alternate"/>
<author>
<name>Wu, Yen-Chu</name>
</author>
<id>https://hdl.handle.net/1721.1/156102</id>
<updated>2024-08-15T03:26:39Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Optimizing In-Garage Charging Schedules to Maximize Electrified Mileage for Electric Bus Fleets
Wu, Yen-Chu
Many transit agencies across the US are working towards a zero-emission electric bus fleet in order to reduce petroleum use and carbon emissions. This thesis presents a data-driven approach to optimize short-term in-garage charging schedules of electric buses, aiming to enhance operational efficiency in public transportation systems. We estimate the energy required for each trip using historical data on temperature, ridership, and speeds. The proposed mixed-integer programming (MIP) model maximizes total electrified mileage while considering constraints related to charger configuration, block schedules, energy requirements, and battery capacity. To solve this complex problem in a reasonable timeframe, we further decompose the problem into two phases. The initial phase involves determining which blocks should be serviced by the same bus and establishing a schedule that covers each block exactly once. The subsequent phase focuses on identifying the optimal in-garage charging schedule and deciding which blocks should be electrified, considering the schedule from the first phase. The model’s effectiveness is demonstrated through a case study using real-world data from the Chicago Transit Authority (CTA). Future scenarios and sensitivity analyses, considering variations in available electric buses, charger configurations, and risk tolerance in estimated energy requirements for each block, offer comprehensive and valuable insights for the adoption of electric buses and chargers. Key findings include: (a) slow chargers may be more cost-effective than fast ones, given recent block schedules and cost estimates, (b) customizing charging strategies maximizes electrified distance but poses operational challenges, (c) agencies should assess the trade-offs between the electrifiable distances and the risk of running below specified state of charge (SOC) thresholds, (d) lower battery degradation may reduce the required number of buses for the same electrified mileage, and (e) seasonal analyses reveal that significantly more miles can be electrified during summer compared to winter due to the lower energy required for trips on warmer days.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating Shared Vulnerabilities: Climate Adaptation on the Split Island of Saint Martin</title>
<link href="https://hdl.handle.net/1721.1/156101" rel="alternate"/>
<author>
<name>Flamme, Emilie</name>
</author>
<id>https://hdl.handle.net/1721.1/156101</id>
<updated>2024-08-15T03:46:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Navigating Shared Vulnerabilities: Climate Adaptation on the Split Island of Saint Martin
Flamme, Emilie
The island of Saint Martin, on the windward side of the Caribbean Sea, is a volcanic island whose low-lying areas are at high risk of flooding and storm surges as a result of exposure to increasingly severe hurricanes, a risk compounded by sea level rise. Saint Martin’s mountainous landscape is split between two governments. The first is the Collectivity of Saint-Martin, a semi-autonomous region of France. The second is the Government of Sint Maarten, an independent island government within the Kingdom of the Netherlands. This thesis examines how both governments on the island of Saint Martin are working to develop climate adaptation strategies within a context of existing chronic exposure to extreme climate risks. Given the administrative split and the severity of climate change, how can an island with two governments and two different approaches to climate change adapt to common future climate changes? &#13;
&#13;
The work first traces how the construction of climate adaptation expertise is shaped by perceptual biases that originate from outside the Caribbean region, often in countries like the Netherlands and France. From this engagement with the construction of expertise, Chapter 1 traces how hurricanes have shaped how climate and weather events are understood and confronted by islanders and argues that future hurricane models articulate changes to everyday climate conditions that stand to challenge longstanding practices of resilience in the face of extreme climate events. Chapter 2 examines current climate adaptation strategies implemented in the Collectivity of Saint-Martin, and underscores the relationship between risk perception, policy formulation, and historical context by highlighting the need for locally-adapted strategies. Chapter 3 examines how the Government of Sint Maarten attempts to address climate change and climate adaptation and considers avenues for community-centered risk assessment and adaptation planning. Chapter 4 engages the limitations of both strategies in Saint-Martin and Sint Maarten, and proposes an alternative vision for climate adaptation given the shared vulnerabilities that exist for both sides of Saint Martin.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sustainable Homes for All: Designing a Clean Energy Incentive for Boston’s Section 8 HCV Landlords to Improve Tenant Quality of Life</title>
<link href="https://hdl.handle.net/1721.1/156100" rel="alternate"/>
<author>
<name>Houston-Read, Rebecca</name>
</author>
<id>https://hdl.handle.net/1721.1/156100</id>
<updated>2024-08-15T03:08:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Sustainable Homes for All: Designing a Clean Energy Incentive for Boston’s Section 8 HCV Landlords to Improve Tenant Quality of Life
Houston-Read, Rebecca
There is an urgent need for decarbonization in the residential sector given housing's significant contributions to greenhouse gas emissions. Low-income housing is particularly energy inefficient, contributing to harmful environmental outcomes and health and financial challenges for tenants. The Boston Housing Authority (BHA) can play a central role in residential decarbonization for low-income residents because it owns and controls a substantial portion of the housing stock. While there are significant efforts underway to decarbonize Boston’s public housing stock, there are currently no initiatives aimed at decarbonization in the Section 8 Program. Thus, the BHA can broaden its influence beyond the public sector and incentivize residential decarbonization in the private sector through its relationships with over 15,000 landlords in the Section 8 HCV Program. This thesis develops the BHA Retrofit Rewards (BRR) Program: a program that uses a monthly ‘rent boost’ to financially incentivize Section 8 Housing Choice Voucher (HCV) landlords to implement clean energy upgrades in their units. This BRR Program was created through a two-step process. First, a comparative analysis of similar US programs identified the Atlanta Housing Authority's Energy Efficiency Rent Boost Program (EERB) as viable for replication in Boston. Second, a feasibility analysis was conducted to determine how the BHA’s adaptation of the EERB Program would be financed, administered, and redesigned to fit the Boston context. The results of this analysis outline a framework for a BRR Program financed by leveraging regulatory flexibility that enables higher payments to landlords within federal limits. This thesis contributes to ongoing equity-focused decarbonization initiatives at the BHA and offers a roadmap for public housing authorities and cities more broadly seeking to address the dual challenges of climate change and housing inequity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Databases for healing and justice: Co-design with a grassroots, Indigenous organization</title>
<link href="https://hdl.handle.net/1721.1/156099" rel="alternate"/>
<author>
<name>Shumway, Hannah</name>
</author>
<id>https://hdl.handle.net/1721.1/156099</id>
<updated>2024-08-15T03:25:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Databases for healing and justice: Co-design with a grassroots, Indigenous organization
Shumway, Hannah
This inquiry presents a grounded case study of a partnership between the Data + Feminism Lab at MIT and Waking Women Healing Institute, a grassroots, Indigenous organization. The partners co-design a case documentation and story gathering database that enables healing and justice for Indigenous women and people. The project reveals: 1) the vital role of trust-building, openness, and constant iteration in co-design practice, 2) the importance of designing for security in aligning the database with a need for Indigenous Data Sovereignty, 3) the practical trade-offs that come with choosing to use and configure commercial off-the-shelf software as opposed to using free and open source software or building custom software, and 4) how other institutional actors, like urban planners, can learn from this collaboration by centering trust-building, by welcoming ongoing revision and feedback rather than just ‘going through the motions’ of community engagement, and by taking tangible steps to enable institutional accountability to grassroots groups. Throughout, this thesis underscores the ways that a collaborative decision making process between institutional and grassroots partners allows the team to prioritize and operationalize grassroots needs and desires in a way that enables a useful technology solution for healing, harm reduction, and justice.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inference of single cell profiles from histology stains with the Single-Cell omics from Histology Analysis Framework (SCHAF)</title>
<link href="https://hdl.handle.net/1721.1/156098" rel="alternate"/>
<author>
<name>Comiter, Charles</name>
</author>
<id>https://hdl.handle.net/1721.1/156098</id>
<updated>2024-08-15T03:55:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inference of single cell profiles from histology stains with the Single-Cell omics from Histology Analysis Framework (SCHAF)
Comiter, Charles
Tissue biology involves an intricate balance between cell-intrinsic processes and interactions between cells organized in specific spatial patterns, which can be respectively captured by single-cell profiling methods, such as single-cell RNA-seq (scRNA-seq), and histology imaging data, such as Hematoxylin-and-Eosin (H&amp;E) stains. While single-cell profiles provide rich molecular information, they can be challenging to collect routinely and do not have spatial resolution. Conversely, histological H&amp;E assays have been a cornerstone of tissue pathology for decades, but do not directly report on molecular details, although the observed structure they capture arises from molecules and cells. Here, we develop SCHAF (Single-Cell omics from Histology Analysis Framework), a deep learning framework to generate a tissue sample’s spatially-resolved single-cell omics dataset from its H&amp;E histology image. We demonstrate SCHAF on healthy and diseased—primarily metastatic breast cancer—tissue, training with matched samples analyzed by spatial transcriptomics, sc/snRNA-seq, and H&amp;E staining. SCHAF generated appropriate single-cell profiles from histology images in test data, related them spatially, and compared well to ground-truth scRNA-seq, expert pathologist annotations, and direct MERFISH measurements. SCHAF opens the way to next-generation H&amp;E2.0 analyses and an integrated understanding of cell and tissue biology in health and disease.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Activation frame memory management for the Monsoon processor</title>
<link href="https://hdl.handle.net/1721.1/156060" rel="alternate"/>
<author>
<name>Chiou, Derek.</name>
</author>
<id>https://hdl.handle.net/1721.1/156060</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Activation frame memory management for the Monsoon processor
Chiou, Derek.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1992; Includes bibliographical references (p. 87-89).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of the ability of RELAP5/MOD3 to model natural circulation of high pressure SF6 in the Westinghouse 1/7 scale PWR experimental facility</title>
<link href="https://hdl.handle.net/1721.1/156058" rel="alternate"/>
<author>
<name>Chmielewski, Stefan V.</name>
</author>
<id>https://hdl.handle.net/1721.1/156058</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Investigation of the ability of RELAP5/MOD3 to model natural circulation of high pressure SF6 in the Westinghouse 1/7 scale PWR experimental facility
Chmielewski, Stefan V.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1992; Includes bibliographical references (leaf 45).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and calibration of a simplified heat transmission apparatus for textiles.</title>
<link href="https://hdl.handle.net/1721.1/156057" rel="alternate"/>
<author>
<name>Hodara, Leon Ralph.</name>
</author>
<id>https://hdl.handle.net/1721.1/156057</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Design and calibration of a simplified heat transmission apparatus for textiles.
Hodara, Leon Ralph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1945; Bibliography: leaves 70-71.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strength analysis of a sandwich shell structure subjected to hydrostatic loading</title>
<link href="https://hdl.handle.net/1721.1/156056" rel="alternate"/>
<author>
<name>Cho, Wonjoon.</name>
</author>
<id>https://hdl.handle.net/1721.1/156056</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Strength analysis of a sandwich shell structure subjected to hydrostatic loading
Cho, Wonjoon.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1992; Includes bibliographical references (leaves 68-69).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Economic aspects of materials substitution in horizontal automotive body panels : the issue of SMC surface finish</title>
<link href="https://hdl.handle.net/1721.1/156054" rel="alternate"/>
<author>
<name>Chen, Andrew Chinshun.</name>
</author>
<id>https://hdl.handle.net/1721.1/156054</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1991-01-01T00:00:00Z</published>
<summary type="text">Economic aspects of materials substitution in horizontal automotive body panels : the issue of SMC surface finish
Chen, Andrew Chinshun.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1991; Includes bibliographical references (leaves 83-84).
</summary>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Area minimization techniques of single-output functions</title>
<link href="https://hdl.handle.net/1721.1/156053" rel="alternate"/>
<author>
<name>Chen, Curtis S.</name>
</author>
<id>https://hdl.handle.net/1721.1/156053</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1991-01-01T00:00:00Z</published>
<summary type="text">Area minimization techniques of single-output functions
Chen, Curtis S.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1991; Includes bibliographical references (leaves 41-42).
</summary>
<dc:date>1991-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The impact of decision support systems on an organization and its planning</title>
<link href="https://hdl.handle.net/1721.1/156051" rel="alternate"/>
<author>
<name>Matteo, Thomas P.</name>
</author>
<id>https://hdl.handle.net/1721.1/156051</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">The impact of decision support systems on an organization and its planning
Matteo, Thomas P.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1984; Bibliography: leaf 66.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Mental Health in the Corporate Sphere: Evaluating Trends, Tools, and Impacts on Organizational Dynamics</title>
<link href="https://hdl.handle.net/1721.1/156050" rel="alternate"/>
<author>
<name>Zou, Yangluyao (Maria)</name>
</author>
<id>https://hdl.handle.net/1721.1/156050</id>
<updated>2024-08-13T03:08:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Digital Mental Health in the Corporate Sphere: Evaluating Trends, Tools, and Impacts on Organizational Dynamics
Zou, Yangluyao (Maria)
The escalating prevalence of mental health issues in the corporate world, exacerbated by the COVID-19 pandemic, has necessitated a reevaluation of traditional wellness programs. This thesis critically examines the integration, effectiveness, and organizational impact of digital mental health tools within corporate environments, with a particular focus on improving employee wellbeing and optimizing organizational dynamics. Grounded in a mixed-methods approach, this research encompasses an extensive literature review and 31 semi-structured interviews with a diverse cohort of stakeholders, including human resources managers, corporate executives, mental health professionals, and employees across various sectors. This methodology facilitated a deep exploration of the perceptions, challenges, and outcomes associated with the adoption of digital tools such as ecological momentary assessments, wearable biosensors, and virtual reality for emotional regulation. Key findings reveal that digital interventions, when appropriately integrated, offer substantial benefits over traditional wellness programs by providing timely, personalized, and data-driven mental health support. These technologies enable continuous monitoring and management of employee stress levels and foster a proactive approach to mental health care. Notably, the success of these digital tools is intrinsically linked to organizational changes, such as work redesign strategies that include flexible working conditions, role restructuring, and enhanced workplace social support systems. Moreover, the research highlights several barriers to the effective implementation of digital mental health tools, including cultural resistance to mental health discussions in the workplace, privacy concerns, and the need for significant shifts in organizational policies and practices. Facilitators for successful integration include leadership endorsement, the normalization of mental health conversations, and the strategic alignment of digital tools with organizational health goals. The thesis proposes a comprehensive framework for the effective integration of digital mental health tools within the corporate sector. This framework suggests that true effectiveness is achieved not only through the deployment of advanced technologies but also through fundamental enhancements to the organizational environment that foster an inclusive, supportive, and flexible workplace. This study contributes to academic and practical understandings of how digital innovations can transform corporate mental health strategies. It underscores the need for a synergistic approach that merges technology with significant organizational reforms, advocating for a holistic model that not only addresses immediate mental health needs but also fosters long-term employee wellbeing and productivity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Rebalancing the Major League Baseball Through Social Investment</title>
<link href="https://hdl.handle.net/1721.1/156049" rel="alternate"/>
<author>
<name>Perrin, Matthieu</name>
</author>
<id>https://hdl.handle.net/1721.1/156049</id>
<updated>2024-08-13T03:58:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Rebalancing the Major League Baseball Through Social Investment
Perrin, Matthieu
This thesis critically examines the enduring economic disparities within Major League Baseball (MLB) that threaten its competitive balance and the sustainability of its operations. Through an extensive data-driven analysis, this research identifies the foundational causes of the current imbalances that disproportionately favor financially robust teams and disadvantage smaller market franchises. The study leverages advanced financial metrics to cluster MLB teams and employs system dynamics modeling to simulate the impacts of existing and novel rebalancing mechanisms. The comprehensive analysis begins by detailing the financial landscape of MLB, highlighting the disparities in revenue streams between teams and their consequences on competitive balance. Using a clustering approach, teams are categorized based on key financial indicators, revealing distinct economic profiles that correlate strongly with on-field success and market presence. This categorization provides a clearer understanding of the disparities contributing to competitive imbalance. Subsequently, the research employs system dynamics to model the interactions between these financial variables and team performance over time. This model serves as a tool for testing various rebalancing strategies, including refined versions of revenue sharing and luxury taxes, which are currently employed by the league but fail to adequately address the root causes of imbalance. The simulations suggest that while these mechanisms have some impact, they are not robust enough in their current forms. To address these shortcomings, the thesis proposes innovative strategies to redistribute financial resources and talent across the league more effectively. These include adjustments to the formulas used for revenue sharing, introducing a more progressive luxury tax system, and implementing minimum spend requirements to prevent underinvestment in team competitiveness. Ultimately, this research argues for a holistic approach to reforming MLB’s economic structures, aiming to ensure a fairer competitive environment and enhancing the league’s viability for the future. By ensuring that all teams, regardless of their financial capabilities, have a genuine opportunity to compete for championships, these proposed measures aim to level the playing field and maintain the integrity and excitement of the league, fostering sustained fan engagement and growth.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Healthcare with GenerativeAI: A Multifaceted Approach to Reliable Medical Information and Innovation</title>
<link href="https://hdl.handle.net/1721.1/156048" rel="alternate"/>
<author>
<name>Bennani, Taieb</name>
</author>
<id>https://hdl.handle.net/1721.1/156048</id>
<updated>2024-08-13T03:04:22Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Advancing Healthcare with GenerativeAI: A Multifaceted Approach to Reliable Medical Information and Innovation
Bennani, Taieb
The rapid advancements in Artificial Intelligence (AI) have transformed the healthcare industry, reshaping the way we approach patient care, medical research, and healthcare delivery. This thesis explores the journey of AI in healthcare, from its early beginnings to the current landscape of highly sophisticated conversational AI systems. We first delve into the myriad applications of GenAI and AI in healthcare, including medical imaging analysis, drug discovery, personalized medicine, conversational chatbots and beyond. Through a series of case studies and real-world examples, the thesis illustrates the successes, challenges, and lessons learned from the implementation of AI in various healthcare settings. As we navigate the uncharted territory of AI in healthcare, we critically examine the ethical implications that arise and the regulations needed. Looking towards the future, we explore the bright promise and cautionary tales that lie ahead. While the continued advancements in technology hold the potential to revolutionize disease prevention, personalize treatments, and unlock new frontiers in medical research, we must remain vigilant about the risks and unintended consequences that may arise. Central to this thesis is the introduction of a novel technology and product we developed to address the reliability of large language models (LLMs) in healthcare: Veracity-Health. By enhancing the trustworthiness and accuracy of these models, this innovative approach aims to facilitate the responsible and confident deployment of AI for the benefit of patients and physicians. This thesis aims to provide a rigorous analysis of the applications, innovations, and ethical considerations surrounding AI in healthcare. By contributing to the ongoing discourse, we hope to shape a future where the power of artificial intelligence is harnessed for the greater good, prioritizing reliability and integrity of GenAI implementation in healthcare.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Price Elasticity of Air Travel Demand Using Econometrics and Machine Learning to Scale Up Sustainable Aviation Fuels</title>
<link href="https://hdl.handle.net/1721.1/156047" rel="alternate"/>
<author>
<name>Membreno, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/156047</id>
<updated>2024-08-13T03:49:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Price Elasticity of Air Travel Demand Using Econometrics and&#13;
Machine Learning to Scale Up Sustainable Aviation Fuels
Membreno, Mark
This study seeks to estimate the price elasticity of air travel demand. These insights will be used to integrate Sustainable Aviation Fuels (SAF) strategically to aid in decarbonizing the aviation industry. Econometric and machine learning models were applied to historical air travel data to examine how airfare prices influence travel demand, focusing on the economy and business passenger segments. Different route segmentations were explored to gather insights into how price affects travel demand in each route segment. The models consider predictors such as GDP, oil prices, population, time of year, and other socio-economic variables to predict passenger count. The econometric models were fitted to data prior to COVID-19, since passenger behavior before the disruption is a better indicator of travel today. Two sets of machine learning models were trained, one using data before COVID-19 and one using the full available time frame from 2016 to 2023. Both sets of models performed well, with average R2 values of 0.95 for economy passengers and 0.87 for business passengers. The 2SLS instrumental variable (IV), oil price, proved weak: for most of the fitted models, the IV’s coefficients do not have a significant relationship with the endogenous variable of price in the first stage. The price elasticity values in this study show how passenger count responds to a 1% increase in airfare price; because the data are log-transformed, the econometric models’ fitted coefficients can be interpreted directly as price elasticities. The business passenger segment’s price elasticity values ranged between 0 and -1%, indicating that these passengers are less price sensitive, whether because their travel is necessary or because they have higher incomes. However, the price elasticity for economy passengers was centered around 0 and was even positive in some route segments. This is counterintuitive, as economy passengers are typically more price sensitive than business passengers, which would correspond to price elasticity values below -1%. Future recommendations to improve the models’ estimates of price elasticity include refining the fixed effects applied to the data set and more granular data exploration, which can yield more accurate predictors of the relationship between price and travel demand.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shutdown Dose Rate Modeling for Radiation Requirements Development and Design Trend Analysis in the ARC Fusion Device</title>
<link href="https://hdl.handle.net/1721.1/156046" rel="alternate"/>
<author>
<name>Murphy, Daniel T.</name>
</author>
<id>https://hdl.handle.net/1721.1/156046</id>
<updated>2024-08-13T03:19:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Shutdown Dose Rate Modeling for Radiation Requirements Development and Design Trend Analysis in the ARC Fusion Device
Murphy, Daniel T.
To achieve commercial viability, Commonwealth Fusion Systems’ ARC device must maximize its availability to produce power, thus demanding a rapid maintenance process to replace radiation-damaged components. Designing robotic systems to operate in this radiation environment requires understanding the expected radiation levels and how design decisions impact those levels. This thesis uses the Rigorous Two-Step (R2S) methodology to scope the radiation environment and provide data for those design trade-offs that must be considered in future ARC design iterations. The first trend is Vanadium’s lower dose rate than Eurofer as a Vacuum Vessel and Blanket Tank material in all configurations, making it the preferred candidate from a radiation perspective. Second, the model indicates that the choice of Blanket Tank material contributes non-trivially to the maintenance radiation environment. Third, the trends demonstrate minimal additional reduction in radiation levels from delaying the start of maintenance beyond 14 days after fusion ceases. The final trend shows that the reduction in the radiation field from removing the Blanket Tank together with the Vacuum Vessel warrants future study. Finally, this thesis incorporates historical nuclear robotics experience to establish an iterative process by which to develop robotic radiation requirements and assess the effects of maintenance decisions on ARC-level optimality.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization and Rule-Based Models for Hospital Inventory Management</title>
<link href="https://hdl.handle.net/1721.1/156045" rel="alternate"/>
<author>
<name>Harihara, Caeley Gaw</name>
</author>
<id>https://hdl.handle.net/1721.1/156045</id>
<updated>2024-08-13T03:09:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Optimization and Rule-Based Models for Hospital Inventory Management
Harihara, Caeley Gaw
This thesis shows how optimization, rule-based models, and operational analytics can be used to help manage hospital surgical inventory. The models were created for AITA™, a team under Johnson &amp; Johnson’s Ethicon subsidiary. The AITA™ Smart System is an intelligent inventory management solution that stores, organizes, and distributes products via Kiosk, Smart Shelf, and Mobile Hub devices. Every device requires a planogram, or a visual representation of which products to stock and the location of each product. This project focuses on creating models to automatically build and update these planograms. The models presented in this paper have already been adopted by the AITA™ team and have begun to show accuracy and efficiency gains when compared to the current manual process. Model-designed kiosks cover, on average, 7% more historical procedures than hand-made kiosks. Also, model-generated planograms are free from manual product selection and sorting errors. From an efficiency perspective, automatically creating and updating planograms will save the AITA™ team an average of 145 hours annually for every hospital served. These accuracy and efficiency gains will add value across the entire chain of care. The AITA™ team will have more time to grow their business and to develop new features. Meanwhile, providers will save time when managing and retrieving hospital inventory, which will free up more capacity for direct patient care.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovation and Strategy in the Food Industry: Trends, Challenges, and Implications for New Entrants</title>
<link href="https://hdl.handle.net/1721.1/156044" rel="alternate"/>
<author>
<name>Shen, Ting</name>
</author>
<id>https://hdl.handle.net/1721.1/156044</id>
<updated>2024-08-13T03:35:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Innovation and Strategy in the Food Industry: Trends, Challenges, and Implications for New Entrants
Shen, Ting
The food industry plays a vital role in human nutrition and health but also presents significant challenges for entrepreneurs due to its highly competitive nature. This thesis aims to provide valuable insights and strategies for new entrants in the food sector by conducting extensive market research to uncover current challenges and future trends, analyzing case studies of both successful and failed food startups using the Entrepreneurial Strategy Framework (Compass &amp; Canvas), and interviewing founders of food companies to gain further insights. The framework is systematically applied to each case study, examining various aspects such as customer identification, technology adoption, competition analysis, organizational structure, value creation and capture hypotheses, and strategic choices, along with the linkages among these elements. By identifying common patterns and deriving insights from these analyses, the thesis offers guidance for food industry startups. The practical application of this research is demonstrated by developing a systematic entrepreneurial strategy using the “Test Two, Choose One” methodology for the author’s Smarnack project, sponsored by the MIT Sandbox Innovation Fund. This example showcases how the framework can be effectively used to guide and advance the progress of a food industry startup. In conclusion, this thesis serves as a comprehensive guide for entrepreneurs seeking to enter and succeed in the competitive food industry. By leveraging market research, case study analysis, and the practical application to the author’s own project, the thesis provides valuable insights and strategies to help new entrants navigate the dynamic and challenging landscape of the food industry.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Digital Customer Journeys: A Comparative Analysis of Knowledge Retrieval Approaches</title>
<link href="https://hdl.handle.net/1721.1/156043" rel="alternate"/>
<author>
<name>Nicola-Antoniu, Teodor</name>
</author>
<id>https://hdl.handle.net/1721.1/156043</id>
<updated>2024-08-13T03:29:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enhancing Digital Customer Journeys: A Comparative Analysis of Knowledge Retrieval Approaches
Nicola-Antoniu, Teodor
Since its early days in 2003, Amazon Web Services (AWS) has evolved rapidly. From a single service created to support its parent company’s e-commerce business, AWS became a leading cloud services provider. As AWS’s product offerings and customer base expanded, its support knowledge base grew proportionally. Customers looking for self-service support need novel ways to navigate such a vast repository of information. This study explores a set of knowledge retrieval architectures designed to surface the most relevant content to customers pursuing self-service solutions within the knowledge base of a large technology company. To recommend the best content that a customer should consume next in their journey, we leverage insights about the content already seen by the customer. Our research encompasses three methodologies: semantic search utilizing large language model embeddings, a frequency-based n-gram model, and a hybrid approach integrating semantic search within a deep neural network framework. Simulations on historical data display a significant percentage of scenarios where customers would be accurately directed to the desired solution. Our findings suggest that organizations can adopt these methodologies internally to enhance digital customer journeys and pave the way for further innovations in this domain. This study addresses the immediate challenges of navigating large-scale company knowledge bases and presents the potential for scalable self-service models.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Whales &amp; Wind: A Case Study on Misinformation About Renewable Energy Development</title>
<link href="https://hdl.handle.net/1721.1/156042" rel="alternate"/>
<author>
<name>Wright, Sanne Eva</name>
</author>
<id>https://hdl.handle.net/1721.1/156042</id>
<updated>2024-08-13T03:12:12Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Whales &amp; Wind: A Case Study on Misinformation About Renewable Energy Development
Wright, Sanne Eva
Mis- and disinformation are being increasingly harnessed to influence public opinion and advance agendas across the globe. They have also greatly impacted renewable energy planning and development. This thesis explores misinformation in the context of offshore wind projects. Despite the clear environmental benefits and necessity of transitioning to renewable energy sources like wind, misinformation poses significant barriers to their development. Building on established research about the spread of misinformation and strategies to counteract it, this study examines the approaches adopted by pro-wind stakeholders—government entities, nonprofits/NGOs, and offshore wind developers—to address misinformation. It specifically focuses on a recent case study involving alleged correlations between offshore wind activities and whale strandings in New Jersey. Through interviews with these stakeholders and an analysis of media representations, this thesis delineates how the misinformation spread—namely through unsound claims, emotional appeals, and the collective power of existing local and national interests against offshore wind. It also examines the effectiveness of different approaches to counter these misinformation campaigns, highlighting the challenges faced by pro-wind stakeholders in ensuring accurate public understanding of the impact of offshore wind development on marine life. The thesis concludes with recommendations for improving strategies to combat misinformation and foster a more transparent and collaborative public discourse on renewable energy development projects. These recommendations aim to be applicable across various planning contexts.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Dynamics of Carbon Capture and Storage</title>
<link href="https://hdl.handle.net/1721.1/156041" rel="alternate"/>
<author>
<name>Wilson, Glenn Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/156041</id>
<updated>2024-08-13T03:01:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">System Dynamics of Carbon Capture and Storage
Wilson, Glenn Andrew
A techno-economic analysis of carbon capture and storage (CCS) is presented using system dynamics. These models fully couple the CCS subsystems of carbon dioxide (CO2) capture, transport, and storage as an integrated system with feedback and control. Simulations are presented for CO2 captured from and stored proximal to a liquefied natural gas (LNG) export facility along the Texas and Louisiana Gulf Coast. The simulations demonstrate that CCS is a dynamic system influenced by disequilibria, such as reservoir injectivity and varying pressures and flow rates, rather than a quasi-static mass balance operation. Key insights reveal that, within the maximum 45Q tax credit value of $85 per ton of CO2 and 12-year qualification period, an LNG-related CCS project at its final investment decision could be economically viable when levelized costs of carbon capture are below about $27 per ton of CO2. This breakeven cost of capture increases to about $36 per ton of CO2 if the 45Q tax credit qualification period is extended from 12 to 20 years. This analysis excludes the impact of any tax strategies utilizing 45Q tax credits. However, economic viability at the project’s initial investment decision is highly dependent on inflation and the time required for permitting, construction, and post-injection monitoring, as well as the CCS operator’s expected returns. Specifically, modest cost escalation or delays in permitting or construction, common phenomena in major capital projects, significantly reduce the economic viability of CCS even with favorable subsidies under the Inflation Reduction Act. This work has implications for policymakers and industry stakeholders: it challenges the assumption of CCS as a standalone solution for carbon abatement across all industry sectors and underscores the necessity for systems-level design and operations to maximize CCS efficiency and economics.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Middle-Mile Inventory Management Policies Through Simulation and Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/156040" rel="alternate"/>
<author>
<name>Robins, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/156040</id>
<updated>2024-08-13T03:18:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enhancing Middle-Mile Inventory Management Policies Through Simulation and Reinforcement Learning
Robins, Matthew
This thesis explores approaches for enhancing middle-mile inventory management within the global supply chain of a large footwear and apparel company, referred to as "Atlas". The first part discusses the design and implementation of a high-performance, heuristic system to determine stock transfer order (STO) decisions between Atlas’s distribution centers. This system employs a greedy algorithm to match supply to demand while respecting resource constraints. As Atlas’s newly procured third-party solution proved insufficient for testing due to slow performance, this work develops an emulator of the production system that achieves a 30x speedup and integrates with Atlas’s end-to-end supply chain simulation framework. This emulator enabled Atlas to efficiently test different configurations and decision making rules on historical and theoretical data, providing valuable insights prior to deploying the production system. The second part investigates the potential of reinforcement learning (RL) to augment or replace Atlas’s middle-mile decision making. A simplified supply chain environment is modeled as a Markov Decision Process, and an RL agent is trained and benchmarked against optimization-based and heuristic approaches. While the RL policy does not outperform these alternatives in the simplified environment, this work provides a foundation for Atlas to explore RL applications as they scale to more realistic supply chain environments.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of thermoplastic composite manufacturing with digital process intelligence</title>
<link href="https://hdl.handle.net/1721.1/156039" rel="alternate"/>
<author>
<name>Haas, Evan</name>
</author>
<id>https://hdl.handle.net/1721.1/156039</id>
<updated>2024-08-13T03:50:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Optimization of thermoplastic composite manufacturing with digital process intelligence
Haas, Evan
Thermoplastic composites are gaining traction in industries such as aerospace and automotive due to their mechanical toughness, recyclability, and scalable manufacturing. However, the relative nascency of thermoplastic composites and their complex production means optimal manufacturing parameters are not well characterized. Processes are often developed through trial and error with limited understanding of the underlying drivers of material behavior, reducing yields and stretching development timelines. This work describes a digital intelligence infrastructure built to close this knowledge gap with high-resolution manufacturing data collection. This inexpensive system, composed of Programmable Logic Controllers (PLCs), Raspberry Pi-based telemetry units, and a SQL database, captures high-resolution data across hundreds of shop-floor sensors. Since this effort began, scrap rates for the targeted product have dropped 85%. We also describe experiments probing composite behavior during thermoforming; by monitoring parameters including pressure, temperature, cooling rate, and dimensions, the production process is characterized and controlled. A Design of Experiments (DOE) based on this platform identified temperature as the determining factor of outcome quality. Furthermore, controlling temperature by closing the loop with current sensors and infrared imaging effectively sustained high quality. Lastly, we describe the early stages of a digitally-informed New Product Development (NPD) process to reduce development times using data from this system.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sustainability Analytics - Lowering Emissions With Operational Efficiency</title>
<link href="https://hdl.handle.net/1721.1/156038" rel="alternate"/>
<author>
<name>Bhakta, Shivam</name>
</author>
<id>https://hdl.handle.net/1721.1/156038</id>
<updated>2024-08-13T03:55:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Sustainability Analytics - Lowering Emissions With Operational Efficiency
Bhakta, Shivam
The intersection of operational efficiency and sustainability is a pressing issue across industries seeking to modernize infrastructure while simultaneously addressing environmental concerns. A key challenge in the telecommunications sector is aging legacy equipment, like D4 Channel Banks, which draws significant electricity demand for a dwindling customer base. There is an opportunity to accelerate the decommissioning of these devices, and thereby lower electricity demand, by consolidating underutilized equipment without sacrificing performance or investing significant capital.
This study introduces a robust optimization framework using integer linear programming to navigate the complex trade-offs between maintaining operational integrity and decommissioning excess physical capacity at a representative Verizon central office. This technical approach was adopted because it is compatible with the binary nature of decommissioning decisions under financial and practical constraints. Employing the Gurobi optimizer within a Python environment, the model’s discrete optimization capabilities were essential in evaluating the decision to retire or maintain individual components of a network infrastructure.
The findings illustrate a compelling pathway to diminish the footprint of a representative central office’s D4 Channel Banks by up to 40.8%, translating into annual operational cost savings ranging between $16,000 and $41,000. This reduction is primarily attributed to the decreased electricity demand and consequent lowering of CO2e emissions by 22,832 tons, underpinning the potential of such optimization strategies to harmonize the pursuit of operational excellence with environmental stewardship. Through the lens of decommissioning underutilized legacy equipment, this study underscores the strategic imperative of leveraging analytics to integrate sustainability into the operational fabric of the telecommunications industry.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Climate Change and Municipal Bond Ratings</title>
<link href="https://hdl.handle.net/1721.1/156037" rel="alternate"/>
<author>
<name>Zhang, Cindy</name>
</author>
<id>https://hdl.handle.net/1721.1/156037</id>
<updated>2024-08-13T03:19:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Climate Change and Municipal Bond Ratings
Zhang, Cindy
This paper examines whether climate change risks are incorporated into municipal bond ratings. In particular, I investigate whether municipalities with exposure to sea-level rise have lower bond ratings. Using a sample of rated bond issuances from 2011 to 2020, I document a negative relationship between bond ratings and climate risk for municipalities with exposure to sea-level rise. I also test whether there is a difference in ratings between coastal municipalities and a control group of non-coastal municipalities and find mixed results. My preliminary findings suggest that this risk is at least partially incorporated into bond ratings; however, the magnitude of the effect is small.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Organizational Culture, Class Values, and Subordination at Work</title>
<link href="https://hdl.handle.net/1721.1/156036" rel="alternate"/>
<author>
<name>Zhang, Victoria Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/156036</id>
<updated>2024-08-13T03:07:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Organizational Culture, Class Values, and Subordination at Work
Zhang, Victoria Y.
Using data from online job reviews, I document a gap between blue- and white-collar workers’ evaluations of organizational culture and assess the role of two competing explanations. Worker values – or shared beliefs about what workplace culture should be – are commonly thought to influence evaluations of culture, encouraging organizations to recruit workers based upon “cultural fit.” In contrast, workers may not differ on the values that they appreciate, and instead may evaluate companies based on experiences of subordination in the workplace. Contrary to class values theories – which assume that differences in workers’ values drive differences in cultural evaluations – I find that blue- and white-collar workers largely agree about the extent to which they find company culture satisfying and about which aspects of those cultures they find satisfying. Conversely, 40-60% of the class gap can be explained by experienced subordination, which is widely seen as a negative element of culture but is unequally distributed by class. Workplaces with more blue-collar workers have more experiences of subordination, characterized by negative relationships of supervision, disrespect, and favoritism. It is the distribution of relationships of subordination, rather than differing class values, that explains class differences in evaluations of organizational culture.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How can Market Making Aid in Developing Financial Markets</title>
<link href="https://hdl.handle.net/1721.1/156035" rel="alternate"/>
<author>
<name>Imbern, Enrique Marcos Müller</name>
</author>
<id>https://hdl.handle.net/1721.1/156035</id>
<updated>2024-08-13T03:20:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How can Market Making Aid in Developing Financial Markets
Imbern, Enrique Marcos Müller
This thesis project examines the crucial role that market makers can play in enhancing liquidity and aiding the development of financial markets, with a particular focus on emerging economies. Market making involves providing buy and sell quotes for financial securities, thereby facilitating trading and contributing to market liquidity. The project delves into the evolution of financial markets, tracing their origins from basic commodity trading to the establishment of formal stock exchanges and the subsequent advancements driven by technological innovations. The analysis highlights the indispensable function of market makers in global financial markets, emphasizing their contributions to liquidity provision, price discovery, and market stability. The project underscores the specific challenges faced by emerging financial markets, such as limited liquidity, high volatility, and underdeveloped regulatory frameworks. It explores the case of Argentina as a representative example, discussing the impact of its 2001 debt default on the country's financial system and the subsequent need for market revitalization. The project presents a detailed analysis of liquidity-enhancing strategies employed by various emerging markets, drawing insights from case studies across different regions. It examines the initiatives undertaken by exchanges and regulators to broaden investor participation, promote financial literacy, reform corporate governance standards, and invest in advanced trading technologies. The transformative potential of market making in emerging markets is highlighted, focusing on its ability to enhance liquidity, reduce information asymmetry, provide market consensus, stabilize prices, facilitate economic growth, and bridge the gap with developed markets through technological adoption. The project delves into the critical importance of fostering a diverse investor base, both domestic and international, and the role of market makers in attracting and retaining investors. It discusses strategies for expanding product offerings, such as the introduction of exchange-traded funds (ETFs) and derivatives, as well as the creation of regional market linkages to increase liquidity and investment opportunities. Furthermore, the project emphasizes the need for an enabling market environment, encompassing factors such as advanced trading infrastructure, efficient pre- and post-trade processes, reliable market data, and appropriate regulatory frameworks. It explores the incentives and compensation models for market makers, examining the various schemes employed globally. In conclusion, the thesis project presents a comprehensive analysis of the challenges faced by emerging financial markets and the pivotal role that market makers can play in addressing these challenges. By enhancing liquidity, promoting market efficiency, and fostering investor confidence, market makers have the potential to catalyze the development and growth of emerging financial markets, ultimately contributing to economic prosperity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Minimizing Total Delivered Cost of Stamped Assemblies Through Sourcing Optimization</title>
<link href="https://hdl.handle.net/1721.1/156034" rel="alternate"/>
<author>
<name>Francis, Branden</name>
</author>
<id>https://hdl.handle.net/1721.1/156034</id>
<updated>2024-08-13T03:13:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Minimizing Total Delivered Cost of Stamped Assemblies Through Sourcing Optimization
Francis, Branden
This thesis presents an optimization model for identifying alternate and cost-competitive assembly sourcing strategies in the automotive industry, focusing on the "Make vs. Buy" decision-making process for a multinational automotive OEM. A “Make vs. Buy” process evaluates the strategic benefits and cost advantages derived from in-sourcing or out-sourcing a production process. Typically, one in-source scenario is evaluated, but capacity constraints may limit the opportunity to in-source. To combat capacity constraints, the optimization model was developed to evaluate sourcing production processes from other plants within the OEM’s manufacturing network. The sourcing strategy evaluates the production scenarios that multi-process stamped assemblies undergo. Utilizing a mixed integer programming framework derived from the knapsack problem, the model evaluates all production scenarios to minimize total costs while adhering to capacity and capability constraints. Results demonstrate the model's effectiveness in identifying cost-saving and alternate sourcing strategies. Future work may explore extending the model to encompass broader geographical and operational complexities within the automotive sector.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adaptive Intuitions in Complex Media Environments Shape Belief in Misinformation</title>
<link href="https://hdl.handle.net/1721.1/156033" rel="alternate"/>
<author>
<name>Orchinik, Reed</name>
</author>
<id>https://hdl.handle.net/1721.1/156033</id>
<updated>2024-08-13T03:43:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Adaptive Intuitions in Complex Media Environments Shape Belief in Misinformation
Orchinik, Reed
Belief in misinformation has been linked in part to digital media environments promoting reliance on intuition -- which in turn has been shown to increase belief in falsehoods. Here, I propose that this apparently irrational behavior may actually result from ecologically rational adaptations to complex environments. In a large survey experiment, I test whether intuitive belief in misinformation may result from these rational adaptations by randomizing participants to be shown either a largely true or largely false news feed. I show that individuals make more frequent and quicker errors on the less common headline type, and less frequent errors on the more common headline type. After seeing many true headlines, a participant is more likely to misidentify a subsequent false headline as true, and vice versa after seeing many false headlines. This pattern is consistent with adaptation to the proportion of true and false content (the veracity base rate).  I use computational modeling to show that these differences are driven by intuitions, which correspond to Bayesian priors, about the veracity of the content -- intuitions which then spill over into new environments. The results, when paired with the observation that the news consumed by most Americans is overwhelmingly true, suggest that belief in misinformation and the intuitions that underlie it are not necessarily a failing of humans in digital environments but can be a byproduct of rational adaptations to them.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Harvesting Innovation: Exploring the Potential of an AI-Enabled Platform to Revolutionize Agricultural Labor Markets</title>
<link href="https://hdl.handle.net/1721.1/156032" rel="alternate"/>
<author>
<name>Haywood, Eric Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/156032</id>
<updated>2024-08-13T04:01:24Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Harvesting Innovation: Exploring the Potential of an AI-Enabled Platform to Revolutionize Agricultural Labor Markets
Haywood, Eric Robert
Labor shortages in agriculture are a global problem, causing revenue losses and resource waste. In the US, immigration is the main source of such laborers, but labor immigration has decreased by 75% in recent years, and losses from crops left unharvested due to labor shortages were estimated at USD 3.1 billion per year in 2014.

This thesis investigates the persistent labor shortages in the agricultural sectors of the southern United States and Mexico, exploring the feasibility of alleviating these shortages through a labor matching platform enhanced by artificial intelligence (AI). With a focus on the economic implications and structural deficiencies in agricultural labor markets, the study examines how a digital platform can bridge the gap between supply and demand for agricultural labor.

The research employs a multi-dimensional approach that includes an extensive literature review, in-depth interviews with stakeholders, system dynamics modeling, and action research involving the launch of a company and release of a Minimum Viable Product (MVP). The MVP, a foundational component of the proposed digital platform, has been tested in the market to gather quantitative data and insights using web advertising.

The findings highlight the platform’s potential to streamline labor matching processes, improve transparency, and increase efficiency in the agricultural labor market. Additionally, the integration of AI provides intelligent matchmaking capabilities, predicting and aligning labor needs with available workers more effectively.

Not only does this thesis provide a potential business model to tackle a critical economic problem, but it also contributes to the broader discourse on the role of technology in transforming traditional industries in advanced and emerging economies.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Private to Public: Why Do Alternative Asset Managers Go Public?</title>
<link href="https://hdl.handle.net/1721.1/156031" rel="alternate"/>
<author>
<name>Chen, Qiwei</name>
</author>
<id>https://hdl.handle.net/1721.1/156031</id>
<updated>2024-08-13T03:56:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Private to Public: Why Do Alternative Asset Managers Go Public?
Chen, Qiwei
The global alternative asset management industry has witnessed a significant trend of firms going public since the IPO of Blackstone in 2007, and the trend has recently returned. Since 2022, firms like PAG, Tiantu Capital, and CVC Capital Partners have announced or completed their plans to go public. This study summarizes the post-2000 waves of alternative asset managers going public, including their different pathways and post-IPO developments. Utilizing a multi-case analysis method with public information, this study examines the motives, benefits, and costs associated with alternative asset managers’ decisions to go public. Four primary motives and benefits of alternative asset managers going public are identified: (1) enabling founders and strategic investors to liquidate their holdings, (2) incentivizing employees through equity-based compensation, (3) providing permanent capital to fund organic growth and external acquisitions, and (4) enhancing brand and reputation. Although this study acknowledges the costs and potential disadvantages associated with going public, they are deemed less significant compared to the benefits.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Complexity Drives Long Lead Times: A Queueing Theory Space Industry Application</title>
<link href="https://hdl.handle.net/1721.1/156030" rel="alternate"/>
<author>
<name>Murga, Blanca</name>
</author>
<id>https://hdl.handle.net/1721.1/156030</id>
<updated>2024-08-13T03:42:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How Complexity Drives Long Lead Times: A Queueing Theory Space Industry Application
Murga, Blanca
The space industry is going through a major transformation. Today’s space industry is no longer the exclusive domain of government agencies and legacy aerospace giants; commercial companies compete in the market for customers and resources. Companies are forced to develop new technology and to reduce costs in ways not seen before. A major challenge for the industry is to produce complex reusable systems at higher rates and lower costs.
Many aerospace companies produce their components in high-mix, low-volume operations known as job shops. Job shops are notorious for having long lead times. The research for this thesis was conducted at a manufacturing site at Blue Origin, a privately held space company, which operates as a job shop. The purpose was to identify the sources of the long lead times observed in the production of machined components.
The hypothesis the thesis investigates is that long lead times are the result of high variability caused by the complexity of producing space components. Using the method proposed by Factory Physics and queueing theory, this thesis demonstrates via case studies and a queueing simulation that high variability drives long wait times, leading to the long lead times experienced in job shop operations.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Transformation Trends within Automobile Supply Chains in the Post-Pandemic Era</title>
<link href="https://hdl.handle.net/1721.1/156029" rel="alternate"/>
<author>
<name>Dong, Wenzhe</name>
</author>
<id>https://hdl.handle.net/1721.1/156029</id>
<updated>2024-08-13T03:17:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Strategic Transformation Trends within Automobile Supply Chains in the Post-Pandemic Era
Dong, Wenzhe
This research delved into the transformation of supply chain strategies among automobile original equipment manufacturers (OEMs) in the post-pandemic era, motivated by the disruptions faced during the COVID-19 pandemic. This study employed qualitative research methods and conducted semi-structured interviews with employees from both supply chain and strategy functions in OEMs and suppliers. This study identified motivations for automobile supply chain strategy transformation, including the electrification trend, geopolitical events, and pandemic impacts, highlighting the need for agile and resilient supply chains. Driven by these factors, OEMs prioritized supply chain resilience through measures such as safety stock increases, dual-sourcing critical materials, and enhanced supplier collaboration. Organizational adaptations further bolstered these transformation initiatives, fostering flexibility and instilling a resilience-centric mindset. Furthermore, this study examined talent management issues and resistance to change as prominent obstacles in supply chain strategy transformation and offered targeted recommendations. The findings provided actionable insights into emerging post-pandemic supply chain transformation trends, serving as a valuable resource for automotive OEMs, suppliers, policymakers, and scholars in shaping future strategies for automobile supply chains.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Supply Chain Resiliency through Solar Panel Delivery Optimization</title>
<link href="https://hdl.handle.net/1721.1/156028" rel="alternate"/>
<author>
<name>Ceballos Mondragón, Regina</name>
</author>
<id>https://hdl.handle.net/1721.1/156028</id>
<updated>2024-08-13T03:02:22Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Improving Supply Chain Resiliency through Solar Panel Delivery Optimization
Ceballos Mondragón, Regina
Following NextEra Energy Resources’ accelerated growth and disruptions in the solar panel supply chain, their solar panel allocation process is becoming more complex. This process results in a schedule that determines when to deliver close to 150 million solar panels to more than fifty project sites under development and construction, while balancing requirements from multiple stakeholders. Due to project and contract interdependencies, modifying the equipment delivery schedule leads to costs that have consequential impacts. This thesis presents and implements a novel mixed integer programming model to determine the optimal schedule for delivering solar panels to project sites. The model abstracts impactful and quantifiable costs and minimizes them to propose a realistic solution. It produces a schedule in significantly less time than the current manual approach by finding a feasible solution in less than 15 minutes. The thesis introduces three scenarios of supply chain disruptions that mimic real-world events, demonstrating the model’s flexibility and helping NextEra Energy Resources adapt to future supply chain disruptions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Warehouse of The Future</title>
<link href="https://hdl.handle.net/1721.1/156027" rel="alternate"/>
<author>
<name>Severe, Stephanie</name>
</author>
<id>https://hdl.handle.net/1721.1/156027</id>
<updated>2024-08-13T03:24:44Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Warehouse of The Future
Severe, Stephanie
Competitive pressures require the Amgen Rhode Island (ARI) warehouse to be as efficient and low-cost as possible. An Outside Service Provider (OSP) has led operations within the warehouse since September 2021. The work is considered safe and compliant; however, there are many opportunities to mature these processes and make the work more efficient. The goal of this project is to support Amgen as it creates the warehouse of the future. ARI is targeting volume-based growth as it expands, aiming to increase its production of drug substances by 130% by 2026. Ensuring that the warehouse can support the site's long-term growth is key.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How can brands try to influence social norms?</title>
<link href="https://hdl.handle.net/1721.1/156026" rel="alternate"/>
<author>
<name>Robinet-Duffo, Richard</name>
</author>
<id>https://hdl.handle.net/1721.1/156026</id>
<updated>2024-08-13T03:59:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How can brands try to influence social norms?
Robinet-Duffo, Richard
This thesis explores how brands can influence social norms. Through detailed case studies of the American Tobacco Company, Nike, Patagonia, Yves Saint Laurent, and Viagra, it examines the various tactics employed by these brands to challenge and reshape societal perceptions and behaviors related to gender roles, athleticism, sustainable consumption, fashion, and sexual health. Key strategies include leveraging the brand's legitimacy and authenticity within its industry and previously set social norms, creating aspirational figures and personas, employing multifaceted messaging across visual, auditory, and semantic channels, and targeting individual consumers to effect collective change. The thesis also explores the ethical implications of brand influence on social norms, acknowledging the potential for both positive social change and the promotion of harmful behaviors. Ultimately, this research argues that while brands can indeed ride the waves of existing social trends, they also possess the power to actively shape the direction and pace of norm evolution through their actions and messaging. As such, the thesis underscores the importance of critical reflection on the role of brands in society and the need for responsible wielding of their influence.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Establishing Inventory Maturity in a Make-To-Order Manufacturing Environment</title>
<link href="https://hdl.handle.net/1721.1/156025" rel="alternate"/>
<author>
<name>Vignaroli, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/156025</id>
<updated>2024-08-13T03:30:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Establishing Inventory Maturity in a Make-To-Order Manufacturing Environment
Vignaroli, Adam
Accelevation, LLC experienced rapid growth in their field of data center containment manufacturing. While sales and manufacturing grew quickly, the processes to support these did not mature at the same rate. The researcher reviewed the existing operating processes related to inventory management to understand the initial state of operations. Following the review of operations, the researcher diagnosed three major focus areas for improvement to mature Accelevation's operations. These areas of focus were analytical inventory management policies, comprehensive material demand forecasting, and the achievement of sustainably high inventory accuracy. Actions were taken in each of these areas, resulting in levels of success and improvement. The results in inventory policy and demand forecasting should prepare Accelevation for future growth with more robust processes. While the actions of the researcher yielded modest improvements in inventory accuracy, Accelevation must make major strides in operational execution to alleviate these problems and fully mature their inventory management processes.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Engineering Strategy for Reshoring</title>
<link href="https://hdl.handle.net/1721.1/156024" rel="alternate"/>
<author>
<name>Easley, Jack</name>
</author>
<id>https://hdl.handle.net/1721.1/156024</id>
<updated>2024-08-13T03:04:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Engineering Strategy for Reshoring
Easley, Jack
With the resiliency of extended global supply chains tested by COVID-19 and the increase in geopolitical risks driven by events such as armed conflicts in various parts of the globe, the idea of reshoring manufacturing capabilities has gained momentum both in the popular press and in studied business decisions. In theory, reshoring decisions may be based either on new grounds of competition, such as automation to achieve lower manufacturing costs, or on a desire to reduce risk exposure, such as moving away from sole-source and/or geographically distant suppliers. For domestic industrial businesses looking for new growth opportunities or to re-evaluate strategic sourcing decisions, there is interest in looking broadly at the prime candidates for reshoring, and such analysis is more useful when viewed in the context of established strategy frameworks.&#13;
&#13;
Recognizing the risks of over-reliance on offshore production, and seeing opportunities to support manufacturing with the latest breakthroughs in advanced technology such as in sustainability and mobility, Re:Build Manufacturing is a private company founded with the mission to help revitalize the American industrial base over the coming decades. Since 2020, the company has grown quickly through mergers and acquisitions, assembling a family of engineering and manufacturing businesses and mounting a platform of capabilities. As a part of the company's strategic goals, the topic of reshoring is front and center.&#13;
&#13;
This research, therefore, serves to inform strategic decision-making for reshoring by taking a practical view of the subject through the lens of a company looking to grow domestic manufacturing -- Re:Build Manufacturing. The study performs detailed data analysis of reshoring opportunities and proposes a unified framework for assessing promising ones by comparing market intelligence against company strengths and capabilities. The approach builds an independent, data-driven model that addresses the question: out of everything that could be reshored or built, how should a company evaluate what to focus on, from a technical and competitive-landscape standpoint, at least to start with? Objective criteria for the characteristics of "good" reshoring candidates are established from the literature and paired with the application of competitive strategy frameworks. A simplified narrative is that an ideal reshoring candidate has a big market or is considered advanced technology, exhibits a rewarding financial risk/return profile, and is exposed to an above-average level of supply chain risk from offshore operations. The competitive strengths and goals of the company serve to bound the scope of product selection. Considering macro indicators, the thesis of the study centers on the creation of a new decision-support model for reshoring assessment, proposing that publicly available data can be leveraged to drive reshoring attractiveness assessments quickly, at scale, and at product-type-level detail. Broadly speaking, the study steps through macro-economic data search and analysis, reshoring ranking model construction, company capabilities inventory, and synthesis of reshoring opportunities.&#13;
&#13;
Analysis of the model's results suggests that, in aggregate and absent company-unique considerations, the model provides a reasonable approximation of general reshoring attractiveness across product-types. Specifically, of the 6 product-types selected for verification study, 67% retained their relative ranking under additional scrutiny. It is worth noting that, given macro-data as inputs, the model does not capture nuanced competitive information; as such, detailed case studies should dictate specific reshoring considerations. Further, the true performance of the model will only become apparent over time, as the extended life cycle of manufacturing decisions takes years to materialize. Nevertheless, the results offer a holistic starting point to guide manufacturing businesses both in strategic positioning for product portfolio planning and in opportunity screening when scaling the business, informing and shaping strategies for long-term growth.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Intersection of Management Practices and Bottom of the Pyramid Economics in Developing Countries</title>
<link href="https://hdl.handle.net/1721.1/156023" rel="alternate"/>
<author>
<name>Lavda, Aliki</name>
</author>
<id>https://hdl.handle.net/1721.1/156023</id>
<updated>2024-08-13T03:17:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exploring the Intersection of Management Practices and Bottom of the Pyramid Economics in Developing Countries
Lavda, Aliki
This thesis explores the intersection of management practices and Bottom of the Pyramid (BoP) economics within developing countries. Utilizing a comprehensive case study approach, it evaluates several multinational corporations' ventures in these regions, focusing on their strategies to leverage BoP markets for both business innovation and socio-economic upliftment. The research highlights how these companies integrate core business strategies with local economic conditions to create value for both the company and the local communities. Through detailed analysis of ventures by companies such as DuPont, SC Johnson, VisionSpring, and Procter &amp; Gamble, the study identifies critical factors that influence the success or failure of BoP initiatives, such as product-market fit, community integration, governance partnerships, and sustainable business model innovation. To critically assess these ventures, two theoretical frameworks are deployed: the Specified Analytical Criteria framework, which emerged from the existing BoP venture literature, and the Sustainability-Oriented Innovation framework, adapted from sustainable business practices. Additionally, this study hypothesizes that ventures incorporating a dual-entity structure combining for-profit and non-profit elements may increase their chances of success by effectively balancing economic and social goals. This hypothesis is assessed through an in-depth case study of Sanergy Collaborative, a venture operating in Nairobi's informal settlements that transforms waste into valuable resources. By aligning empirical findings with theoretical insights, this work provides a nuanced understanding of hybrid business models and offers refined models for future BoP ventures that aim to achieve scalable social impact alongside financial sustainability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging Digital Transformation Gaps in Southeast Asia</title>
<link href="https://hdl.handle.net/1721.1/156022" rel="alternate"/>
<author>
<name>Roman, Francisco Matthew Guevarra</name>
</author>
<id>https://hdl.handle.net/1721.1/156022</id>
<updated>2024-08-13T03:30:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Bridging Digital Transformation Gaps in Southeast Asia
Roman, Francisco Matthew Guevarra
This thesis explores the Digital Transformation Gaps in Southeast Asia, specifically focusing on the challenges faced by companies in the region when adopting Western-based business systems such as Enterprise Resource Planning (ERP) and Customer Relationship Management (CRM) systems. While these systems are technically equipped to meet business requirements in Southeast Asia, there is a notable disconnect in their practical application due to distinct regional differences. These disparities include variations in business culture, workforce competency, geographical constraints, and economies of scale, as well as distinct best practices, processes, and workforce power dynamics. The research methodically examines case studies of and interviews with Southeast Asian companies and their experiences with Western systems, highlighting the nuances that lead to inefficiencies and operational challenges. The study delves into the cultural and structural aspects of Southeast Asian business environments, contrasting them with Western models. It argues that the one-size-fits-all approach of Western business systems fails to accommodate these unique regional characteristics, leading to a digital transformation gap. This thesis proposes a framework for adapting Western business systems to better align with Southeast Asian contexts. It emphasizes the importance of localizing these systems to bridge the digital transformation gap, ensuring that they are not only technically sound but also culturally and operationally relevant. The conclusion offers strategic recommendations for companies and system developers, aimed at fostering more effective and sustainable digital transformations in Southeast Asia. This work contributes to the broader understanding of global digitalization, emphasizing the need for regional customization in global business solutions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unearthing Regulatory Influences on Climate Risk Adaptation: Exploring Asset Stranding and Regulatory Shortcomings in the US Housing Market</title>
<link href="https://hdl.handle.net/1721.1/156021" rel="alternate"/>
<author>
<name>Spiller, Matteo</name>
</author>
<id>https://hdl.handle.net/1721.1/156021</id>
<updated>2024-08-13T03:09:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Unearthing Regulatory Influences on Climate Risk Adaptation: Exploring Asset Stranding and Regulatory Shortcomings in the US Housing Market
Spiller, Matteo
Financial institutions have not yet exhaustively assessed the implications that ESG risk may pose to the financial industry despite anthropogenic temperature change being duly and scientifically described in the Paris Agreement in December 2015 (Schellnhuber et al., 2016). Both banks and insurance companies will be impacted, especially in their respective real estate portfolios, and for this reason, the current risk management practices should evolve in order to exhaustively embed these scenarios in their stress testing methodologies (Jung et al., 2021). On top of this, several studies identified robust evidence of long-term growth losses for both poor and rich countries driven by natural disasters. These future climate change implications have been estimated at roughly $9.7 trillion (Hsiang &amp; Jina, 2014). &#13;
&#13;
Economic growth and climate change are two closely interconnected variables whose interplay will become more and more important in the future since, as an example, higher temperatures considerably reduce economic growth (Dell et al., 2012) and political stability (Hsiang et al., 2013).&#13;
&#13;
This thesis delves into the complex regulatory frameworks that will shape the financial sector, seeking to understand how politics shape and influence resilience and sustainability while exposing financial institutions to a new set of risks (Buhr, 2016), such as stranded assets and new crisis scenarios that could undermine the stability of the entire financial system. As of today, the lack of unified definitions and consensus has led financial institutions to implement ad-hoc methodologies creating discrepancies among them and a lack of unified interpretability of the underlying results.&#13;
&#13;
Europe has made considerable improvements to its regulatory framework and is moving toward a homogeneous regulatory landscape (Baumuller &amp; Grbenic, 2021). Meanwhile, US political discourse has slowed the implementation of essential regulations that are needed not only by financial institutions but also by multiple stakeholders, including investors, regulatory bodies, local entities, and supranational organizations (Dunlap &amp; McCright, 2010). Numerous non-binding guidelines have emerged, setting the stage for a more comprehensive and detailed Climate Act of similar magnitude to the Dodd-Frank Act. &#13;
&#13;
This study’s conclusion highlights the need for additional regulations and guidelines from supervisory authorities on top of recommending key approaches and areas of study not only for financial institutions but also for future research. As such, these will need to provide the foundation for the next regulatory developments considering both a systematic shift toward a low-carbon economy and a delayed abrupt transition to mitigate the potential implications that could undermine financial stability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building Inventory Simulations for High Velocity Garment Retail Stores</title>
<link href="https://hdl.handle.net/1721.1/156020" rel="alternate"/>
<author>
<name>Qi, Davy</name>
</author>
<id>https://hdl.handle.net/1721.1/156020</id>
<updated>2024-08-13T03:25:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Building Inventory Simulations for High Velocity Garment Retail Stores
Qi, Davy
To facilitate agility in store inventory planning for a brick-and-mortar retail business with high sales velocity and product portfolio complexity, this project created a Monte Carlo tool that simulates how upstream shipment decisions impact capacity utilization and product complexity. The simulation model was built in two steps: first a Monte Carlo model for aggregated store inventory, followed by machine learning models that predict the display inventory and the number of store- and display-unique articles based on Monte Carlo outputs. In the process of building the Monte Carlo model, the project examined methods to model inventory trends, developed a quantification technique for daily demand stochasticity, and explored possibilities to control the simulation stochasticity. These methods and techniques, novel to retail inventory modeling, were able to model store inventory with little systematic bias and with store daily mean absolute inventory deviations within 2-4%. For the machine learning models, the project systematically examined the efficacy of linear regression, tree, and fully connected neural network models at making time series predictions using two time series as inputs. It also rigorously examines the limitations and advantages of various model architectures, including the selection of variables, treatment of multiple time series, order of predictions, and the scope of loss functions. The final machine learning model results showed some systematic biases, with daily mean absolute deviation ranging from 3-10% for display inventory and up to 10-20% for unique articles.
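A minimal sketch of the first step, simulating aggregated store inventory under stochastic daily demand, might look like the following; the demand distribution, shipment cadence, and all parameter values are invented for illustration and are not the project's fitted quantities:

import numpy as np

rng = np.random.default_rng(0)

# Invented parameters: 90 simulated days, 1000 Monte Carlo runs, gamma-distributed
# daily demand, and a fixed upstream shipment every 5 days.
days, n_runs = 90, 1000
mean_demand, cv = 120.0, 0.35
ship_qty, ship_every = 600, 5
inv = np.full(n_runs, 800.0)

end_inv = np.empty((n_runs, days))
shape, scale = (1 / cv) ** 2, mean_demand * cv ** 2   # gamma with given mean and CV
for d in range(days):
    if d % ship_every == 0:
        inv += ship_qty                                # shipment decision arrives
    demand = rng.gamma(shape, scale, n_runs)
    inv = np.maximum(inv - demand, 0.0)                # lost sales when stocked out
    end_inv[:, d] = inv

print("daily inventory mean:", end_inv.mean(), "p95:", np.percentile(end_inv, 95))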
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging digital tools and analytics for temperature management in cold chain systems for gene therapies</title>
<link href="https://hdl.handle.net/1721.1/156019" rel="alternate"/>
<author>
<name>Lee, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/156019</id>
<updated>2024-08-13T03:08:00Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Leveraging digital tools and analytics for temperature management in cold chain systems for gene therapies
Lee, Jessica
Emerging advanced therapies at Johnson &amp; Johnson Innovative Medicine, such as a new retina gene therapy, require maintaining ultra-low temperatures within the cold supply chain from the manufacturing plant and throughout distribution to the customer. In comparison to traditional cold chain medicines, such as most vaccines, gene therapies are high-value, low-volume products, and assurance of product quality requires visibility into the full time-temperature history. This thesis describes the requirements for an end-to-end, digitally enabled temperature management system for gene therapies. First, we establish a baseline understanding of the location, incidence, and severity of temperature excursions across the cold chain, based on current practices managing traditional drugs, through descriptive statistics on real-time temperature data, historical excursion records, and product complaints. While J&amp;J has digital temperature monitoring solutions in place today, tracing the temperature history of a product across multiple legs of the supply chain, as required for a gene therapy, has to be done through manual review of disparate temperature records. To fill this gap in the existing infrastructure, we define the requirements for integrating temperature data across 6 enterprise data systems, including sensor data, ERP systems for shipments and warehouse management, and serialization records. Lastly, we build a Monte Carlo simulation to inform performance requirements for the system by modeling the trade-offs in system reliability and the cost of product loss.
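For intuition on the reliability versus product-loss trade-off such a simulation weighs, here is a minimal sketch; the excursion rate, detection probabilities, and per-unit product value are invented illustrations, not J&amp;J figures:

import numpy as np

rng = np.random.default_rng(1)

# Invented figures: excursion probability per shipment, per-unit product value,
# and candidate sensor-detection reliabilities to compare.
n = 100_000
p_excursion = 0.02
product_value = 500_000.0          # dollars per unit, illustrative only

for p_detect in (0.90, 0.99, 0.999):
    excursed = rng.binomial(1, p_excursion, n)
    detected = excursed * rng.binomial(1, p_detect, n)
    missed = excursed - detected                 # excursions that slip through
    discard_cost = detected.sum() * product_value / n
    print(p_detect, "missed-excursion rate:", missed.mean(),
          "expected discard cost per shipment:", round(discard_cost, 2))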
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techno-Economic Analysis of Line Haul and Switcher Locomotive Propulsion by Diesel, Battery, and Hydrogen Fuel Cell Technologies</title>
<link href="https://hdl.handle.net/1721.1/156018" rel="alternate"/>
<author>
<name>Lerman, Benjamin D.</name>
</author>
<id>https://hdl.handle.net/1721.1/156018</id>
<updated>2024-08-13T03:29:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Techno-Economic Analysis of Line Haul and Switcher Locomotive Propulsion by Diesel, Battery, and Hydrogen Fuel Cell Technologies
Lerman, Benjamin D.
This thesis examines the critical challenges of reducing greenhouse gas (GHG) emissions within the freight rail industry of the United States transportation sector. The transportation sector, being a significant contributor to the nation’s GHG emissions, requires urgent attention to mitigate environmental and public health impacts. This thesis presents the emissions profile of the U.S. freight rail system and explores potential strategies for decarbonization. Previous research has established the freight rail system as a relatively more efficient mode of cargo transport in terms of emissions; however, to attain national goals set for GHG emissions, further reduction of its carbon footprint is required. Through a detailed analysis of the current propulsion technologies, ranging from conventional diesel-electric locomotives to emerging alternatives such as battery electric, hydrogen fuel cell, and electrified rail, the paper evaluates their potential to reduce emissions within the freight rail sector. The use of a Total Cost of Ownership (TCO) and Environmental Impact Analysis quantifies the financial and environmental implications of adopting these technologies. The findings reveal significant opportunities for reducing GHG emissions through the adoption of cleaner propulsion technologies. Challenges associated with their implementation include infrastructure requirements and technological readiness. A strategic roadmap for the decarbonization of freight rail is proposed, segmented into short-term (0-5 years), medium-term (5-15 years), and long-term (15+ years) objectives. Emphasis is placed on the importance of regulatory frameworks, technological advancements, and stakeholder collaboration in achieving a sustainable transition. The study aims to inform policymakers, industry stakeholders, and researchers about the pathways towards a sustainable and efficient freight rail system.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Strategic Framework for Evaluating Next-Generation Technologies in Biocatalysis</title>
<link href="https://hdl.handle.net/1721.1/156017" rel="alternate"/>
<author>
<name>Creta, Alec</name>
</author>
<id>https://hdl.handle.net/1721.1/156017</id>
<updated>2024-08-13T03:47:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Strategic Framework for Evaluating Next-Generation Technologies in Biocatalysis
Creta, Alec
The emergence of a new wave of biocatalysis innovation is rapidly transforming the pharmaceutical industry. This next generation of techniques, characterized by metagenomics, artificial intelligence, and computational modeling, is reshaping approaches to process development for companies operating in this space. However, significant challenges exist in fully harnessing the potential of this new technology due to limitations in internal capabilities, including time constraints and knowledge gaps. To overcome these obstacles and unlock the true growth potential of biocatalysis, pharmaceutical companies must strategically leverage external supply organizations to tap into this innovation and bridge their existing capability gaps.&#13;
This thesis proposes a comprehensive framework for the site selection of a next-generation technology contract development and manufacturing organization (CDMO) in biocatalysis. This framework adopts a tiered approach, with a primary focus on the use of real options analysis to facilitate quantitative decision-making in emerging technology site selection. Following the framework establishment, its application challenges the initial high-cost assumptions associated with emerging technology CDMOs, revealing a significant 20% reduction in expected costs. Overall, this de-risks the emerging technology investment and drives the implementation of novel and innovative processes in early-phase biocatalysis.
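Real options analysis is commonly operationalized with a binomial lattice; a minimal sketch under that assumption follows (the thesis's actual option structure and inputs are not published here, so the project value, cost, volatility, and horizon are illustrative):

import math

def crr_option_value(v0, cost, r, sigma, T, steps):
    """Value of the option to invest `cost` in a project worth v0 today (CRR tree)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)         # risk-neutral up probability
    disc = math.exp(-r * dt)
    # Terminal payoffs: invest only where project value exceeds the cost.
    vals = [max(v0 * u ** j * d ** (steps - j) - cost, 0.0) for j in range(steps + 1)]
    for _ in range(steps):                        # backward induction
        vals = [disc * (p * vals[j + 1] + (1 - p) * vals[j])
                for j in range(len(vals) - 1)]
    return vals[0]

# Illustrative inputs only: $100M project value, $110M cost, 3-year deferral window.
print(crr_option_value(v0=100.0, cost=110.0, r=0.03, sigma=0.4, T=3.0, steps=120))

Valuing the flexibility to defer or stage the CDMO commitment in this way is what allows uncertain, high-volatility technology investments to look cheaper than a static discounted-cash-flow view suggests.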
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Online Mingling: Supporting Ad Hoc, Private Conversations at Virtual Conferences</title>
<link href="https://hdl.handle.net/1721.1/156016" rel="alternate"/>
<author>
<name>Song, Jaeyoon</name>
</author>
<id>https://hdl.handle.net/1721.1/156016</id>
<updated>2024-08-13T03:24:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Online Mingling: Supporting Ad Hoc, Private Conversations at Virtual Conferences
Song, Jaeyoon
Even though today’s videoconferencing systems are often very useful, they do not support one of the most important aspects of in-person meetings: the ad hoc, private conversations that happen before, after, and during the breaks of scheduled events, the proverbial hallway conversations. Here we describe our design of a simple system, called Minglr, which supports this kind of interaction by facilitating the matching of conversational partners. We describe two studies of this system’s use at two virtual conferences with over 450 total participants. Our results provide evidence for the usefulness of this capability, showing, for example, that 81% of people who used the system successfully thought that future virtual conferences should include a tool with similar functionality. We believe that similar functionality is likely to be widely implemented in many videoconferencing systems and to increase the feasibility and desirability of many kinds of remote work and socializing.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Private Equity Secondaries: A Comparative Analysis on the US and Chinese Markets</title>
<link href="https://hdl.handle.net/1721.1/156015" rel="alternate"/>
<author>
<name>Han, Weizong</name>
</author>
<id>https://hdl.handle.net/1721.1/156015</id>
<updated>2024-08-13T03:26:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Private Equity Secondaries: A Comparative Analysis on the US &#13;
and Chinese Markets
Han, Weizong
Since its inception in the early 2000s, China's private equity secondary market has evolved dramatically. Despite its fast growth, exit strategies in China’s private equity market lag, especially as tighter IPO regulations complicate exits, undermining liquidity and returns. In contrast, the U.S. benefits from a robust secondary market, underscoring its critical role in the maturity of the private equity industry.&#13;
This thesis employs a mixed-methods approach to explore the U.S. private equity secondary market's evolution and to assess the status and challenges of China's private equity secondary market. The U.S. market enjoys a well-established regulatory environment, professional intermediaries, and advanced trading platforms, contributing to its efficiency and liquidity. Conversely, China's market, despite its growth, grapples with regulatory insufficiencies, professional gaps, and opaque transactions. Enhancements to China's legal framework, professional services, and trading platform functionalities are proposed to foster market development and global integration, aiming to enrich both academic discourse and provide practical guidance for stakeholders.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Paths to Achieving Scope 1 Carbon Neutrality in Building Utilities</title>
<link href="https://hdl.handle.net/1721.1/156014" rel="alternate"/>
<author>
<name>Willette, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/156014</id>
<updated>2024-08-13T03:58:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Paths to Achieving Scope 1 Carbon Neutrality in Building Utilities
Willette, Daniel
Corporate entities are increasingly adopting sustainable practices and working towards climate commitments. This study seeks to provide a practical, actionable guide for corporations seeking to navigate the complexities of Scope 1 carbon abatement. Specifically, a framework is developed to determine paths to achieve Scope 1 carbon neutrality in building utilities for a large global biotechnology company. The framework combines data analysis, extensive stakeholder engagement, financial evaluation, and expert consultation with the application of optimization modeling. This multi-dimensional approach is designed to navigate the complex landscape of carbon abatement, identifying viable technologies and strategies that pave the way to achieving Scope 1 carbon neutrality while balancing operational efficiency, cost-effectiveness, and strategic priorities.&#13;
The research evaluates 11 categories of decarbonization solutions, encompassing energy efficiency measures and alternative and renewable energy sources, with an emphasis on their technical viability, implementation feasibility, and financial impacts. Through this assessment, the study zeroes in on 9 solutions deemed most appropriate for the biotechnology industry, incorporating them into an optimization model. This model serves as a strategic tool, guiding the selection of decarbonization projects and the appropriate volume of carbon offsets required to achieve carbon neutrality. The optimization model is a flexible platform for evaluating various scenarios and constraints, thereby facilitating informed decisions that align with a company’s environmental, financial, and strategic objectives.&#13;
The developed framework and insights can serve as a blueprint for other corporations grappling with similar challenges in reducing Scope 1 emissions from their building utilities. The research underscores the potential for significant environmental impact through the adoption of targeted decarbonization strategies, contributing to the broader goal of mitigating climate change.
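A minimal sketch of how such a projects-plus-offsets selection can be posed as a mixed integer program is shown below; the candidate solutions, abatement figures, costs, and offset price are invented placeholders, not the study's nine vetted solutions:

import pulp

# Invented candidates: (annual tCO2e abated, annualized cost in $k).
solutions = {
    "heat_recovery":  (1200, 450),
    "solar_ppa":      (2500, 600),
    "boiler_electrification": (1800, 700),
    "led_retrofit":   (300, 60),
}
target_tco2e = 5000
offset_price_k = 0.5             # assumed $k per tCO2e of carbon offsets

m = pulp.LpProblem("scope1_neutrality", pulp.LpMinimize)
pick = pulp.LpVariable.dicts("pick", solutions, cat="Binary")
offsets = pulp.LpVariable("offsets", lowBound=0)

# Minimize project costs plus offset purchases...
m += pulp.lpSum(solutions[s][1] * pick[s] for s in solutions) + offset_price_k * offsets
# ...subject to abatement plus offsets covering the Scope 1 target.
m += pulp.lpSum(solutions[s][0] * pick[s] for s in solutions) + offsets &gt;= target_tco2e

m.solve()
print([s for s in solutions if pick[s].value() == 1], "offsets:", offsets.value())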
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Process Digitalization: 3D Deep Learning in Manufacturing Applications</title>
<link href="https://hdl.handle.net/1721.1/156013" rel="alternate"/>
<author>
<name>Kochert, Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/156013</id>
<updated>2024-08-13T03:59:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Process Digitalization: 3D Deep Learning in Manufacturing Applications
Kochert, Ryan
The surge in artificial intelligence (AI) popularity and investment has significantly impacted various sectors, including automotive, aerospace, and defense. Smaller companies at the base of these supply chains often lack the resources and knowledge for AI implementation compared to larger original equipment manufacturers, creating a unique opportunity for these smaller companies to leverage AI for growth. However, many AI initiatives in these smaller firms stall at the prototyping phase. This research outlines, from planning to execution, the steps and considerations for implementing an AI initiative at a small to medium-sized manufacturing company. Given the importance of 3D data in the industry, the research also conducts a deep dive on working with, analyzing, and integrating 3D data into an AI model using various techniques, from statistical analysis to 3D deep learning. Discussion of the different data representations, including point clouds, voxels, polygon meshes, depth maps, and boundary representations, and their trade-offs helps determine which representation is best for different use cases. Most of the techniques apply to various unstructured data types, enabling multi-modal inputs to a descriptive, predictive, or prescriptive AI model. Additionally, beyond the technical requirements, an entire section is dedicated to the human element in this process, focusing on a company’s personnel and cultural aspects, which is often where initiatives succeed or fail.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Financial Inclusion In Sub-Saharan Africa: A Multidimensional Index</title>
<link href="https://hdl.handle.net/1721.1/156012" rel="alternate"/>
<author>
<name>Diallo, Aïda Sadio</name>
</author>
<id>https://hdl.handle.net/1721.1/156012</id>
<updated>2024-08-13T04:02:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Financial Inclusion In Sub-Saharan Africa :  A Multidimensional Index
Diallo, Aïda Sadio
Financial inclusion has emerged as a crucial enabler for sustainable development, with significant implications for poverty reduction, economic growth, and gender equality. Despite the growing recognition of its importance, measuring financial inclusion remains a complex challenge, particularly in the context of Sub-Saharan Africa, where countries face unique challenges and opportunities. This thesis aims to contribute to the literature by developing a comprehensive, multidimensional financial inclusion index specifically tailored to the Sub-Saharan African context. Building upon previous methodologies, the index incorporates an expanded set of both demand-side and supply-side indicators across key dimensions of financial inclusion. The insights generated by this research have important policy implications, providing a valuable tool for policymakers to diagnose bottlenecks, prioritize reforms, and track progress over time. By contributing to the evidence base on financial inclusion measurement and its implications, this thesis aims to support the development of more efficient, equitable, and inclusive financial systems across Sub-Saharan Africa.
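The normalize-and-aggregate step common to such composite indices can be sketched as follows; the indicator values, weights, and dimension choices are invented placeholders rather than the thesis's actual data:

import numpy as np

# Invented indicators for three countries (rows): account ownership (%),
# branches per 100k adults, mobile-money usage (%). Real inputs would be
# Global Findex / IMF Financial Access Survey style data.
indicators = np.array([
    [22.0, 4.1, 35.0],
    [55.0, 9.8, 60.0],
    [38.0, 6.0, 48.0],
])
weights = np.array([0.4, 0.3, 0.3])      # assumed dimension weights

# Min-max normalize each indicator to [0, 1], then take a weighted aggregate.
lo, hi = indicators.min(axis=0), indicators.max(axis=0)
norm = (indicators - lo) / (hi - lo)
index = norm @ weights
print(index)                              # one composite score per country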
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Can We Promote Impact/ESG Investing? - Clarifying the Skeptical Reasons and Benefits of Addressing Impact &amp; ESG Investing in the Age of Artificial Intelligence</title>
<link href="https://hdl.handle.net/1721.1/156010" rel="alternate"/>
<author>
<name>Tamura, Yosuke</name>
</author>
<id>https://hdl.handle.net/1721.1/156010</id>
<updated>2024-08-13T03:04:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How Can We Promote Impact/ESG Investing? - Clarifying the Skeptical Reasons and Benefits of Addressing Impact &amp; ESG Investing in the Age of Artificial Intelligence
Tamura, Yosuke
Drawing upon professional experiences in Impact/ESG consulting and investment, this thesis explores the efficacy of Impact and ESG investments in enhancing corporate value. Chapter 1 introduces the complex landscape of these investments, outlining common misconceptions and the diverse definitions that prevail across different stakeholders. Chapter 2 delves into the metrics and standards used to assess these investments, highlighting the confusion caused by multiple rating systems and the impact on stakeholder decisions. Chapter 3 presents an event study focusing on stock market reactions to ESG rating changes, revealing that while negative rating changes significantly influence market behavior, positive changes do not. This suggests that investors primarily use ESG ratings for negative screening. Chapter 4 extends the discussion to the role of artificial intelligence (AI) in impact investment, assessing both its potential and risks within the context of future societal impacts. Chapter 5 explores the practical applications of impact investments, particularly how they can address global health challenges through initiatives like the Triple I. The conclusion synthesizes these insights, arguing for a redefinition of ESG and impact investment frameworks that align with corporate strategies. It proposes that blending these investments with robust business models and transparent metrics can lead to sustainable corporate growth and greater stakeholder satisfaction. This thesis provides a roadmap for companies and investors aiming to genuinely enhance corporate value and societal welfare through impact and ESG investment practices.
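The event-study machinery behind the Chapter 3 result can be sketched as follows, with simulated returns standing in for the thesis's data; the market-model parameters and the assumed event-day effect are invented:

import numpy as np

rng = np.random.default_rng(5)

# Simulated market-model returns: 200 downgrade events, 5-day windows.
n_events, window = 200, 5
alpha, beta = 0.0002, 1.1
mkt = rng.normal(0.0003, 0.01, (n_events, window))
stock = alpha + beta * mkt + rng.normal(0, 0.012, (n_events, window))
stock[:, 2] += -0.004                       # assumed drop on the event day

abnormal = stock - (alpha + beta * mkt)     # AR = actual minus market-model return
car = abnormal.sum(axis=1)                  # cumulative abnormal return per event
t_stat = car.mean() / (car.std(ddof=1) / np.sqrt(n_events))
print("mean CAR:", round(car.mean(), 4), "t-statistic:", round(t_stat, 2))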
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resilient by Design: A Supply Chain Digitalization Journey</title>
<link href="https://hdl.handle.net/1721.1/156009" rel="alternate"/>
<author>
<name>Vela González, Carlos David</name>
</author>
<id>https://hdl.handle.net/1721.1/156009</id>
<updated>2024-08-13T03:34:06Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Resilient by Design: A Supply Chain Digitalization Journey
Vela González, Carlos David
In an era where supply chain disruptions have become increasingly relevant due to geopolitical and environmental factors, resilience has emerged as a critical focus for organizations worldwide. This is particularly true in the pharmaceutical sector, where ensuring an uninterrupted supply of medical products is not only a business necessity but also a moral imperative, given the direct impact on patients’ health and well-being.&#13;
&#13;
This thesis presents the development of a digital tool designed to enhance the resilience of AstraZeneca’s supply chain, employing a design thinking approach. The tool leverages simulation and business intelligence, providing a versatile platform for conducting stress tests and evaluating response mechanisms across a spectrum of scenarios. This capability is instrumental in refining business continuity plans and informing strategic decisions on disruption response and capacity investments.&#13;
&#13;
While the tool was initially conceived to address the specific needs of AstraZeneca, its architecture is inherently generic and modular. This deliberate design choice ensures that the tool can be seamlessly adapted and scaled for use across various industries, transcending the initial scope of application. Additionally, the tool lays a solid foundation for future developments in the realm of supply chain digital twins.&#13;
&#13;
The thesis also contributes a comprehensive framework for boosting supply chain resilience through the lens of digitalization. It offers a strategic blueprint that organizations can adopt to proactively navigate and mitigate the intricacies of global supply chain disruptions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oil &amp; Gas Regional Operations Electrification Estimations</title>
<link href="https://hdl.handle.net/1721.1/156008" rel="alternate"/>
<author>
<name>Cohen, Rebecca</name>
</author>
<id>https://hdl.handle.net/1721.1/156008</id>
<updated>2024-08-13T03:56:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Oil &amp; Gas Regional Operations Electrification Estimations
Cohen, Rebecca
Emissions from Petroleum and Natural Gas systems accounted for 11.7% of greenhouse gas emissions reported to the EPA in 2022 [23]. Oil &amp; Gas companies in the United States are exploring ways to reduce their greenhouse gas footprint. One avenue being explored for emissions reductions is operations electrification. NextEra Energy is a leading renewable energy developer in the United States with a goal of helping high-emitting industries with their decarbonization goals. This paper explores the potential for NextEra Energy Resources, a segment of NextEra Energy, to support Oil &amp; Gas companies in their emission reduction efforts. This potential partnership is explored by estimating a specific opportunity size for Oil &amp; Gas operations electrification and identifying how NextEra Energy Resources can support these goals.&#13;
To develop an estimate of potential electric need, Oil &amp; Gas combustion emissions were analyzed for specific industry segments in specific regions. This evaluation resulted in a megawatt opportunity for selected industry segments in selected regions. Within one selected region, company-specific opportunities were identified and wind mapping was conducted to determine the potential for wind development in the area. With the insights developed from this project, NextEra Energy Resources can create a targeted approach for supporting Oil &amp; Gas companies as they pursue their emissions reduction goals.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Economic Comparison of Solar Racking Options to Decarbonize Florida Power &amp; Light’s System</title>
<link href="https://hdl.handle.net/1721.1/156007" rel="alternate"/>
<author>
<name>Aguiar, Marcelo</name>
</author>
<id>https://hdl.handle.net/1721.1/156007</id>
<updated>2024-08-13T03:32:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Economic Comparison of Solar Racking Options to Decarbonize Florida Power &amp; Light’s System
Aguiar, Marcelo
With the continued decline of the cost of solar photovoltaics, the importance of optimizing this resource in decarbonization efforts is increasing. In this thesis, we compare different solar racking options to minimize total system cost. We focus this analysis on the flat, tracking, and fixed racking options. We then estimate the Threshold Cost Ratio between each option and Tracking Solar, which dominates utility-scale solar projects in the U.S. For this analysis, we use the Florida Power &amp; Light (FPL) system as a case study, basing our study exclusively on publicly available information. Using the LCOE and a Capacity Expansion Model to compare the different racking options, we conclude that Flat Solar would be preferred to Tracking Solar if its cost were 72-77% of the cost of Tracking Solar or lower. For Fixed Solar, this ratio is between 79-84%. Utilities can then use these ratios by estimating the expected Cost Ratio and comparing it to the Threshold Cost Ratio. For example, if FPL estimated that Flat Solar cost 70% of the cost of Tracking Solar per watt (DC), this analysis indicates that it should mostly build Flat Solar; but if it cost more than 77%, Tracking Solar would be preferred. In addition to lowering costs, evaluating other racking options can significantly reduce the total land needed for decarbonizing FPL, since Tracking Solar is the racking option that needs the most land per unit of energy produced.
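A back-of-envelope version of the Threshold Cost Ratio logic: since LCOE scales with capital cost over energy yield, two racking options break even when their capex ratio equals their capacity-factor ratio. The capital costs, capacity factors, and financing assumptions below are illustrative placeholders, not FPL values:

# Invented capital costs and capacity factors; FPL-specific values are in the thesis.
def lcoe(capex_per_w, capacity_factor, crf=0.07, opex_frac=0.015):
    """$/MWh: annualized cost of 1 W of capacity over its annual energy output."""
    annual_cost = capex_per_w * (crf + opex_frac)       # $ per W-year
    annual_mwh = capacity_factor * 8760 / 1e6           # MWh per W-year
    return annual_cost / annual_mwh

tracking = lcoe(capex_per_w=0.90, capacity_factor=0.28)
flat = lcoe(capex_per_w=0.70, capacity_factor=0.22)
# With identical financing, LCOEs are equal when the capex ratio equals the
# capacity-factor ratio, so the threshold ratio here is 0.22 / 0.28, about 0.79.
print(round(tracking, 1), round(flat, 1), round(0.22 / 0.28, 2))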
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Re-imagining Drug Discovery with Quantum Computing: A Framework and Critical Benchmark Analysis for achieving Quantum Economic Advantage</title>
<link href="https://hdl.handle.net/1721.1/156006" rel="alternate"/>
<author>
<name>Galatsanos-Dueck, Johannes</name>
</author>
<id>https://hdl.handle.net/1721.1/156006</id>
<updated>2024-08-13T03:57:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Re-imagining Drug Discovery with Quantum Computing: A Framework and Critical Benchmark Analysis for achieving Quantum Economic Advantage
Galatsanos-Dueck, Johannes
Quantum computing’s (QC) promise of solving computationally hard problems has captured public attention and imagination, leading to significant private and public capital investments in recent years. At the same time, we are at the cusp of a biomedical revolution powered by computer-aided drug discovery (CADD). Drug discovery companies are rapidly transitioning to the use of artificial intelligence to expedite and enhance research and development. However, many of the classical AI use cases scale exponentially fast and face computational power ceilings. QC can potentially accelerate these processes by several orders of magnitude in the future. As such, an open question for drug discovery companies is when and how to adopt QC. &#13;
This thesis summarizes quantum CADD methods and useful applications in drug discovery. The current state and trajectory of quantum computing are critically analyzed based on multiple benchmarks and manufacturer roadmaps. Furthermore, 11 industry decision-makers were interviewed to identify the current behaviors of end customers investing in QC. To answer the question of the correct timing and sizing of investments for a drug discovery company, the concept of net quantum economic advantage is introduced, considering all direct and indirect costs and benefits. A framework for drug discovery companies to monitor and invest in QC to reach a net quantum economic advantage is provided. &#13;
The most useful QC algorithms for CADD, Quantum Phase Estimation and Quantum Machine Learning, will provide practical value after &gt;2000 logical qubits and circuit sizes of &gt;10^11 gates, a far cry from today’s performance of single-digit logical qubits. Based on manufacturer timelines, these benchmarks may be achieved in the mid-2030s. However, other use cases might become interesting in the coming years, and preparing a company to take advantage of QC has a long lead time. As such, drug discovery companies should move to an active quantum monitoring phase soon.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Decarbonizing the Shipping industry through Innovative Technologies, Artificial Intelligence and New Regulations</title>
<link href="https://hdl.handle.net/1721.1/156005" rel="alternate"/>
<author>
<name>Sarantopoulos, Fotis</name>
</author>
<id>https://hdl.handle.net/1721.1/156005</id>
<updated>2024-08-13T03:02:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Decarbonizing the Shipping industry through Innovative Technologies, Artificial Intelligence and New Regulations
Sarantopoulos, Fotis
The shipping industry, responsible for approximately 3% of global CO2 emissions, plays a pivotal role in the global economy, handling over 90% of world trade. This thesis addresses the urgent need for decarbonization within the maritime sector by examining innovative technologies, regulatory frameworks, and the potential of artificial intelligence to enhance operational efficiencies. The research delves into various sustainable practices including the use of alternative fuels like ammonia, hydrogen, methanol, and biofuels, as well as advancements in onboard carbon capture and wind-assisted propulsion systems. Additionally, the study assesses the impact of AI in optimizing shipping routes, predictive maintenance, and energy management, which are pivotal in reducing emissions. By integrating technological innovation with stringent regulatory compliance, this thesis highlights the challenges and transformative potential of the maritime industry's journey towards sustainability. The findings suggest that while the path to decarbonization is fraught with complexity, strategic integration of technology and policy offers a viable route to reducing the maritime sector's environmental impact and leading global efforts in combating climate change.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scheduling in a High-Mix Low-Volume Job Shop</title>
<link href="https://hdl.handle.net/1721.1/156004" rel="alternate"/>
<author>
<name>Holmes, Nicholas J.</name>
</author>
<id>https://hdl.handle.net/1721.1/156004</id>
<updated>2024-08-13T03:15:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Scheduling in a High-Mix Low-Volume Job Shop
Holmes, Nicholas J.
This research explores the intricate challenges and strategies involved in scheduling operations for high-mix low-volume manufacturing environments. It discusses the complexities of managing diverse production requirements while optimizing resource utilization and minimizing lead times. Through a thorough analysis of scheduling methodologies and case studies, the research offers valuable insights into enhancing operational efficiency and meeting customer demands in a job shop manufacturing setting.&#13;
The project is ongoing, and further research and implementation learnings have not yet been fully realized. However, the learnings and suggestions in this research can be used to achieve a more effective and efficient scheduling process in the job shop manufacturing setting.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inventory Optimization and Simulation Analysis for Supply Chain Disruption Events</title>
<link href="https://hdl.handle.net/1721.1/156003" rel="alternate"/>
<author>
<name>Kleinemolen, Ian</name>
</author>
<id>https://hdl.handle.net/1721.1/156003</id>
<updated>2024-08-13T03:45:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Inventory Optimization and Simulation Analysis for Supply Chain Disruption Events
Kleinemolen, Ian
Increasing volatility in the global supply chain following the Covid-19 pandemic has made it challenging to reliably manage inventory, especially for high-complexity medical devices. An optimization- and simulation-based inventory management model was developed to augment the decision making of supply planners in these networks. The model supports supply planners in safety stock allocation decisions by quantifying inventory cost and stockout probability risk for products with multi-stage, converging supply networks. Components of the model include iterative multi-echelon inventory optimization, Monte Carlo simulation of a custom base-stock inventory model, and cycle service level modeling. An application of the model is explored in a case study of the J&amp;J Ethicon surgical stapler supply chain. In addition, operational considerations for implementing inventory models are discussed, including data architecture, standardization, and centralization for complex supply chains.
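A minimal sketch of the cycle-service-level piece, estimating by simulation the fraction of replenishment cycles a candidate base-stock level survives; the demand parameters and lead time are invented, not the Ethicon network's values:

import numpy as np

rng = np.random.default_rng(7)

def cycle_service_level(base_stock, mean_d, sd_d, lead_time, n_cycles=50_000):
    """Simulated fraction of cycles with no stockout under a base-stock policy."""
    lt_demand = rng.normal(mean_d * lead_time, sd_d * np.sqrt(lead_time), n_cycles)
    return np.mean(lt_demand &lt;= base_stock)

# Invented demand parameters and lead time, assuming normal lead-time demand.
for s in (550, 600, 650):
    print(s, cycle_service_level(s, mean_d=100, sd_d=30, lead_time=5))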
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Roadmap for Last Mile Sustainability</title>
<link href="https://hdl.handle.net/1721.1/156002" rel="alternate"/>
<author>
<name>Vaidya, Sajiree Vivek</name>
</author>
<id>https://hdl.handle.net/1721.1/156002</id>
<updated>2024-08-13T04:01:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Data Roadmap for Last Mile Sustainability
Vaidya, Sajiree Vivek
The final leg of e-commerce deliveries, often referred to as the "last mile," carries a significant environmental impact. While carbon data analysis tools, such as carbon emission forecasting tools, offer meaningful insights into understanding and mitigating this impact, their effectiveness hinges on the quality, availability, and granularity of data. This research project proposes data recommendations for last-mile sustainability, acknowledging the nuances inherent in such initiatives. By integrating transportation-related metrics and operational data from delivery facilities, the project seeks to enhance the accuracy and availability of last-mile carbon emission forecasts.&#13;
The research consists of three primary components: data source analysis, development of a carbon emission forecasting tool, and drafting of last-mile sustainability data recommendations. We developed tools for carbon data analysis to assess the impact of last-mile activity variables and predict carbon emissions using both process- and business-level data. Through this approach, we aim to provide actionable insights to support sustainability efforts within the last-mile delivery sector.
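A minimal sketch of a forecasting model of this kind, regressing emissions on last-mile activity variables; the feature choices, coefficients, and synthetic data are assumptions for illustration only:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)

# Synthetic facility-day records: packages delivered, route km, EV share.
n = 500
X = np.column_stack([
    rng.uniform(200, 2000, n),        # packages per facility-day
    rng.uniform(50, 600, n),          # route kilometers driven
    rng.uniform(0.0, 0.6, n),         # share of deliveries by electric vehicle
])
kg_co2 = 0.02 * X[:, 0] + 0.25 * X[:, 1] * (1 - X[:, 2]) + rng.normal(0, 5, n)

model = LinearRegression().fit(X, kg_co2)
print("coefficients:", model.coef_)
print("forecast:", model.predict([[800, 300, 0.2]]))   # one new facility-day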
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Prohibition to International Recognition: Key Factors Driving the Development of California as a Premier Wine Region</title>
<link href="https://hdl.handle.net/1721.1/156001" rel="alternate"/>
<author>
<name>Mathy, Anna</name>
</author>
<id>https://hdl.handle.net/1721.1/156001</id>
<updated>2024-08-13T03:42:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Prohibition to International Recognition: Key Factors Driving the Development of California as a Premier Wine Region
Mathy, Anna
This thesis explores the transformation of California's wine industry from its struggles following Prohibition to becoming a leading global wine producer by 1976. Focusing on the period between 1933 and 1976, the study examines the critical factors that contributed to the industry's revival and success. Key elements identified include the recreation of market demand, significant technical innovations, and marketing strategies that aligned with consumer preferences. By integrating case studies of influential stakeholders with business strategy literature, particularly on the dynamics of clusters and ecosystems, the analysis demonstrates how California's wine industry emerged as a cohesive and competitive cluster. The findings highlight the broader applicability of these strategies, suggesting how similar approaches can be employed in other regions aiming for transformative growth, while highlighting the limits of replicability. This research underscores the synergy between strategic marketing, technological advancement, and cluster development in revitalizing industries on a global scale.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>US Green Hydrogen Production: Strategic Approaches to Enhancing Economic Viability and Market Development</title>
<link href="https://hdl.handle.net/1721.1/156000" rel="alternate"/>
<author>
<name>Meehan, Brandon</name>
</author>
<id>https://hdl.handle.net/1721.1/156000</id>
<updated>2024-08-13T03:59:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">US Green Hydrogen Production: Strategic Approaches to Enhancing Economic Viability and Market Development
Meehan, Brandon
As the global imperative for sustainable energy solutions intensifies, green hydrogen emerges as a potential player in a sustainable energy future. This thesis explores the viability and economic landscape of green hydrogen production within the United States. It places emphasis on the pivotal role of renewable energy credit (REC) matching criteria and strategic operational adjustments in enhancing its economic feasibility. Through a detailed examination of the effects of hourly and annual REC matching, this study illuminates the complex interplay between public policy, business strategies, and the inherent variability of renewable energy sources.&#13;
&#13;
Central to this investigation is the assessment of two primary levers that may change the underlying economics of green hydrogen: REC matching criteria, which dictate the temporal alignment between renewable energy generation and hydrogen production, and strategic electrolyzer curtailment, a novel operational strategy designed to optimize the sale of both hydrogen and electricity. The analysis utilizes robust datasets, including 4 years of hourly wind and solar resource availability in the U.S. at a 3 km resolution, 4 years of hourly nodal power prices, and infrastructural cost data.&#13;
&#13;
The findings reveal significant regional disparities in the cost-effectiveness of green hydrogen production. The middle regions of the U.S., particularly Texas, emerge as optimal locations. These disparities are further nuanced by the chosen REC matching criteria, where less stringent annual matching notably reduces regional cost disparities by accommodating the variability of solar energy production. Moreover, strategic electrolyzer curtailment emerges as a critical mechanism for cost reduction, offering substantial savings, especially in regions characterized by high electricity price volatility.&#13;
&#13;
This research contributes to the burgeoning field of green hydrogen studies by providing a comprehensive analytical framework that integrates technical, economic, and policy dimensions. It offers actionable insights for policymakers and industry stakeholders, suggesting pathways to enhance the competitiveness of green hydrogen. By meticulously balancing the imperative of sustainability with economic considerations, this thesis charts a course towards establishing green hydrogen as a significant contributor to the hydrogen market, poised to catalyze a profound shift in the U.S. decarbonization effort.
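At its core, the curtailment lever reduces to an hour-by-hour comparison of two revenue streams. A minimal sketch with invented prices, generation, and hydrogen value (the study itself used 4 years of hourly nodal prices and resource data):

import numpy as np

rng = np.random.default_rng(11)

# Invented hourly data for one week; all figures are illustrative placeholders.
hours = 168
price = rng.lognormal(mean=3.4, sigma=0.6, size=hours)   # $/MWh power price
gen_mwh = rng.uniform(0, 100, hours)                     # renewable generation
h2_value_per_mwh = 55.0    # assumed value of hydrogen made from 1 MWh of power

h2_revenue = gen_mwh * h2_value_per_mwh                  # always electrolyze
grid_revenue = gen_mwh * price                           # always sell to the grid
best = np.maximum(h2_revenue, grid_revenue).sum()        # curtail hour by hour
print("always electrolyze:", round(h2_revenue.sum()),
      "with strategic curtailment:", round(best))

The gap between the two totals grows with price volatility, which is why the findings attribute the largest curtailment savings to high-volatility regions.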
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Framework for Enhancing Decision-Making Capabilities in the Decarbonization of the Airline Industry</title>
<link href="https://hdl.handle.net/1721.1/155999" rel="alternate"/>
<author>
<name>Tsay, Allison Chang</name>
</author>
<id>https://hdl.handle.net/1721.1/155999</id>
<updated>2024-08-13T03:35:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Framework for Enhancing Decision-Making Capabilities in the Decarbonization of the Airline Industry
Tsay, Allison Chang
In 2008, the aviation industry became the first to adopt sector-wide sustainability targets, including carbon-neutral growth by 2020, a 50 percent reduction in net CO2 emissions by 2050 (relative to 2005 levels), and an annual improvement in fuel efficiency of 1.5 percent from 2009 through 2020.[6] Over a decade later, the path to net zero emissions has remained elusive, and as of 2023 the industry has not made sufficient progress towards these targets, let alone the new 2050 net-zero targets formulated in 2021. Tackling decarbonization in the aviation industry has proven to be challenging for various reasons: the industry faces obstacles such as long development cycles for commercial aircraft, the highly regulated nature of the sector, and uncertainties in sustainable technology advancements. From an airline’s perspective, planning for fleet sustainability is an extremely unstructured problem, demanding flexibility and adaptability. Adding to the complexity is the intense competition in the airline industry, a dynamic that seldom affords decision-makers the luxury of time. Effective planning for a sustainable future requires the interconnection and critical consideration of short-term, medium-term, and long-term goals. The question arises: How can we develop tools that offer adequate fidelity and granularity to enable airlines to plan for and execute on net zero goals? Sprague and Carlson (1982) define Decision Support Systems (DSS) broadly as interactive computer-based systems that help decision-makers use data and models to solve ill-structured, unstructured, or semi-structured problems. With the uncertainty of novel aircraft technologies, sustainable aviation fuel, and renewable energy sources, a DSS developed to support scenario analysis for airlines will prove valuable for airline executives and fleet planners, and will inform OEMs of short-term and long-term aircraft needs. The Cascade Climate Impact Model is Boeing’s response to increasing industry demand for clarity on strategies to reduce aviation emissions. However, the underlying model focuses on macro-level analysis. In order to be considered a useful DSS for an airline stakeholder, Cascade will need to be further developed to provide granularity and fidelity sufficient for airline fleet planning and evolution decision-making. The project involves a thorough requirements development for a new version of Cascade tailored to support sustainable airline fleet planning. We delve into the specific needs and criteria that such a system must meet to effectively guide airlines in achieving their sustainability objectives. Then, a case study on a large-capacity airline is conducted to evaluate the efficacy of the identified requirements. Furthermore, an analysis is undertaken to assess the current state of Cascade and the feasibility of implementing the requirements outlined for a sustainable airline fleet planning DSS. This evaluation aims to bridge the theoretical framework established through requirements analysis with the practical considerations of implementing such a system, with a specific focus on Boeing’s Cascade model. Through comparison of multiple fleet planning scenarios, the airline in question can remove up to 6 MtCO2 of future emissions by 2030; however, fleet evolution alone will not guarantee net zero emissions by 2050. Through analysis of current MOUs and SAF purchases, the airline is not on track to meet SAF uptake goals by 2030 and will need to reevaluate the current status of SAF purchase volumes with suppliers.
Results from the case study indicate the capability of the newly developed fleet planning workflows for Cascade to deliver actionable insights to airline decision-makers in their path towards decarbonization.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Democratizing Performance: Impact of The Data Revolution on Recreational Running</title>
<link href="https://hdl.handle.net/1721.1/155998" rel="alternate"/>
<author>
<name>Allouch, Maxime</name>
</author>
<id>https://hdl.handle.net/1721.1/155998</id>
<updated>2024-08-13T03:12:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Democratizing Performance: Impact of The Data Revolution on Recreational Running
Allouch, Maxime
In recent years, recreational running has experienced significant growth, with millions of individuals participating in the sport worldwide. This growth has highlighted the need for accessible and effective training tools and methodologies tailored specifically to recreational runners. While elite athletes benefit from high-end performance labs, personalized coaching, and advanced training camps, these resources are often too costly and specialized to be scalable for the average runner. This thesis investigates how recent innovations in wearable devices and data science can democratize access to such elite-level resources. Employing a critical analysis, this study examines the evolution, accuracy, and real-world application of such technologies through case studies and a comprehensive review of existing literature. Additionally, the thesis discusses future technological directions, exploring potential advancements and their implications for the recreational running community. We highlight the urgent need for rigorous and independent research to validate the efficacy of these innovations. It is crucial to quantify their impact on running performance and injury prevention, challenging the often overstated claims found in marketing materials. This research could enable runners to make more informed decisions about their training methods. By making high-quality training more accessible, we aim to improve both the performance and experience of runners at all levels.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Secrets of the Aluminati: Bottleneck Assessment within an Aluminum Rolling Mill</title>
<link href="https://hdl.handle.net/1721.1/155997" rel="alternate"/>
<author>
<name>Long, Evan C.</name>
</author>
<id>https://hdl.handle.net/1721.1/155997</id>
<updated>2024-08-13T04:01:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Secrets of the Aluminati: Bottleneck Assessment within an Aluminum Rolling Mill
Long, Evan C.
This work demonstrates how heuristic-based capacity estimation techniques were used to determine the utilization of the preheat process of Commonwealth Rolled Products (CRP), an aluminum rolling mill. CRP wished to evaluate its processes to determine whether its present capacity was compatible with its long-term strategic plan. The preheat process in particular presented a challenge because of its parallel work cells, interdependent finish times, and variable runtimes. This analysis was used to determine whether the present preheat plant could support the future state volume and product mix.&#13;
We will summarize the CRP process and the business circumstances before moving to the modeling approach that was used to solve this problem without relying on a high-fidelity simulation. Given the outputs of that model, we will conclude with next steps for CRP, including the operational levers used to ease the capacity situation following the capital decision.&#13;
Note that in order to protect company confidential information, sensitive values and information are masked in this document.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identification of The Steel Decarbonization Options for Different Regions</title>
<link href="https://hdl.handle.net/1721.1/155996" rel="alternate"/>
<author>
<name>Mai, Chao-Lun</name>
</author>
<id>https://hdl.handle.net/1721.1/155996</id>
<updated>2024-08-13T03:50:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Identification of The Steel Decarbonization Options for Different Regions
Mai, Chao-Lun
Iron and steel manufacturing stands as a leading contributor to global CO2 emissions and ranks as the second-largest energy consumer within heavy industries. Over the last decade, this industry alone has accounted for over 7% of global greenhouse gas emissions. Consequently, there is an urgent imperative to identify practical pathways for substantial decarbonization. This research endeavors to identify such pathways through comprehensive modeling. We evaluate the impact of technology replacement, fuel switching, and carbon capture and storage (CCS) on energy demand, costs, and emissions in crude steel production. The analysis is underpinned by two fundamental approaches: Techno-economic Analysis (TEA) and Life Cycle Analysis (LCA). Technology replacement explores alternatives such as state-of-the-art blast furnace-basic oxygen furnace (BF-BOF-SOA) and direct reduced iron with electric arc furnace (DRI-EAF) to replace the current blast furnace-basic oxygen furnace (BF-BOF) based on iron ores, as well as state-of-the-art electric arc furnace (EAFSOA) to replace the current electric arc furnace (EAF) based on recycled steels; fuel switching involves renewable electricity, renewable natural gas, biochar, and hydrogen; CCS options focus on mono-ethanol-amine (MEA) for BF-BOF based methods. Through this comprehensive analysis, the research aims to illuminate the most pragmatic and region-specific strategies for the deep decarbonization of the steel industry, making a critical contribution to addressing the urgent global need for sustainable steel production.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Product SKU Analysis, Rationalization, and Optimization</title>
<link href="https://hdl.handle.net/1721.1/155995" rel="alternate"/>
<author>
<name>Hatteberg, Heidi</name>
</author>
<id>https://hdl.handle.net/1721.1/155995</id>
<updated>2024-08-13T03:42:23Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Product SKU Analysis, Rationalization, and Optimization
Hatteberg, Heidi
Optimizing a portfolio product mix to balance customer demand and company strategy is an ongoing challenge for various industries, often resulting in product proliferation. LFM Capital, a private equity firm, acquired IronCraft, a tractor attachment company in Eastern Tennessee, in 2021. Since its founding in 2014, IronCraft has experienced rapid growth and change, which has challenged it to meet high market demand and resulted in a very large product portfolio of roughly 1,000,000 variations in Stock Keeping Units (SKUs) of both manufactured and sourced products. In addition to the burden of managing such a large portfolio, the current mix adds variability and complexity to its manufacturing operations. This thesis employs IronCraft as a practical example of a SKU rationalization project using a deterministic model for strategic decision making, which resulted in a “Standard Offerings List” of just 230 product offerings (130 manufactured by IronCraft and 100 sourced through its partner company, CID). When modeled alongside future orders for the 2024 pre-season, this list fulfilled 75% of those orders, proving that the pruned product mix can meet demand (assuming upselling of new products). These offerings were then a focus for production improvement and Lean/5S efforts to reduce safety hazards, reduce setup/changeover times by 80%, improve cycle times for assembly by 10-15%, establish a metric tracking system, and derive quality metrics from the existing system. All tools developed and implemented through this thesis were designed to drive productivity, growth, and integration within IronCraft.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data Driven Approach to Uncovering Energy Consumption&#13;
Reduction Opportunities Within Industrial Operations</title>
<link href="https://hdl.handle.net/1721.1/155994" rel="alternate"/>
<author>
<name>Correa Núñez, Juan Fernando</name>
</author>
<id>https://hdl.handle.net/1721.1/155994</id>
<updated>2024-08-13T03:11:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Data Driven Approach to Uncovering Energy Consumption&#13;
Reduction Opportunities Within Industrial Operations
Correa Núñez, Juan Fernando
Rising operating costs and environmental pressures are compelling industrial companies to reduce energy consumption without affecting output. Although various tools to identify energy reduction opportunities exist, they often fall short, being overly theoretical, too generic, or primarily focused on capital-intensive initiatives. Consequently, companies frequently end up relying on energy audits and benchmarks that yield minimal practical reductions. This thesis introduces a methodology designed to identify and implement operational changes that lead to energy reductions in industrial settings. By integrating data-driven analytics with continuous improvement principles, this methodology is able to uncover tangible operational improvements without substantial capital expenditure. Central to the proposed methodology is the identification of the core physical and operational principles of the system being analyzed to then develop a theoretical ideal operation against which to compare the current operation. This thesis also aims to describe the application of this framework at the pre-heating furnaces of Aluminum Duffel, an aluminum rolling mill in Duffel, Belgium, where it proved successful in reducing energy consumption by 23% within six months.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Wages and Employment: Do Minimum Wages Affect Management Practices?</title>
<link href="https://hdl.handle.net/1721.1/155993" rel="alternate"/>
<author>
<name>Tong, Di</name>
</author>
<id>https://hdl.handle.net/1721.1/155993</id>
<updated>2024-08-13T03:16:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Beyond Wages and Employment: Do Minimum Wages Affect Management Practices?
Tong, Di
Extensive research has examined the impact of minimum wages on employment. Yet less explored is whether and how these mandatory wage increases affect the broader spectrum of management practices and job quality. Compensating differentials theory posits that low-wage employers will diminish non-wage amenities to counteract the added labor costs. Conversely, the high-road strategy literature anticipates that firms will enhance crucial aspects of job quality to optimize worker productivity. To assess these contrasting hypotheses, I used matched U.S. employee-employer job reviews and ratings to measure management quality in both general terms and across three specific dimensions: schedule quality, investment in employees (training, career opportunity, and relational investment), and employee input (autonomy and voice). I conducted difference-in-differences analyses based on multiple state-mandated minimum wage hikes spanning 2015-2021. The analyses show that as firms comply with mandates to raise wages, they, on average, neither compromise job quality in non-wage aspects nor undergo a thorough management system upgrade in the high-road direction. These findings align with organizational inertia theories and provide evidence of the barriers to high-road diffusion. Specifically, economic and policy pressure can be insufficient to cause strategic adoption of high-road employment systems. This study carries significant policy implications as the first comprehensive evaluation of minimum wage mandates on low-wage job quality. On one hand, it alleviates concerns regarding a negative spillover effect of mandatory wage increases on overall job quality. On the other hand, it highlights the limitations of minimum wage mandates in fostering systematic enhancements in working conditions beyond mere wage adjustments.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Integrated Continuous Biomanufacturing Throughput: Resource Constraints and Process Scheduling</title>
<link href="https://hdl.handle.net/1721.1/155992" rel="alternate"/>
<author>
<name>Haddad, Ana</name>
</author>
<id>https://hdl.handle.net/1721.1/155992</id>
<updated>2024-08-13T03:27:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Optimizing Integrated Continuous Biomanufacturing Throughput: Resource Constraints and Process Scheduling
Haddad, Ana
The purpose of this thesis is to understand the available capacity of a biomanufacturing facility with respect to a product of interest. Further, the thesis aims to find opportunities to increase the system’s throughput and to determine whether current labor resources are sufficient to enable production at this level. To address these questions, the system will first be understood at a high level with preliminary analysis of its available capacity and of resource capacity utilization. A more robust available capacity analysis will then be performed by accounting for resource constraints. To this end, makespan minimization models will be created to evaluate optimal process scheduling given resource constraints. The analysis results showed that, at this time, labor is not a constraint to the system’s available capacity and that improvements to process scheduling can increase the system’s throughput at current labor levels. Finally, the thesis will evaluate new operating strategies, based on the newfound system understanding, which strive to decrease the volatility of system throughput. The methods used in this thesis aim to cut through daily variability to understand fundamental production requirements. While this study was performed at a biomanufacturing facility, the methods are applicable to a wide range of industries.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Box Jumping: Portfolio Recompositions to Achieve Higher Morningstar Ratings</title>
<link href="https://hdl.handle.net/1721.1/155991" rel="alternate"/>
<author>
<name>Kim, David Sunghyo</name>
</author>
<id>https://hdl.handle.net/1721.1/155991</id>
<updated>2024-08-13T03:24:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Box Jumping: Portfolio Recompositions to Achieve Higher Morningstar Ratings
Kim, David Sunghyo
I show that actively managed mutual funds often pursue higher Morningstar ratings at the expense of investment returns. Funds achieve higher ratings by changing their holdings to induce Morningstar to reclassify them into size/value style boxes with worse average performance. This practice, which I label 'box jumping', sacrifices fund performance but nonetheless attracts large inflows of capital because funds are rated based on their relative performance within style boxes. Box jumping funds also take advantage by charging higher fees, which investors pay despite the ratings upgrade reversing on average within three years. These patterns emerge after 2002, when Morningstar ratings became based on relative performance within style boxes, and are predictably absent beforehand. I also show that pervasive box jumping creates negative spillover effects to other funds. Together, my findings highlight portfolio recomposition as a novel lever that funds employ to manipulate Morningstar ratings, and show that funds box jump despite sacrificing returns because investors fixate on ratings when allocating capital.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Beyond Evergrande: Rethinking Real Estate Market in China and the USA for Long Term Growth and Sustainability</title>
<link href="https://hdl.handle.net/1721.1/155990" rel="alternate"/>
<author>
<name>Yang, Jing</name>
</author>
<id>https://hdl.handle.net/1721.1/155990</id>
<updated>2024-08-13T03:36:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Beyond Evergrande: Rethinking Real Estate Market in China and the USA for Long Term Growth and Sustainability
Yang, Jing
The Evergrande crisis has exposed vulnerabilities in China's real estate market, prompting a critical reassessment of financing models, investment strategies, and regulatory frameworks. This thesis conducts a comparative analysis of the real estate markets in China and the United States, drawing insights from the Evergrande debacle and the US subprime mortgage crisis. By examining the evolution of financing mechanisms, investment approaches, land-use policies, and socio-economic factors influencing demand and supply, the research offers a holistic understanding of challenges and opportunities. Through synthesizing lessons from crises in both markets, the study provides recommendations for stakeholders, addressing financing strategies, regulatory reforms, risk management practices, and the pursuit of long-term sustainability. The findings contribute to the discourse on sustainable real estate development, offering valuable guidance for informed decision-making and resilient strategies amidst evolving market conditions and future challenges.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Analytical Framework for Planogram Portfolio Optimization</title>
<link href="https://hdl.handle.net/1721.1/155989" rel="alternate"/>
<author>
<name>Habel, Mathew</name>
</author>
<id>https://hdl.handle.net/1721.1/155989</id>
<updated>2024-08-13T03:02:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Analytical Framework for Planogram Portfolio Optimization
Habel, Mathew
While considerable research has been conducted into the construction of optimal planograms (POGs) within a given store, the existing approaches have not been rigorously tested at scale across a network chain of retail stores. Moreover, current industry practices to design planograms are often ad hoc and anecdotal, and lead to proliferation of many different planograms that add complexity but not necessarily value. This thesis proposes an analytical framework for an end-to-end optimization of portfolios of planograms within Target, focusing on the optimal trade-off between planogram-store personalization and standardization. The study utilizes retail data from Target to develop mathematical frameworks partly based on machine learning and optimization techniques to address the challenge of managing planograms across Target’s national network chain of stores.&#13;
A four phase approach is proposed. Phase 1 develops a descriptive mathematical modeling framework that informs the identification of product categories for which reduction of POG design proliferation has promising potential. Phase 2 develops machine learning models to estimate revenue generation for any given POG design and Target store combination. Phase 3 estimates the performance of novel POG deployments in stores across Target’s network chain. Lastly, Phase 4 utilizes a knapsack formulation to find the optimal number of planograms within a category as measured by the expected revenue generation minus the planogram management costs.&#13;
This approach was assessed by applying it to the category of spice products on a 6-month time horizon, yielding an estimated reduction in operational costs of 46%, which comes directly from reducing the total number of planogram designs active within Target’s store network. Moreover, the estimated revenue of the new planogram portfolio shows a 3% improvement over the existing portfolio, obtained by replacing the planogram designs in several stores with more favorable designs that are assessed to generate higher sales. These results suggest the optimization approach can yield meaningful operational and cost savings across categories in the organization and improve the operating margin of Target.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning and Stochastic Simulation for Inventory Management</title>
<link href="https://hdl.handle.net/1721.1/155988" rel="alternate"/>
<author>
<name>Olaleye, Ololade</name>
</author>
<id>https://hdl.handle.net/1721.1/155988</id>
<updated>2024-08-13T03:41:18Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Machine Learning and Stochastic Simulation for Inventory Management
Olaleye, Ololade
This thesis explores the use of advanced data-driven techniques for dimensioning safety stock and optimizing inventory in a supply chain. The thesis is based on data and insights for raw material inventory at Amgen, a biotech company. Resilient inventory management is important in the biopharma and biotech sector as the repercussions of a drug shortage are dire. However, the complexity of biomanufacturing processes creates significant variability and uncertainty around lead times and demand. Amgen currently holds high raw material inventories across thousands of materials to mitigate risks of stockouts that could delay production. However, these high-inventory policies have increased holding costs and tied up working capital. To address this challenge and find a sustainable method for managing raw materials in the company and, by extension, other stages of production, a novel methodology is developed. Machine learning models such as CatBoost, Extreme Gradient Boosting (XGBoost), and Random Forest are proposed to forecast lead times and demand. The models are trained on datasets of 10,000+ materials, incorporating unique patterns based on factors like suppliers’ historical delivery performance, historical demand patterns, and material characteristics. A segmentation framework is also developed to properly allocate service levels based on risk tolerance for different categories of materials. Stochastic simulation then applies the learned predictive distributions to quantify optimal safety stock levels under uncertainty. This considers desired service levels, holding costs, risk tolerance, cost-risk tradeoffs, and potential disruptions in what-if scenarios to support resilience. The methodology is validated on sample materials with both short and long lead times. Results indicate potential inventory reductions of over 25% while still preventing stockouts, enabling multimillion-dollar savings in procurement and holding costs. A phased implementation plan is also proposed to ensure a smooth transition to this new data-driven approach within the organisation, taking change management into consideration. This solution fuses predictive analytics with simulation and optimization to transform safety stock calculation from a cost burden to a competitive advantage. The dynamic data-driven framework significantly enhances supply chain resilience and efficiency in the vitally important biopharmaceutical industry, where patient outcomes are at stake. The methodologies developed could be applied across various production stages and tailored to other sectors.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Breaking the Mold: Using Automated Design to Accelerate Composites Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/155987" rel="alternate"/>
<author>
<name>Sweet, Mark</name>
</author>
<id>https://hdl.handle.net/1721.1/155987</id>
<updated>2024-08-13T03:31:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Breaking the Mold: Using Automated Design to Accelerate Composites Manufacturing
Sweet, Mark
Re:Build Manufacturing’s member company, Composite Resources (CR), is a thermoset composites manufacturer primarily serving aerospace and defense customers. Composites manufacturing is an industry primarily differentiated on quality and lead times. Customers are innovative and value rapid prototyping capabilities. CR is challenged by the difficulty of finding experienced labor and a high-mix, low-volume workflow with frequent, non-recurring, low-value-added engineering work. In pursuing aggressive growth goals, CR must decouple revenue growth from headcount. &#13;
&#13;
A significant component of lead time is the engineering design and toolpath generation for the molds used to manufacture the composite parts. This research seeks to automate mold design and toolpath generation, allowing CR to eliminate the labor bottleneck and establish short lead times as a competitive advantage. &#13;
&#13;
This research studied existing manual mold design and toolpath generation processes to distill the key engineering decisions. A tiered system was developed to characterize parts suitable for automation. Algorithms were developed that automated mold design and toolpath generation for 12% of CR’s historical parts. Automation is projected to decrease engineering mold design times by 87% and overall lead times by 33% for in-scope parts.&#13;
&#13;
Several areas for algorithm improvement are explored to increase the impact of design automation and further reduce lead times. Use cases for design automation are more broadly considered, and the implications for small manufacturers are explored.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Environmental Impact Assessment of 3D Printed&#13;
Medical Devices</title>
<link href="https://hdl.handle.net/1721.1/155986" rel="alternate"/>
<author>
<name>Gerzeghier, Abraham</name>
</author>
<id>https://hdl.handle.net/1721.1/155986</id>
<updated>2024-08-13T03:41:40Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Environmental Impact Assessment of 3D Printed&#13;
Medical Devices
Gerzeghier, Abraham
As the push to incorporate sustainable practices has found widespread adoption in the corporate world, much of the responsibility has fallen to those industries and companies that manufacture physical products. Stryker has set corporate-wide carbon-neutral and 100% renewable energy goals, requiring adjustments to current processes and the incorporation of sustainability practices into processes under development. To that end, this thesis assesses the environmental impact of additive technologies at one of Stryker’s manufacturing facilities in the form of a numerical metric to give leadership a new way to incorporate environmental consequences into their decision-making. By mapping out the additive manufacturing processes at the company’s primary facility and incorporating a tool to model these processes, two main metrics were produced for three additive technologies versus traditional milling. First, the carbon footprint per part due to the raw material, production processes, and the consumed inputs was quantified. Second, the energy consumed by each manufacturing platform, from raw material extraction to finishing, was measured. In addition, a separate tool was developed to streamline the use of the model and increase adoption by the additive team. A case study was conducted using this tool on one of the company’s products and the results were compared to an external consultancy’s analysis. The discrepancies between the two analyses allow for future work in further customizing the tool’s parameters to mirror the specific conditions of the medical device facility.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling System Efficiency in Mixed-Model Assembly Lines</title>
<link href="https://hdl.handle.net/1721.1/155985" rel="alternate"/>
<author>
<name>Hoffman, Cameron</name>
</author>
<id>https://hdl.handle.net/1721.1/155985</id>
<updated>2024-08-13T03:48:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling System Efficiency in Mixed-Model Assembly Lines
Hoffman, Cameron
This thesis details the development of a system efficiency model at the Nissan Smyrna Vehicle Assembly Plant. System efficiency at Nissan is one measure of performance used to allocate new business to plants, and in pursuit of this new business, leaders at the Smyrna Plant maintain a continuous improvement culture where teams are regularly engaged in plant production improvement efforts.&#13;
&#13;
Production improvements at the Smyrna Plant typically focus on fault reduction and line balancing. These efforts leverage either vehicle or process data, but none incorporate both, as no combined data system exists. One can overcome this disconnect by generating an integrated model that links the production sequence with assembly jobs using vehicle model and feature relationships. What results is a repository of work content on produced vehicles containing real and ideal production times, which can be used to measure system efficiency. Creation of such a system greatly enhances existing capabilities to identify bottlenecks in the plant, to improve system health, and to optimize the production sequence.&#13;
&#13;
The completed research demonstrates the modeling capability to integrate product and process data and the use cases of such an integration in enhancing production improvements. The research also demonstrates how internal innovation can happen through the novel use of existing resources to unlock new capabilities. The recommendations focus on implementing the integrated system into stakeholder workflows, creating new data architectures to simplify data management and model development, and re-thinking plant performance models to incorporate current production data.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The study of ESG strategies on the development and financial performance of traditional energy enterprises using System Dynamics - A case study on one oil and gas company</title>
<link href="https://hdl.handle.net/1721.1/155984" rel="alternate"/>
<author>
<name>Wang, Wei</name>
</author>
<id>https://hdl.handle.net/1721.1/155984</id>
<updated>2024-08-13T03:27:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The study of ESG strategies  on the development and financial performance  of traditional energy enterprises using System Dynamics - A case study on one oil and gas company
Wang, Wei
This thesis investigates the impact of Environmental, Social, and Governance (ESG) strategies on the development and financial performance of traditional energy companies, specifically focusing on the oil and gas industry. This study constructs a comprehensive simulation framework using System Dynamics modeling to analyze the dynamics of ESG strategies and company operations. The research applies this model to a case study of BP PLC, a major player in the oil and gas sector, to evaluate the effectiveness of ESG strategies on a realistic scale.&#13;
The model demonstrates that strategic implementation of ESG initiatives leads to improved environmental performance, operational efficiency, and financial performance. The findings suggest that companies and policymakers should take firm and prompt action, investing in production efficiency and renewable energy to mitigate future regulatory risk and market uncertainty. This thesis contributes a perspective on achieving sustainability goals while maintaining profitability. It also provides insights into the challenges and opportunities faced by these companies as they navigate the transition towards more sustainable landscapes.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Clustering of Similar Incident Tickets Using Natural Language Processing</title>
<link href="https://hdl.handle.net/1721.1/155983" rel="alternate"/>
<author>
<name>Chen, Jackie</name>
</author>
<id>https://hdl.handle.net/1721.1/155983</id>
<updated>2024-08-13T03:25:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Clustering of Similar Incident Tickets Using Natural Language Processing
Chen, Jackie
As businesses increasingly rely on digital tools for operational efficiency and value creation, Software Asset Management (SAM) becomes an important business practice. This thesis explores the use of natural language processing (NLP) and clustering algorithms to identify recurring issues affecting software applications, with the objectives of assessing the technical health of applications and identifying opportunities to address software issues that repeatedly plague users. Using a dataset of incident tickets from a business unit of a pharmaceutical company, various machine learning models were designed and tested to identify recurring issues affecting the business' applications. Through a dashboard that visualizes the outputs of the models, the business is provided with insights into recurring issues affecting its digital tools. As validated through user feedback and visual inspection, the model outputs indicate promising results in the clustering of incident tickets, offering valuable insights to users to understand and address recurrent software problems. However, it is important to acknowledge the inherent challenges of unsupervised machine learning. While the results can help enhance business operations, caution is advised regarding the implications for users and the business when models produce unexpected results. This project is another example of the balance between leveraging machine learning for problem-solving and understanding the limitations of the models.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preemptive variation reduction in biologic drug substance manufacturing</title>
<link href="https://hdl.handle.net/1721.1/155982" rel="alternate"/>
<author>
<name>Güereca Valdivia, Ismael</name>
</author>
<id>https://hdl.handle.net/1721.1/155982</id>
<updated>2024-08-13T03:35:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Preemptive variation reduction in biologic drug substance manufacturing
Güereca Valdivia, Ismael
Biomanufacturing processes are characterized by their high natural complexity and variability, which present significant challenges to achieving consistent product quality and operational efficiency. This thesis proposes that integrating digital twins and soft sensors into these processes can substantially improve decision-making workflows. By simulating the biomanufacturing process and enabling real-time monitoring and estimation of critical parameters, organizations can reduce variation in their manufacturing processes by identifying, predicting, and mitigating emerging issues before they become disruptive. To validate this hypothesis, a digital twin for a generic Integrated Continuous Bioreactor (ICB) operation was developed based on first principles. Additionally, soft sensors were used for the real-time estimation of biomass concentration (a critical parameter in mammalian cell culture). Combining mechanistic and data-driven modeling approaches and leveraging historical production data from Sanofi’s operations, both tools were built and tested, demonstrating their effectiveness in real-world scenarios. The results show the potential of these technologies in improving process monitoring and control. On the one hand, the digital twin of the ICB operation allowed for the simulation of various scenarios, which presents the opportunity to adjust the parameters to ensure adequate operating conditions. On the other hand, soft sensors implemented using multiple linear regression and Seasonal Autoregressive Integrated Moving-Average (SARIMAX) models accomplished precise real-time estimation of biomass concentration. Both results support the optimization of large-scale biomanufacturing processes by highlighting the potential of digital twins and soft sensors in reducing variation and driving continuous improvement.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reimagining the Impact of Large Financial Deals: Toward a Holistic Evaluation of Economic and Societal Consequences</title>
<link href="https://hdl.handle.net/1721.1/155981" rel="alternate"/>
<author>
<name>Dupont, Apolline</name>
</author>
<id>https://hdl.handle.net/1721.1/155981</id>
<updated>2024-08-13T03:07:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Reimagining the Impact of Large Financial Deals: Toward a Holistic Evaluation of Economic and Societal Consequences
Dupont, Apolline
This thesis reimagines the assessment of large financial deals, such as mergers and acquisitions (M&amp;A), by proposing a holistic evaluation framework that considers economic, societal, and environmental consequences. Traditionally, these deals have been assessed primarily based on financial metrics, overlooking their broader impact on stakeholders and sustainability.&#13;
&#13;
Through a mixed-method approach combining literature review and qualitative interviews with professionals, this research develops a theoretical framework integrating multiple dimensions into the analysis of M&amp;A deals. The framework is applied to a case study of the contentious merger between French utility giants Veolia and Suez, highlighting the complexities and trade-offs involved in evaluating deals in the water and waste management sector.&#13;
&#13;
The findings underscore the importance of comprehensive impact assessments, robust stakeholder engagement, and long-term value creation strategies. The Veolia Suez case reveals the need for effective risk management and the potential for synergies and unintended consequences in large financial deals.&#13;
Ultimately, this thesis argues that a holistic approach to impact assessment enables informed decision-making, promoting sustainable growth and safeguarding societal and environmental interests. The proposed framework offers a roadmap for enhancing practices and fostering a more responsible approach to financial transactions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating New Business Opportunities for Interregional Transmission</title>
<link href="https://hdl.handle.net/1721.1/155980" rel="alternate"/>
<author>
<name>Okoye, Don</name>
</author>
<id>https://hdl.handle.net/1721.1/155980</id>
<updated>2024-08-13T03:55:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evaluating New Business Opportunities for Interregional Transmission
Okoye, Don
The merchant transmission investment model is conducive to addressing interregional and long-range transmission needs as it provides a pathway to circumvent localized regional and state transmission planning processes and focus directly on interregional development. Furthermore, merchant transmission investments in accordance with comprehensive (multi-value) benefits planning provide a favorable benefit-to-cost ratio for transmission customers and support positive returns for investors. However, evaluating the comprehensive benefits of proposed transmission projects is computationally expensive and infeasible to execute for early-stage, exploratory analysis of multiple projects. Therefore, this thesis focuses on the development and use of a computationally reduced transmission business evaluation tool that heuristically evaluates critical components of comprehensive benefits and assesses merchant-based cost recovery viability of five interregional and long-range transmission projects on a forward-looking basis.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Site Selection, Vendor Evaluation, and Deployment of Nuclear Microreactors in Remote Mining Operations</title>
<link href="https://hdl.handle.net/1721.1/155979" rel="alternate"/>
<author>
<name>Chew Ming Chang, Matthew Dominic</name>
</author>
<id>https://hdl.handle.net/1721.1/155979</id>
<updated>2024-08-13T03:17:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Site Selection, Vendor Evaluation, and Deployment of Nuclear Microreactors in Remote Mining Operations
Chew Ming Chang, Matthew Dominic
This thesis presents a comprehensive framework for site selection and vendor selection for deploying nuclear microreactors in remote mining areas to facilitate decarbonization efforts. The methodology involves utilizing data gathered through various internal sources and employing software tools such as HOMER Pro Grid Optimization software and Python for analysis. The framework aims to optimize settings based on economic, carbon emissions, and capacity considerations by simulating various energy generation and storage components. The study also incorporates data from publicly available sources on micromodular reactor (MMR) companies to create MMR models for optimization calculations. Through a detailed analysis of simulated data and questionnaire scenarios, the framework evaluates factors such as power requirements, high temperature processes, charging stations, baseload size, peak electricity demand, peaking factor, proximity to town, and rail infrastructure. The proposed framework offers a systematic approach to identifying suitable pilot sites for MMRs in remote mining locations.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative AI in Higher Education Academic Assignments: Policy Implications from a Systematic Review of Student and Teacher Perceptions</title>
<link href="https://hdl.handle.net/1721.1/155977" rel="alternate"/>
<author>
<name>Li, Zixuan</name>
</author>
<id>https://hdl.handle.net/1721.1/155977</id>
<updated>2024-08-13T03:34:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Generative AI in Higher Education Academic Assignments: Policy Implications from a Systematic Review of Student and Teacher Perceptions
Li, Zixuan
This study systematically investigates students’ and teachers’ perceptions of using Generative AI in higher education assignments. Through a comprehensive systematic review of 37 papers, the study identifies common perspectives, differences, major ethical concerns, and the need for policy development and regulation. The systematic review reveals the potential benefits of AI tools, including improved efficiency and personalized learning experiences. However, it also highlights significant challenges and ethical concerns, such as the risk of academic dishonesty, over-reliance on technology, and the need for transparency in data processing and privacy. &#13;
&#13;
Additionally, a policy review is conducted to assess the extent to which policies at international, national, and institutional levels address the major ethical concerns identified in the systematic review. The study finds notable gaps between the significant ethical concerns perceived by students and teachers and the existing rules and guidance available. The UNESCO guidance provides valuable recommendations, but national and institutional policies need further development to effectively address the unique challenges posed by AI in educational settings.&#13;
&#13;
The study underscores the importance of collaboration, capacity building, and ongoing evaluation in navigating the challenges of integrating Generative AI in higher education. Policymakers and educational institutions should prioritize providing training and support for educators, fostering a culture of academic integrity, and promoting the development of AI literacy skills. Future research should address the limitations identified in this review, such as conducting studies with larger, more diverse samples and employing longitudinal designs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hong Kong's Transformative Journey Under 'One Country, Two Systems': Processes, Trends and Reflections at the Midpoint</title>
<link href="https://hdl.handle.net/1721.1/155976" rel="alternate"/>
<author>
<name>Zhu, Rui</name>
</author>
<id>https://hdl.handle.net/1721.1/155976</id>
<updated>2024-08-13T03:09:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Hong Kong's Transformative Journey Under 'One Country, Two Systems': Processes, Trends and Reflections at the Midpoint
Zhu, Rui
Since the 1997 handover, Hong Kong has reached the midpoint of its unprecedented constitutional experiment under the 'One Country, Two Systems' principle. In the twenty-six years since Hong Kong's return to China, the region has achieved remarkable success within this unique political framework. From the 1960s through today, Hong Kong has transformed into one of the wealthiest, most economically developed regions with the highest living standards worldwide. As Asia's financial center and a global hub for business, shipping, and trade, Hong Kong has made significant contributions to Asia's development and progress. However, despite its rise as a global financial powerhouse, Hong Kong faces numerous challenges. These include limited industrial diversification, a lack of technological innovation, the gradual erosion of civil liberties, and diminishing geopolitical neutrality amidst the escalating U.S.-China rivalry. The complex interplay of these factors poses significant risks to Hong Kong's long-term prosperity and stability.&#13;
&#13;
This thesis chronicles Hong Kong's transformative journey since 1997, examining key development trends and its current predicaments. It aims to capture the insights and lessons learned, providing a basis for thoughtful consideration of how to enhance Hong Kong's unique strengths and characteristics, address its vulnerabilities, and thereby launch a new phase in its developmental journey.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling Metal Additive Manufacturing from R&amp;D to Production</title>
<link href="https://hdl.handle.net/1721.1/155975" rel="alternate"/>
<author>
<name>Weißbach, Reimar</name>
</author>
<id>https://hdl.handle.net/1721.1/155975</id>
<updated>2024-08-13T03:28:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Scaling Metal Additive Manufacturing from R&amp;D to Production
Weißbach, Reimar
Metal additive manufacturing (AM) has been successfully commercialized, yet widespread adoption has not been achieved so far. This is partly because companies struggle to operate AM factories profitably and efficiently at industrial scale.&#13;
This thesis proposes a data strategy to address this challenge and support the rapid growth and successful operation of an additive manufacturing factory – from R&amp;D to production. The central idea is to connect relevant data to a build, which is proposed as the basic unit of manufacturing in AM. Connecting commercial data, information about geometry, processing, materials, post-processing, and testing to a build makes it possible to gain a system-level understanding while also being able to dive into details where needed.&#13;
After implementation, the framework can be used to (i) qualify processes and certify materials, (ii) improve quoting quality and efficiency, (iii) support engineering and R&amp;D, (iv) derive critical operations KPIs such as revenue per build, builds per week, and days per build, which can be used for budgeting and capacity planning as well as business control, (v) make strategic decisions on capital expenses and headcount planning, and (vi) ensure traceability of materials and parts. Together, these applications support decision-makers as well as commercial and technical staff in both their strategic and day-to-day work.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating the Integration of Low-Volume, High-Mix Production Organizations</title>
<link href="https://hdl.handle.net/1721.1/155974" rel="alternate"/>
<author>
<name>Chacko, Priya</name>
</author>
<id>https://hdl.handle.net/1721.1/155974</id>
<updated>2024-08-13T03:50:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Accelerating the Integration of Low-Volume, High-Mix Production Organizations
Chacko, Priya
In private equity, the buy-and-build strategy may be used to perform horizontal acquisitions of targets that operate in the same industry and interact with similar customers and suppliers. This strategy increases the buyer’s market share in the field, diversifies its customer base, provides opportunities for the realization of synergies, and may even add new capabilities to its offerings. The consolidated platform company can then achieve a value that is significantly higher compared to that of the individual portfolio companies alone. This increased value from the combination of portfolio companies, however, is dependent on their successful integration into the platform company.&#13;
&#13;
This research investigates the unique challenges of aligning and integrating two independent production organizations that operate in the low-volume, high-mix (LVHM) metal fabrication sector. The research strategy used in this thesis begins with defining objectives and establishing the initial states of the portfolio companies. Then, a gap analysis and strategic benchmarking are performed to identify integration opportunities. Finally, two proposals to accelerate integration in operations are provided: the first increases automation in production data management, and the second introduces a method to allocate indirect costs and better understand total costs during billing in the quote creation process.&#13;
&#13;
Though time and resource constraints prevented the proposed recommendations from being implemented during this research period, these recommendations have the potential for substantial positive impact on both platform and portfolio company operations. While the proposals are tailored to the organizations studied in this research, the broader concepts on which they are based suggest wider applicability to similar LVHM production environments. This thesis offers a framework for organizations to assess their initial and goal states, define objectives, and develop strategies to accelerate integration.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance of the Private Equity industry during depressed Macroeconomic conditions</title>
<link href="https://hdl.handle.net/1721.1/155973" rel="alternate"/>
<author>
<name>Ginolhac, Gaspard</name>
</author>
<id>https://hdl.handle.net/1721.1/155973</id>
<updated>2024-08-13T03:11:08Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Performance of the Private Equity industry during depressed Macroeconomic conditions
Ginolhac, Gaspard
This thesis aims to understand the performance of private equity funds during economic crises and unfavorable macroeconomic conditions. To introduce the topic, I first review how private equity funds create value and how we can assess the performance of this industry. Then, I focus my analysis on the behavior of the industry during past economic crises to draw similarities with the current situation. Finally, using a large sample of private equity funds, I conduct my own assessment of the industry's performance and dig into what makes PE funds successful.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Semi-Automatic Nesting and Lean Problem Solving in a High-Mix, Low-Volume Production Environment</title>
<link href="https://hdl.handle.net/1721.1/155972" rel="alternate"/>
<author>
<name>Davis, G. Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/155972</id>
<updated>2024-08-13T03:44:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Semi-Automatic Nesting and Lean Problem Solving in a High-Mix, Low-Volume Production Environment
Davis, G. Alexander
Nesting in manufacturing involves arranging parts to be cut in the most efficient way possible to minimize the material left over after cutting. While many commercial software solutions have optimization algorithms that can do this efficiently, complex manufacturing processes in high-mix-low-volume (HMLV) environments make it difficult and time-consuming to implement the software. This paper describes a solution built for an HMLV company to automate significant portions of the nesting process while maintaining enough human input to deal with complexity, reducing its time to nest jobs by 83% and the time to re-nest jobs in the case of a production schedule change by 95%. We focused on using lean principles as a time-saving strategy rather than a direct cost-cutting strategy in order to improve quality of life for operators while improving customer service. Initial iterations of the solution focused on complete automation of the nesting process with one click by the operator, but variability and complexity in the manufacturing system required a more semi-automatic solution that allowed for operator input, but in a much easier and faster way than the initial state. This solution building is an example of using the A3 lean problem-solving process to align stakeholders and rapidly experiment with and iterate on a solution until it achieves desired performance.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Greenhouse Gas Optimization Across a Multi-Echelon Manufacturing and Distribution Network</title>
<link href="https://hdl.handle.net/1721.1/155971" rel="alternate"/>
<author>
<name>Rosenzweig, Theo</name>
</author>
<id>https://hdl.handle.net/1721.1/155971</id>
<updated>2024-08-13T03:46:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Greenhouse Gas Optimization Across a Multi-Echelon Manufacturing and Distribution Network
Rosenzweig, Theo
Emissions from the industrial sector are a major contributor to climate change around the world. Many of these industrial emissions are attributable to the supply chain and will need to be drastically reduced to meet the emission goals set forth by the United Nations Paris Agreement. Possibilities including renewable energy technologies for manufacturing and sustainable vehicles for transportation already exist and can help to reduce emissions across the supply chain, but few studies have evaluated re-organizing supply chains as a whole to minimize carbon footprint. This thesis focuses on adapting sourcing strategies in a multi-echelon supply chain network to minimize greenhouse gas emissions. A multi-objective mixed-integer linear program that balances emission reduction against other objectives, such as sourcing cost, lead time, and supply risk, is developed to test the feasibility of the strategy in a business context. Opportunities for improvement of the model and possibilities for implementation in other organizations are evaluated.
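The abstract does not reproduce the formulation, but a minimal weighted-sum sketch of the idea, with made-up suppliers, costs, and emission factors (here in Python with PuLP), might look like:

    # Toy weighted-sum supplier-selection MILP (PuLP). All data are
    # hypothetical; the thesis's actual model spans a multi-echelon
    # network and further objectives (lead time, supply risk).
    import pulp

    suppliers = ["A", "B", "C"]
    cost      = {"A": 100, "B": 80, "C": 95}   # $/unit
    co2       = {"A": 10,  "B": 25, "C": 12}   # kg CO2e/unit
    w_cost, w_co2 = 1.0, 4.0                   # objective weights

    x = {s: pulp.LpVariable(f"use_{s}", cat="Binary") for s in suppliers}
    prob = pulp.LpProblem("sourcing", pulp.LpMinimize)
    prob += pulp.lpSum(x[s] * (w_cost * cost[s] + w_co2 * co2[s]) for s in suppliers)
    prob += pulp.lpSum(x.values()) == 1        # pick exactly one supplier
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    print("chosen supplier:", [s for s in suppliers if x[s].value() == 1])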
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Extracting Coronary Lesion Information from Angiogram Reports for Patient Screening Applications</title>
<link href="https://hdl.handle.net/1721.1/155970" rel="alternate"/>
<author>
<name>Gaffney, Leah Paige</name>
</author>
<id>https://hdl.handle.net/1721.1/155970</id>
<updated>2024-08-13T03:16:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Extracting Coronary Lesion Information from Angiogram Reports for Patient Screening Applications
Gaffney, Leah Paige
Dramatic improvements in the management of heart disease over the past 60 years (&gt;70% reduction in mortality) may be plateauing, and there are challenges ahead for achieving cardiovascular health objectives, with heart disease still the leading cause of death in the US. One group of heart disease patients, those with coronary artery disease (CAD), is now presenting with increased clinical complexity and higher risk profiles due to increases in lifespan and comorbid disease states. Percutaneous coronary interventions (PCI) are the best treatment option for a subset of CAD patients, but patients are increasingly deemed ineligible due to their higher risk of procedural complications. A new option, protected PCI, makes PCI safer for those patients.&#13;
&#13;
Abiomed developed and manufactures the Impella pump, a temporary support option for the heart that provides the “protection” in protected PCI. Our work aims to ensure that the protected PCI option is available to these patients. This work supports the development of patient screener tools that identify patients with high-risk CAD who have not been offered PCI but should be eligible for protected PCI. &#13;
&#13;
Specifically, we tackle one of the eligibility requirements by extracting coronary lesion location and severity information from clinical records. Natural language processing (NLP) tools are enabling more advanced electronic health record (EHR) based patient research. We collected and curated a dataset of 72 diagnostic coronary angiogram reports from health systems which contributed data to the Abiomed cVAD registry. Of these, 39 reports from 6 sites were used as a training set and 13 reports from the same 6 sites as a development set for a data processing pipeline to extract coronary lesion information. This work expands on the existing solutions for extracting ejection fraction information from echocardiogram reports. The ejection fraction extraction task has been solved with regular expressions, a simple and somewhat inflexible pattern-matching approach. &#13;
&#13;
Our coronary lesion extraction followed a two-step architecture that is common in NLP: Named Entity Recognition (NER) followed by Relation Extraction (REL). We compare a machine-learning-based NER approach and a dictionary and regular expression ("matching") NER approach. Our REL implementation is rules-based. On entities alone, an intermediate outcome of the initial stage (NER), we achieve 92.1% recall and 93.9% precision with the machine-learning-based model, and 95.1% recall with 52.6% precision for the matching-based model (on 370 total entities of types location, vessel, and severity in the development set). The machine learning (ML) approach overcomes the matching approach's lack of precision; this difference may not affect final prediction performance, depending on the second-stage implementation. We achieve 89.7% recall and 84.5% precision on the second stage independently. (This is a conservative representation, as 7 of 103 relations in the development set come from types of sentences that our second-stage model explicitly does not yet handle. Recall increases to 92.6% and precision to 90.6% when those cases are ignored.)&#13;
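As a hedged illustration of the "matching" stage, the sketch below pairs regex-extracted severities with the nearest preceding location and vessel mentions; the patterns, vessel dictionary, pairing rule, and sample sentence are hypothetical stand-ins, and the actual pipeline is substantially richer:

    # Sketch of a dictionary/regex ("matching") NER stage plus a trivial
    # rules-based relation step. All patterns and the sample report
    # sentence are hypothetical.
    import re

    VESSELS = {"LAD": "left anterior descending", "RCA": "right coronary artery",
               "LCX": "left circumflex"}
    severity_re = re.compile(r"(\d{1,3})\s?%")
    vessel_re = re.compile("|".join(VESSELS), re.IGNORECASE)
    location_re = re.compile(r"(proximal|mid|distal|ostial)", re.IGNORECASE)

    def extract_lesions(sentence):
        """Pair each severity with the nearest preceding location+vessel."""
        lesions = []
        vessel = location = None
        for token in sentence.replace(",", " ").split():
            if location_re.fullmatch(token.lower()):
                location = token.lower()
            elif vessel_re.fullmatch(token):
                vessel = token.upper()
            elif (m := severity_re.fullmatch(token)) and vessel:
                lesions.append((location, vessel, int(m.group(1))))
        return lesions

    print(extract_lesions("The proximal LAD has a 70% stenosis and the mid RCA shows a 40% lesion."))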
&#13;
Each stage independently achieves reasonable performance. We analyze errors to recommend the next steps of development for both stages. With the two stages together, we achieve 79.6% recall and 71.8% precision with the ML-based NER model, and 76.2% recall and 77.7% precision with the matching-based NER model (without correcting for expected future improvements in performance). Non-ML approaches can solve at least three-quarters of this text extraction problem. We recommend advanced methods, including grammatical dependency rules for relations and improving ML-based entity prediction with more training examples from specific contexts.&#13;
&#13;
This work provided a roadmap and the first pipeline to leverage data from the cVAD registry for algorithm development for patient screening applications. We developed structured data models and an annotated dataset for coronary lesion description extraction from coronary angiograms. We present the results of the entire algorithm and its component parts and propose advanced methods to refine the approach for implementation in future patient screening tools.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Twin-Driven Supply Chain Enhancement to Support Direct-to-Consumer Growth</title>
<link href="https://hdl.handle.net/1721.1/155969" rel="alternate"/>
<author>
<name>Agrawal, Siddhant</name>
</author>
<id>https://hdl.handle.net/1721.1/155969</id>
<updated>2024-08-13T03:33:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Digital Twin-Driven Supply Chain Enhancement to Support Direct-to-Consumer Growth
Agrawal, Siddhant
In response to the rising trend of Direct-to-Consumer (D2C) sales, many traditional retailers, which have historically relied on wholesale business models, are now undertaking significant supply chain transformations. This thesis explores the strategic shift of a large retailer in the footwear and apparel sector, pseudonymously referred to as Iota in this study, as it transitions towards a D2C-focused supply chain. This transition, emblematic of a broader industry transformation, is aimed at enhancing alignment with the evolving expectations of customers in terms of service, cost-effectiveness, and sustainability.&#13;
&#13;
Central to this research are the proposed enhancements by Iota’s leadership to decentralize Iota’s supply chain. These enhancements include adding both physical infrastructure, with the planned establishment of a cross-dock facility, and digital infrastructure, through the development of a decision engine that aids in efficiently routing products within the new decentralized supply chain network. The cross-dock facility is envisioned to provide an opportunity for decision postponement in the inventory flow from Asian factories to US distribution centers. Meanwhile, the decision engine, leveraging a heuristic-based algorithm, is set to unlock new inventory flows and enhance inventory distribution.&#13;
&#13;
With the new infrastructure to decentralize the supply chain yet to be fully operational, a retrospective study was conducted using a digital twin of Iota’s supply chain. Various push and pull-based inventory deployment strategies were simulated in the digital twin with the goal of alleviating pressure on the primary distribution center and increasing fulfillment from regional distribution centers. In the simulation process, challenges with forecast data and lumpiness of supply are discovered and subsequently addressed through the use of synthetic datasets, which emulate improved forecast coverage and smooth supply.&#13;
&#13;
The key findings from the simulations highlight that, despite modest performance in meeting the goals for the decentralized network, valuable insights were obtained that could drive future supply chain enhancements. The research underscores the benefits of smoothing supply for network performance, the critical role of comprehensive and reliable forecast data, and the necessity of supplementary storage solutions to complement the cross-dock facility. For example, one pull-based scenario using a synthetic dataset to emulate enhanced forecast coverage and smoother supply tripled network performance while reducing network costs by 1% compared to the baseline pull-based scenario. Such cost savings could be substantial for a large-scale retailer.&#13;
&#13;
Concluding with recommendations, the thesis advises Iota to re-evaluate purchasing practices, consider integrating multiple internal sources of forecast data into a single source, and continue with simulation analyses. These recommendations are designed to support Iota, and by extension, similar retailers, in their transition towards a robust and agile D2C supply chain, ensuring competitive advantage in the dynamic retail sector.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Diagnostic and Prescriptive Conformal Prediction Framework: Applied to Sleep Disorders</title>
<link href="https://hdl.handle.net/1721.1/155919" rel="alternate"/>
<author>
<name>Khalif, Faduma</name>
</author>
<id>https://hdl.handle.net/1721.1/155919</id>
<updated>2024-08-02T03:01:23Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Diagnostic and Prescriptive Conformal Prediction Framework: Applied to Sleep Disorders
Khalif, Faduma
We propose a novel predictive framework for the future diagnoses and treatments of patients with neurological conditions, specifically patients with sleep disorders, given their clinical history. Via the use of a conformal algorithm with a classifier as its base model, we are able to utilize a patient’s history of diagnoses, pharmacy dispensing, and other features to produce a set of possible final sleep disorder diagnoses and/or treatments with a definitive level of confidence and a bounded level of uncertainty. We also utilize selective classification to allow the model to abstain from generating a prediction in cases where the algorithm’s predictive confidence does not meet a given confidence threshold, and we further investigate variables that correlate with “abstain” model outcomes. In addition, we experiment with additional machine learning methods, such as no-regret learning, to better address issues that arise in clinical decision-making. We find that even in cases where our base classifier achieves limited accuracy, we are able to use minimal data and selective prediction to establish highly accurate predictive outcomes for certain subsets of our cohort. In developing and testing this framework, we attempt to propose a new standard for predictive algorithms that target clinical use cases and to better understand uncertainty quantification in a multitude of dimensions.
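For intuition, a minimal split-conformal classifier with abstention can be sketched as follows; the synthetic probabilities stand in for the base classifier, and the abstention rule shown (abstain unless the prediction set is a singleton) is one simple variant, not necessarily the thesis's exact rule:

    # Minimal split-conformal classification with abstention (numpy only).
    # Synthetic softmax outputs stand in for the thesis's base classifier.
    import numpy as np

    rng = np.random.default_rng(0)
    n_cal, n_classes, alpha = 500, 4, 0.1

    # Hypothetical calibration data: class probabilities + true labels.
    probs_cal = rng.dirichlet(np.ones(n_classes) * 0.5, size=n_cal)
    y_cal = np.array([rng.choice(n_classes, p=p) for p in probs_cal])

    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - probs_cal[np.arange(n_cal), y_cal]
    k = int(np.ceil((n_cal + 1) * (1 - alpha)))
    qhat = np.sort(scores)[k - 1]          # conformal quantile

    def predict_set(p):
        """Labels whose score falls below qhat; abstain if not a singleton."""
        pred = np.nonzero(1.0 - p <= qhat)[0]
        return pred if len(pred) == 1 else None   # None = abstain

    p_test = rng.dirichlet(np.ones(n_classes) * 0.5)
    print("prediction set:", predict_set(p_test), "qhat:", round(qhat, 3))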
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparing Developmental Trajectories of Loophole Behavior in Autistic and Neurotypical Children</title>
<link href="https://hdl.handle.net/1721.1/155918" rel="alternate"/>
<author>
<name>Broski, Annalisa</name>
</author>
<id>https://hdl.handle.net/1721.1/155918</id>
<updated>2024-08-02T03:19:37Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Comparing Developmental Trajectories of Loophole Behavior in Autistic and Neurotypical Children
Broski, Annalisa
Loophole behavior is a common strategy used by neurotypical children to avoid trouble. The use of loopholes requires pattern recognition, language understanding, rational planning, and goal alignment. A major marker of autism is difficulty with Theory of Mind and language tasks, making autistic children’s engagement with loophole behavior, which has clear patterns in neurotypical development, particularly interesting. We surveyed parents of autistic children (N = 202) and neurotypical children (N = 431) about their children’s engagement with loophole behavior. We found that loophole behavior is common in both populations, and while the onset of this behavior was significantly later among autistic children compared to neurotypical children, the peak and offset ages were not. This could point to a developmental trajectory that occurs later for autistic children compared to neurotypical children, but overall it demonstrates that autistic individuals have the ability to engage with loophole behavior.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Markerless Motion Capture and Principal Component Analysis to Classify BMX Freestyle Tricks</title>
<link href="https://hdl.handle.net/1721.1/155914" rel="alternate"/>
<author>
<name>Nates, Eva</name>
</author>
<id>https://hdl.handle.net/1721.1/155914</id>
<updated>2024-08-02T03:59:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Using Markerless Motion Capture and Principal Component Analysis to Classify BMX Freestyle Tricks
Nates, Eva
This thesis presents a novel Bicycle Motocross (BMX) Freestyle (FS) trick classification technique developed for the Australian Cycling Team. The first step is tracking six key points on the athlete and their bike using DeepLabCut, an open-source markerless motion capture package. Next, Principal Component Analysis (PCA) is applied to the tracking data to calculate metrics that identify each trick type. Finally, a classifier is trained on these metrics. The dataset used in this paper focused on three common BMX Freestyle tricks: 360, backflip, and flair. The Logistic Regression model achieved the highest accuracy among the classifiers, correctly predicting the trick in 94.2% of instances. This thesis discusses other ways to apply this data, such as novel trick generation. It also examines the robustness and cost-benefit trade-off of the classifier.
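A schematic version of the PCA-plus-classifier stage, run on synthetic keypoint trajectories rather than real DeepLabCut output, might look like:

    # Sketch of the PCA -> classifier stage on synthetic keypoint
    # trajectories (real inputs would come from DeepLabCut tracking).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_clips, n_frames, n_keypoints = 120, 60, 6

    # Hypothetical data: each clip is a flattened (frames x keypoints x 2)
    # trajectory; labels 0/1/2 stand in for 360, backflip, and flair.
    X = rng.normal(size=(n_clips, n_frames * n_keypoints * 2))
    y = rng.integers(0, 3, size=n_clips)
    X[y == 1] += 0.5   # inject a crude class-dependent offset

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    pca = PCA(n_components=10).fit(X_tr)
    clf = LogisticRegression(max_iter=1000).fit(pca.transform(X_tr), y_tr)
    print("accuracy:", clf.score(pca.transform(X_te), y_te))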
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Data Augmentation with Attention Masks for Context Aware Transformations</title>
<link href="https://hdl.handle.net/1721.1/155913" rel="alternate"/>
<author>
<name>Marquez, Sofia M.</name>
</author>
<id>https://hdl.handle.net/1721.1/155913</id>
<updated>2024-08-02T03:28:21Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Evaluating Data Augmentation with Attention Masks for Context Aware Transformations
Marquez, Sofia M.
Transfer learning from large, pre-trained models and data augmentation are arguably the two most widespread solutions to the problem of data scarcity. However, both methods suffer from limitations that prevent more optimal solutions to natural language processing tasks. We consider that transfer learning benefits from fine-tuning on an increased target dataset size, and that data augmentation benefits from applying transformations in a selective, rather than random, manner. Thus, this work evaluates a new augmentation paradigm that uses the attention masks of pre-trained transformers to more effectively apply text transformations in high-importance locations, creating augmentations which can be used for further fine-tuning. Our comprehensive analysis points to limited success for this context-aware augmentation method. By shedding light on its strengths and limitations, we offer insights that can guide the selection of optimal augmentation techniques for a variety of models, and lay groundwork for further research in the pursuit of effective solutions for natural language processing tasks under data constraints.
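A minimal sketch of the underlying idea uses attention weights from a pre-trained transformer to pick high-importance token positions; the model choice, the importance heuristic, and the mask-based transformation below are illustrative assumptions, not the paper's exact method:

    # Sketch of attention-guided token selection for augmentation using a
    # pre-trained transformer. Model choice and the [MASK] substitution
    # are illustrative; the work evaluates its own transformation set.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

    text = "The service was slow but the food was absolutely wonderful."
    enc = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)

    # Importance of token k = mean attention it receives in the last
    # layer, averaged over heads and query positions.
    att = out.attentions[-1]                  # (1, heads, seq, seq)
    importance = att.mean(dim=1).mean(dim=1).squeeze(0)
    tokens = tok.convert_ids_to_tokens(enc["input_ids"][0].tolist())

    # Transform the top-2 non-special tokens (here: mask them).
    order = importance.argsort(descending=True).tolist()
    picked = [i for i in order if tokens[i] not in ("[CLS]", "[SEP]")][:2]
    augmented = [tok.mask_token if i in picked else t for i, t in enumerate(tokens)]
    print(tok.convert_tokens_to_string(augmented))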
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of the impact of automaker strategies on lithium price elasticity using a novel bottom-up demand model</title>
<link href="https://hdl.handle.net/1721.1/155912" rel="alternate"/>
<author>
<name>Sullivan, Luke Robert</name>
</author>
<id>https://hdl.handle.net/1721.1/155912</id>
<updated>2024-08-02T03:40:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Analysis of the impact of automaker strategies on lithium price elasticity using a novel bottom-up demand model
Sullivan, Luke Robert
A global transportation paradigm shift towards electrification is underway that is rapidly redefining how billions travel. To reduce possible disruptions to the electric vehicle transition, an understanding of the demand and supply of key critical materials (materials with a high risk of supply chain disruption) is essential. There are key gaps in understanding how automaker electrification strategies will influence materials demand over time. This presents materials suppliers with risks when making decisions on new mine openings, a process that can take many years before new ore is extracted. As a result, materials prices experience significant volatility, as with lithium, which has seen 6x price swings in the past five years. Informed by semi-structured interviews with major automakers, this research applies technical insights on current and emerging battery chemistries to bottom-up economic demand modelling to generate forecasts of lithium demand and the price elasticity of that demand. Detailed analysis of automaker electrification strategies, regional breakdown, vehicle class composition, and selected battery chemistries creates an industry-wide evaluation of the possible short- and long-run impacts of high lithium prices. This research provides insights for decision-makers in industry and government to optimize electrification strategies that minimize vulnerability to lithium price disruptions. I present three recommendations to automakers, suppliers, and policymakers: (1) accelerate investment in new battery technology, (2) adopt aggressive and flexible rollout strategies that offer wide options for range and drivetrain, and (3) improve strategic communication between suppliers and automakers to narrow forecasts of supply and demand.
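A toy bottom-up demand calculation with a constant-elasticity price response conveys the modelling idea; every number below (segment sales, pack sizes, lithium intensity, elasticity) is a placeholder, not a fitted value from this work:

    # Toy bottom-up lithium demand model with a constant-elasticity
    # price response. All inputs are hypothetical placeholders.
    SEGMENTS = {          # vehicle class: (annual EV sales, kWh per pack)
        "compact": (4.0e6, 50),
        "suv":     (3.0e6, 80),
        "truck":   (1.0e6, 120),
    }
    LI_KG_PER_KWH = 0.10          # rough lithium intensity assumption

    def demand_tonnes(price_mult, elasticity=-0.15):
        """Sum segment demand, scaled by a constant-elasticity response."""
        base = sum(sales * kwh * LI_KG_PER_KWH for sales, kwh in SEGMENTS.values())
        return base / 1000 * price_mult ** elasticity

    for mult in (1.0, 2.0, 6.0):   # e.g. a 6x price swing
        print(f"price x{mult}: {demand_tonnes(mult):,.0f} t Li")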
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of Multi-Objective Genetic Optimization in PCB Component Placement</title>
<link href="https://hdl.handle.net/1721.1/155911" rel="alternate"/>
<author>
<name>Ngô, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/155911</id>
<updated>2024-08-02T03:57:52Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Application of Multi-Objective Genetic Optimization in PCB Component Placement
Ngô, Thomas
Designing a printed circuit board (PCB) is a complex process that involves creating a schematic, placing components, ensuring that every component is routable, and performing simulations to predict the behavior of the PCB before it is manufactured. With the rise of technological innovations, the demand for chips will increase, putting pressure on the electronic design automation (EDA) industry to innovate in PCB design. As part of Cadence’s Allegro X AI team, which aims to develop AI technology to automate PCB designers’ tasks, we explored multi-objective genetic optimization as an alternative method for automating component placement. More specifically, we applied genetic optimization to a two-sided PCB. We discovered that employing multiple objectives, such as half-perimeter wirelength and routability, produces promising component placements.
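For intuition, a stripped-down evolutionary placement loop minimizing half-perimeter wirelength (HPWL) is sketched below; it is single-objective with swap mutations only, whereas the actual work uses a genuine multi-objective GA that also scores routability, and the netlist here is invented:

    # Toy evolutionary placement sketch: components are assigned to grid
    # slots and evolved to minimize half-perimeter wirelength (HPWL).
    import random

    random.seed(0)
    GRID_W, N_COMP = 4, 8
    NETS = [(0, 1, 2), (2, 3), (4, 5, 6), (6, 7, 0)]   # hypothetical netlist

    def hpwl(placement):
        total = 0
        for net in NETS:
            xs = [placement[c] % GRID_W for c in net]
            ys = [placement[c] // GRID_W for c in net]
            total += (max(xs) - min(xs)) + (max(ys) - min(ys))
        return total

    def mutate(placement):
        p = placement[:]
        i, j = random.sample(range(N_COMP), 2)
        p[i], p[j] = p[j], p[i]       # swap two components' slots
        return p

    pop = [random.sample(range(GRID_W * 2), N_COMP) for _ in range(30)]
    for _ in range(200):
        pop.sort(key=hpwl)            # keep elites, mutate to refill
        pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
    best = min(pop, key=hpwl)
    print("best HPWL:", hpwl(best), "placement:", best)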
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Multiple Objective Optimization for Autonomous Sailing Vessels</title>
<link href="https://hdl.handle.net/1721.1/155909" rel="alternate"/>
<author>
<name>Webb, Jason B.</name>
</author>
<id>https://hdl.handle.net/1721.1/155909</id>
<updated>2024-08-02T03:31:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Using Multiple Objective Optimization for Autonomous Sailing Vessels
Webb, Jason B.
This research addresses the use of multiple objective optimization, via the established open-source Mission Oriented Operating Suite-Interval Programming (MOOS-IvP) platform, to meet the unique navigational demands and operational constraints of autonomous sailing vessels. Recognizing a gap in the existing IvP Helm framework’s ability to accommodate the intricate dynamics of wind-powered navigation, this thesis begins with the development of a sailing behavior. The core contribution of this work is the novel introduction of a sine wave-based approach for defining upwind tacking maneuvers. Building from a foundation in mathematical analysis, an algorithm was developed that employs the sine function to model the vessel’s tack plan. Furthermore, the thesis explores the integration of this behavior within the MOOS-IvP architecture, detailing the modifications necessary to support wind-powered navigation. The proposed navigation behavior is evaluated in simulated environments, and the assessments highlight the algorithm’s adaptability to changing wind conditions. Through a combination of theoretical development and simulation, this study not only demonstrates the viability of integrating traditional sailing methods with contemporary autonomous systems but also contributes to advancing the capabilities of the standard MOOS-IvP tool kit and its continued use in various maritime applications.
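A sine-based tack plan can be sketched as waypoints oscillating about the direct upwind line; the parameter names and values below are illustrative, not the IvP implementation:

    # Sketch of a sine-based upwind tack plan: waypoints oscillate about
    # the direct upwind line; amplitude and wavelength bound how far off
    # the upwind axis the course swings. Values are illustrative.
    import math

    def sine_tack_waypoints(upwind_dist, amplitude, wavelength, n=40):
        """Waypoints (x cross-track, y upwind) for y in [0, upwind_dist]."""
        pts = []
        for k in range(n + 1):
            y = upwind_dist * k / n
            x = amplitude * math.sin(2 * math.pi * y / wavelength)
            pts.append((x, y))
        return pts

    wps = sine_tack_waypoints(upwind_dist=200, amplitude=30, wavelength=100)
    # Max course angle off the upwind axis occurs at the zero crossings:
    max_angle = math.degrees(math.atan(2 * math.pi * 30 / 100))
    print(f"{len(wps)} waypoints, max course off upwind axis ~{max_angle:.0f} deg")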
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Capacitor Ladder Circuitry for Improving Electrical Energy Transfer Efficiency to Electromechanical Actuators</title>
<link href="https://hdl.handle.net/1721.1/155908" rel="alternate"/>
<author>
<name>Murphy, Trevor</name>
</author>
<id>https://hdl.handle.net/1721.1/155908</id>
<updated>2024-08-02T03:27:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Capacitor Ladder Circuitry for Improving Electrical Energy Transfer Efficiency to Electromechanical Actuators
Murphy, Trevor
This paper examines the general process of taking electrical energy from a power source to a mechanical system to do mechanical work, focusing specifically on a ladder circuit that delivers energy to a capacitive-type actuator modeled as a mass-spring-damper (MSD) system. The whole chain of power conversion involves an electrical transfer efficiency and a mechanical energy conversion efficiency. The first chapter walks through the motivation for, and the failure of, an attempt to build a dielectric elastomer actuator as an MSD test system. The second chapter details the electrical problem, defining what a capacitive-type actuator is, what the electromechanical actuation process entails from an abstract perspective, and an efficiency metric. The third chapter reviews how inductors offer a solution and how, at certain size scales, they lose efficacy. The fourth chapter introduces the ladder circuit as a solution to the electrical energy transfer problem. Chapters 5 and 6 detail electrical experiments and modeling of the circuit to characterize the efficiency of the electromechanical process. Lastly, chapter 7 concludes with a discussion of the applicability of the ladder circuit solution.
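One classic result motivating charge-staging ladders is easy to check numerically: charging a capacitor to V through a resistive path in N equal voltage steps dissipates only CV^2/(2N) rather than CV^2/2, for a transfer efficiency of N/(N+1). A quick sketch (component values arbitrary, and offered only as background intuition, not this paper's circuit):

    # Numerical check of the step-charging result: charging a capacitor
    # to V in N equal voltage steps stores C*V^2/2 but dissipates only
    # C*V^2/(2N), giving an electrical transfer efficiency of N/(N+1).
    C, V = 1e-6, 10.0

    def step_charge_efficiency(n_steps):
        dv = V / n_steps
        dissipated = n_steps * 0.5 * C * dv**2   # 0.5*C*dV^2 lost per step
        stored = 0.5 * C * V**2
        return stored / (stored + dissipated)

    for n in (1, 2, 4, 8, 16):
        print(f"N={n:2d}: efficiency = {step_charge_efficiency(n):.3f} "
              f"(theory N/(N+1) = {n/(n+1):.3f})")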
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Picture Book for the Roboticist— Why we Should Start with Hardware, and How to Teach so it Sticks</title>
<link href="https://hdl.handle.net/1721.1/155907" rel="alternate"/>
<author>
<name>Mehrotra, Aditya (Adi)</name>
</author>
<id>https://hdl.handle.net/1721.1/155907</id>
<updated>2024-08-02T03:55:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Picture Book for the Roboticist— Why we Should Start with Hardware, and How to Teach so it Sticks
Mehrotra, Aditya (Adi)
This thesis explores why and how to teach hardware design in relation to building intelligent systems. We focus on the concepts of modeling, embedded systems, and actuation, and develop a series of hands-on exercises to teach specific concepts based on previous work. We identify and explain the concept of the translation layer, which we define as the interface between high-level controls and the hardware system. We explain the importance of hardware engineering to its operation and explore the role of the hardware engineer in building this layer. We use these ideas to build an undergraduate curriculum in robotics, the syllabi of four core classes, and hands-on exercises for their associated lab components. Along the way, we focus on the science of learning that often doesn’t make its way into engineering education. We present a summary of key concepts surrounding how our students learn and use this to explain why hardware engineering is a good medium for teaching. We use this to build a loose design paradigm for what ‘works’ in engineering teaching, and we use that design paradigm to build the aforementioned hands-on exercises. Additional discussions cover topics that should be considered when building a curriculum, including providing space for low-stakes curiosity, teaching students about the application of their work to global problems, and including narratives on learning in our teaching.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thermal Interaction of Inert Additives in Energetic Materials</title>
<link href="https://hdl.handle.net/1721.1/155906" rel="alternate"/>
<author>
<name>Tsai, Gwendolyn</name>
</author>
<id>https://hdl.handle.net/1721.1/155906</id>
<updated>2024-08-02T04:02:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Thermal Interaction of Inert Additives in Energetic Materials
Tsai, Gwendolyn
Energetic materials are used for a variety of applications, including airbag deployment and solid rocket fuels, that require high energy density and various energy release rates. The energy release rate, determined by how fast the material burns, is often thought to be proportional to the bulk thermal diffusivity of the material. However, the inclusion of insulating inert particles in energetic materials has shown burning rate enhancement in certain cases. Flame front corrugation, which increases the reaction front area and is observed at micron to sub-millimeter scales, was previously proposed to explain the phenomenon. However, a recent simulation study observed a significant temperature gradient within the inert particle, implying that the residence time of the inert particle in the flame front could play a role in the thermal interaction between additives and the surrounding energetic materials. In this work, we tested these hypotheses by employing a high-speed microscopic imaging system to quantify the burning rate and flame morphology of Al/CuO nanothermites with various SiO2 particle sizes and mass loadings. Additionally, we performed flame propagation simulations to quantify the thermal interactions between the energetic materials and an embedded single inert particle. The experimental results show that the burning rate depends on the particle size as well as the mass loading. Specifically, as the SiO2 particle size increases from 100 nm to 100 μm, the burning rate is enhanced by 26% at a mass loading of 7.5%. Further computational studies reveal that flame corrugation may not be the sole factor altering the burning rate. Non-dimensional analyses show that energy absorption and temperature non-uniformity in inert particles correlate strongly with particle diameter. When the characteristic time of heating the inert particle is shorter than the flame residence time, the inert particle acts as a heat sink, negatively affecting burning rates through heat removal from the surrounding energetic materials. Experimental studies reveal that additive particle size has an impact on the nanothermite burn rate. Insight into why this may occur is provided by computational studies of a single particle inclusion, as well as images captured during the burn rate experiments, which show that flame front morphology and particle size effects on heat transfer may play a key role in burn rate alteration by inert additives.
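The competition the abstract describes, particle heating time (roughly d^2/alpha) versus flame residence time, can be illustrated with order-of-magnitude numbers; the diffusivity and residence time below are generic assumptions, not measured values from this work:

    # Back-of-envelope comparison of particle heating time (~d^2/alpha)
    # with an assumed flame residence time. Generic order-of-magnitude
    # values for silica, not measurements from this study.
    ALPHA_SIO2 = 8e-7          # m^2/s, approximate thermal diffusivity
    T_RESIDENCE = 1e-4         # s, assumed flame-front residence time

    for d_um in (0.1, 1, 10, 100):
        d = d_um * 1e-6
        t_heat = d**2 / ALPHA_SIO2
        role = ("heats fully in the flame (heat sink)"
                if t_heat < T_RESIDENCE
                else "limited heat uptake during transit")
        print(f"d = {d_um:>6} um: t_heat = {t_heat:.1e} s -> {role}")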
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovative Floating Wind Turbine with Synthetic Mooring System and Feasibility Analysis of a Solar-Wind-Battery Hybrid System</title>
<link href="https://hdl.handle.net/1721.1/155904" rel="alternate"/>
<author>
<name>Gkiokas, Christos</name>
</author>
<id>https://hdl.handle.net/1721.1/155904</id>
<updated>2024-08-02T03:58:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Innovative Floating Wind Turbine with Synthetic Mooring System and Feasibility Analysis of a Solar-Wind-Battery Hybrid System
Gkiokas, Christos
Synthetic mooring lines, characterized by their neutral buoyancy and high strength, are crucial for maintaining the station-keeping of Floating Wind Turbines (FWTs) by providing the necessary restoring forces while minimizing the vertical loads on the platform. This thesis explores the evolution of mooring systems from traditional catenary chains to taut synthetic fiber ropes, using the VolturnUS-S semi-submersible platform as a case study. The investigation delves into the viscoelastic properties of synthetic ropes and the challenges in accurately modeling their stiffness characteristics. Detailed analysis of the mooring system for the VolturnUS-S platform includes configuration, inclination, and composition of the mooring lines. Environmental conditions at the prospective mooring site are analyzed to evaluate the platform’s responses. A mesh sensitivity study determines the optimal balance between computational efficiency and accuracy. Various stiffness models of polyester mooring ropes are compared, highlighting the impact of rope diameter and inclination on mooring system performance, examining pretension, static and dynamic tensions, and safety margins. The major conclusions of this study are discussed, emphasizing the key findings. A comprehensive feasibility analysis and preliminary economic assessment of a solar-wind-battery hybrid system designed to supply power to a remote island is presented. Multiple configurations are evaluated to identify the most cost-effective and efficient system. The findings indicate that a hybrid system is both technically viable and economically feasible, with wind energy contributing significantly during winter months and solar energy during summer, yielding a reliable power supply throughout the year. Additionally, an overview of offshore wind submarine cabling is provided, focusing on types of cables, route planning, installation, operational considerations, and environmental impacts. Comprehensive planning for cable routes is covered, including site assessments, hydrographic surveys, and regulatory requirements.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolving User Needs Identification through AI Augmented Approaches</title>
<link href="https://hdl.handle.net/1721.1/155903" rel="alternate"/>
<author>
<name>Schelhaas, Booker B.</name>
</author>
<id>https://hdl.handle.net/1721.1/155903</id>
<updated>2024-08-02T03:39:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evolving User Needs Identification through AI Augmented Approaches
Schelhaas, Booker B.
In a human-centered design approach to the product design cycle, conducting a user needs analysis is critical to the long-term success of the project. Designers are routinely tasked with stakeholder studies to identify the needs that then drive the design. Sometimes users are aware of their needs, but often they are not conscious of some important yet hidden needs, called latent needs, which are particularly difficult to identify. The identification process can be laborious and resource intensive, including interviews and in-depth observations by experts to extract workarounds and pain points that suggest the highest potential for product success. This thesis first explores the current status quo of user need extraction through observation and interviews, and then presents a preliminary, novel AI-based method for augmenting designers’ abilities in the process. The first chapter demonstrates what can be done with traditional methods. We conducted video recordings and observations of older adults to understand their ability to stand and their opinions on devices to aid them. After conducting many interviews and observations, we identified that the use of stand assist devices is itself a latent need, as there exists a perception gap among the older adults between their perceived ability to stand and their actual ability to stand, as diagnosed by a trained physical therapist. The following chapter, in response to some of the difficulties observed in the first study, presents a novel AI tool to augment designers’ abilities to identify user needs from observational videos. Our tool utilizes pose estimation to calculate the ergonomic risk of users as they engage in a task, as well as object segmentation to identify objects that could be affecting the user’s behavior. These are then compiled into a computer interface for designers to use when watching an observational video of a user. Methods, experimental design, and future work are discussed for the study, which is pending completion.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-throughput Bandgap Mapping for Perovskite-inspired Materials</title>
<link href="https://hdl.handle.net/1721.1/155902" rel="alternate"/>
<author>
<name>Sheng, Fang</name>
</author>
<id>https://hdl.handle.net/1721.1/155902</id>
<updated>2024-08-02T03:27:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">High-throughput Bandgap Mapping for Perovskite-inspired Materials
Sheng, Fang
In recent years, lead halide perovskites have gained attention as promising candidates for photovoltaic devices due to their superior performance. However, issues with stability and toxicity have hindered their widespread application. As a result, perovskite-inspired materials, which are stable and lead-free, have attracted attention. Like their lead halide counterparts, these perovskite-inspired materials possess a vast compositional space, presenting a challenge in finding materials with the desired optoelectronic properties. To address this challenge, there has been recent interest in developing high-throughput materials synthesis techniques capable of exploring large materials spaces, culminating in the development of materials-printing platforms capable of synthesizing dozens of candidate materials per minute. However, despite the acceleration of sample synthesis, significant delays remain in sample characterization, due to time-intensive data acquisition and analysis. Additionally, issues concerning poor or unquantified synthesis reproducibility can affect the quality of information gained. In this thesis, I used a home-built high-throughput combinatorial printer to synthesize perovskite-inspired materials and developed a novel high-throughput technique to map local bandgaps with pixel-level resolution. This characterization technique utilizes spatially resolved reflectance spectra and automated data analysis. In total, I collected approximately a million optical bandgap measurements from the compositional space of Cs₃(BiₓSb₁₋ₓ)₂(BryI₁₋y)₉ perovskite-inspired materials. The bandgap mapping results revealed nonlinear bandgap variations along six compositional gradient sequences. I was able to identify phase separation within samples by detecting the presence of multiple bandgaps, utilizing extensive spatial and optical data; anomalies within quasi-binary systems may indicate phase separation. Finally, I worked with colleagues to obtain transient absorption spectroscopy data, which indicated that carrier depletion from ground states to excited states occurred at distinct energy levels, exhibiting unique carrier dynamics that correspond with the observed bandgap variations. In conclusion, this approach enables rapid screening of quasi-binary phase spaces on the basis of bandgap.
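For intuition, per-pixel bandgap extraction from reflectance can be sketched as a Kubelka-Munk transform followed by a Tauc-style linear fit near the absorption edge; the synthetic spectrum, direct-gap exponent, and fit window below are assumptions, not the thesis's calibrated pipeline:

    # Sketch of bandgap extraction from a reflectance spectrum:
    # Kubelka-Munk transform, then a Tauc-style linear fit at the edge.
    import numpy as np

    E = np.linspace(1.5, 2.6, 200)                 # photon energy grid, eV
    Eg_true = 2.0
    alpha = np.sqrt(np.clip(E - Eg_true, 0, None)) # direct-gap absorption, a.u.

    # Toy reflectance consistent with Kubelka-Munk F(R) = alpha (s = 1):
    R = np.clip(1 + alpha - np.sqrt(alpha**2 + 2 * alpha), 1e-6, 1)

    F = (1 - R) ** 2 / (2 * R)                     # Kubelka-Munk transform
    tauc = (F * E) ** 2                            # direct-gap Tauc quantity

    # Fit the steep edge region and extrapolate to the energy axis.
    edge = (tauc > 0.1 * tauc.max()) & (tauc < 0.6 * tauc.max())
    slope, intercept = np.polyfit(E[edge], tauc[edge], 1)
    print(f"estimated Eg ~ {-intercept / slope:.2f} eV (true {Eg_true} eV)")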
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stability and Dynamics of Resource Consumer Ecosystems</title>
<link href="https://hdl.handle.net/1721.1/155901" rel="alternate"/>
<author>
<name>Liu, Yizhou</name>
</author>
<id>https://hdl.handle.net/1721.1/155901</id>
<updated>2024-08-02T03:28:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Stability and Dynamics of Resource Consumer Ecosystems
Liu, Yizhou
Natural ecosystems, ranging from microbiomes to forests, significantly influence humanity by affecting individual health and promoting the sustainable growth of society as a whole. Understanding collective properties such as diversity, stability, and diversity-stability relationships in large, complex ecosystems with real-world structure has been a significant challenge, yet it is important for better ecosystem management. In this thesis, we investigate the stability and dynamics of resource-consumer ecosystems (ecosystems with two trophic levels). Beginning with geometric analyses of small systems, we uncovered a critical instability arising from a mismatch between resources that promote growth (defined as niches) and those predominantly consumed. This instability emerges when the discrepancy between consumption and growth exceeds the differences among nearest niches, indicating that species are more likely to encroach upon the niches of others rather than their own. After stability is lost, the extent to which species encroach upon their neighbors’ niches can predict the diversity and sizes of attractor basins. We further develop a stability criterion based on statistical properties of consumption and growth, employing random matrix theory. This criterion hinges on the correlation between growth-promoting resources and those primarily consumed, with the critical level of discrepancy being influenced by the ratio of species to resources. This result is consistent with the geometric interpretation, giving an analytic estimate of maximum niche overlap. Additionally, we uncover fundamental symmetries in system stability, enhancing our stability criterion through geometric insights and extending its applicability to realistic situations. Later, by integrating mechanisms such as cross-feeding, toxin production, and species autoregulation, our expanded model framework accommodates scenarios where consumers outnumber resources, thereby refining our stability criterion. Notably, we identified a re-entrant stability phenomenon, where increased diversity within trophic levels initially destabilizes but subsequently stabilizes the community. This leads to the conclusion that the difference in diversity between trophic levels is crucial for ecosystem stability, with the least stable ecosystems being those with comparable numbers of species across levels. Our work establishes a mechanistic understanding of ecosystem instability through niche encroachment and shows that stability hinges on diversity differences across trophic levels rather than total diversity, emphasizing the significance of mechanistic structure in predicting large-ecosystem behavior.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanical Design and Learned Control System Development of Fiber Extrusion Device on Industrial Programmable Logic Controller (PLC) Platform.</title>
<link href="https://hdl.handle.net/1721.1/155900" rel="alternate"/>
<author>
<name>Sakib, Gazi S.</name>
</author>
<id>https://hdl.handle.net/1721.1/155900</id>
<updated>2024-08-02T03:33:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Mechanical Design and Learned Control System Development of Fiber Extrusion Device on Industrial Programmable Logic Controller (PLC) Platform.
Sakib, Gazi S.
Optical fibers are ubiquitous in the 21st century, as they form the backbone of the internet and global electronic communication. Optical fibers play a pivotal role in modern technology and communication for several reasons. They enable high-speed data transmission over long distances while minimizing the risk of data interception. They are also used in fields like medicine (fiber-optic imaging and endoscopy), sensing technologies (temperature, pressure, and strain sensors), and industrial settings (data transmission and control systems). It is therefore of utmost importance that the manufacturing process of optical fibers be better controlled by developing advanced control algorithms that improve on state-of-the-art PID (Proportional-Integral-Derivative) controllers. This thesis showcases the work done to establish a framework and a “digital twin” for deploying advanced learned control algorithms, based on machine learning models such as DDPG (Deep Deterministic Policy Gradient), on industrial platforms such as Programmable Logic Controllers (PLCs). To develop and train such control algorithms, a desktop version of a fiber draw tower was designed, manufactured, and controlled via a PLC. System dynamics data were collected using a readily available preform substitute, and the manufactured desktop Fiber Extrusion Device (FrED) was used to train the DDPG-based control algorithms. The model was then tested and compared against state-of-the-art PID algorithms. In doing so, this thesis establishes a framework and a path to further develop advanced control algorithms for better control of the fiber optics manufacturing process. This pivotal step promises to significantly enhance the precision and efficacy of optical fiber manufacturing, amplifying its impact across industries and technological frontiers.
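A minimal "digital twin" skeleton in the Gymnasium API hints at the shape of such a framework; the first-order diameter dynamics, spaces, and reward below are placeholders for illustration, not the FrED's identified model:

    # Minimal digital-twin sketch in the Gymnasium API: a first-order
    # fiber-diameter plant driven by a spool-speed action. Dynamics and
    # reward are hypothetical placeholders.
    import numpy as np
    import gymnasium as gym
    from gymnasium import spaces

    class FrEDEnv(gym.Env):
        TARGET = 0.5   # desired fiber diameter, mm (hypothetical)

        def __init__(self):
            self.action_space = spaces.Box(-1.0, 1.0, shape=(1,))      # speed delta
            self.observation_space = spaces.Box(0.0, 2.0, shape=(1,))  # diameter
            self.d = 1.0

        def reset(self, seed=None, options=None):
            super().reset(seed=seed)
            self.d = 1.0
            return np.array([self.d], dtype=np.float32), {}

        def step(self, action):
            # Faster spooling thins the fiber; first-order lag dynamics.
            u = float(np.clip(action[0], -1, 1))
            self.d += 0.1 * (-u - (self.d - 1.0))
            reward = -abs(self.d - self.TARGET)
            return np.array([self.d], dtype=np.float32), reward, False, False, {}

    env = FrEDEnv()
    obs, _ = env.reset()
    for _ in range(5):
        obs, r, *_ = env.step(env.action_space.sample())
    print("final diameter:", obs)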
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Directional Recrystallization of an Additively Manufactured Oxide Dispersion-Strengthened Nickel-Base Superalloy</title>
<link href="https://hdl.handle.net/1721.1/155899" rel="alternate"/>
<author>
<name>Carter, Christopher P.</name>
</author>
<id>https://hdl.handle.net/1721.1/155899</id>
<updated>2024-08-02T03:46:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Directional Recrystallization of an Additively Manufactured Oxide Dispersion-Strengthened Nickel-Base Superalloy
Carter, Christopher P.
This thesis investigates the recrystallization behaviors of an additively manufactured (AM) oxide dispersion-strengthened (ODS) NiCoCr medium entropy alloy, focusing specifically on the effects of a liquid phase, annealing twins, and aluminum microalloying additions on recrystallization kinetics. Conventional wrought ODS alloys achieve coarse columnar grains through directional recrystallization (DRX) heat treatments. The main goal of this study was to assess whether directional recrystallization can achieve a similar effect in AM ODS alloys. Gas-atomized NiCoCr powders were decorated with oxide dispersoids using resonant acoustic mixing, then consolidated with laser powder bed fusion. The as-printed ODS materials were fully dense, with retained nanoscale Y2O3 dispersoids and a small grain size on the order of 10 μm. The as-printed materials were subjected to isothermal recrystallization and directional recrystallization heat treatments at soak temperatures between 800 and 1419 °C. During isothermal annealing, the material recrystallized only when the soak temperature exceeded the solidus or when Al alloying additions accelerated the coarsening kinetics of the oxide dispersoids. Directional recrystallization experiments on the non-ODS alloy did not result in the formation of columnar grains, likely due to the propensity of recrystallized NiCoCr to form annealing twins, which are less mobile than grain boundaries. Directional recrystallization in the ODS NiCoCr could not be achieved without surpassing the solidus temperature of the alloy.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact of Litigation Financing Disclosures on Patent Litigation</title>
<link href="https://hdl.handle.net/1721.1/155897" rel="alternate"/>
<author>
<name>Han, Yuxin (Zoe)</name>
</author>
<id>https://hdl.handle.net/1721.1/155897</id>
<updated>2024-08-02T03:10:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Impact of Litigation Financing Disclosures on Patent Litigation
Han, Yuxin (Zoe)
This paper investigates the impact of mandatory litigation financing disclosures on litigation outcomes, particularly in patent litigation. Despite the increasing importance of litigation funding, transparency regarding funders’ involvement remains limited. Using a difference-in-differences model, the study examines the effects of recent disclosure mandates implemented in federal courts. The findings unveil a notable reduction in the volume of cases instigated by Non-Practicing Entities (NPEs) following the mandate, alongside indications of strategic forum shopping aimed at circumventing disclosure requirements. Furthermore, the study finds reductions in settlement time for cases filed by likely financially constrained plaintiffs after the introduction of mandatory funding disclosures. In summary, this paper illuminates the complex relationship between disclosure regulations and NPE activities, highlighting the potential unintended consequences arising from seemingly well-intentioned reforms within the legal system.
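The difference-in-differences idea can be sketched in a few lines with statsmodels on simulated data; the variable names and the simulated effect size are hypothetical, not estimates from the paper:

    # Sketch of a difference-in-differences specification (statsmodels)
    # on simulated data: "treated" marks districts adopting a disclosure
    # mandate, "post" the period after adoption. Data are synthetic.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "treated": rng.integers(0, 2, n),
        "post": rng.integers(0, 2, n),
    })
    # Simulate a drop of 0.4 in NPE filings in treated districts post-mandate.
    df["npe_filings"] = 5 - 0.4 * df.treated * df.post + rng.normal(0, 1, n)

    model = smf.ols("npe_filings ~ treated * post", data=df).fit()
    print(model.params["treated:post"])   # DiD estimate, ~ -0.4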
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Baby Gym: Bridging the Gap between Reinforcement Learning and Human Infant Locomotor Development</title>
<link href="https://hdl.handle.net/1721.1/155896" rel="alternate"/>
<author>
<name>Patel, Nikasha G.</name>
</author>
<id>https://hdl.handle.net/1721.1/155896</id>
<updated>2024-08-02T03:05:43Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Baby Gym: Bridging the Gap between Reinforcement Learning and Human Infant Locomotor Development
Patel, Nikasha G.
Learning how to move is one of the most fundamental milestones humans achieve during their development, through complex interactions between neural control, biomechanics, and the environment. However, not every human learns to locomote the same way: babies exhibit remarkable variance in the stages they undergo before crawling and walking. While there exist years of empirical research quantifying and qualifying developmental stages in infant locomotion, we lack a computational model of how variations during the developmental stages affect overall crawling and walking behavior, one that would allow us to test hypotheses in simulation. To better understand how infants learn to move, a testable model of infant locomotion would complement experimental studies, allowing for model-guided interpretation of observed phenomena. This thesis fills that gap by introducing Baby Gym, a library for probing emerged behavior through reinforcement learning (RL) on an infant-like agent with the capacity to crawl and walk, compatible with both the OpenAI Gymnasium and DM Control APIs. Baby Gym will serve as a first step in enabling a cross-disciplinary open-source ecosystem of computational models to understand infant motor development.&#13;
&#13;
The work consists of the following: an extensive literature review that justifies the foundations for a baby RL environment; a Python-based infrastructure for cross-compatibility between Gymnasium and DM Control; a reproducible RL environment with several new reward functions that yield human-like locomotor development stages; and initial methods for evaluating the "human-likeness" of the emerged locomotion.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detecting Human Memory Processes via Bio-Signals</title>
<link href="https://hdl.handle.net/1721.1/155895" rel="alternate"/>
<author>
<name>Abdelrahman, Mona Magdy</name>
</author>
<id>https://hdl.handle.net/1721.1/155895</id>
<updated>2024-08-02T03:40:58Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Detecting Human Memory Processes via Bio-Signals
Abdelrahman, Mona Magdy
Bio-signals, such as eye movement data, photoplethysmography (PPG), and electrodermal activity (EDA), can provide insight into various cognitive states. Previous work has shown that eye movements, along with other bio-signals, differ when viewing familiar versus unfamiliar faces. Signals such as heart rate (derived from PPG) and skin conductance (derived from EDA) have also previously been shown to correlate with different states of memory. In this study, we collected simultaneous pupillary, PPG, and EDA signals while participants (n=32) transitioned between several cognitive states (learning, recognition, and recall). Using this data, we propose multi-modal machine learning methods to predict and evaluate whether a user is in a cognitive state of learning, recognition, or recall. We discuss the differences observed in the data between these cognitive states, as well as next steps and applications for this model.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Sensors, Data Analysis, and Non-Intrusive Load Monitoring: Foundations for Reliability-Centered Maintenance on Ships</title>
<link href="https://hdl.handle.net/1721.1/155894" rel="alternate"/>
<author>
<name>Skimmons, Jacob Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/155894</id>
<updated>2024-08-02T03:40:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Distributed Sensors, Data Analysis, and Non-Intrusive Load Monitoring: Foundations for Reliability-Centered Maintenance on Ships
Skimmons, Jacob Daniel
Advances in computing and sensing technology have brought powerful new tools within reach of shipboard engineers. With the right setup, operators can leverage statistics and digital signal processing tools to gain physical insight previously obscured by the sheer amount of work and specialized knowledge it once took to do the same. This thesis explores several applications of non-intrusive load monitoring (NILM) tools aboard a U.S. Coast Guard Fast-Response Cutter (FRC) patrol boat, novel analysis methods of the corrosion protection systems on the FRC, and practical ways of making smart data approachable. Once implemented, these methods will reduce the effort needed to safely operate a modern, high-tech ship by giving operators greater insight into how their systems perform in real-time.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Molecular Self-Assembly of Carbon Nanosheets via AFM&#13;
Nanoprinting</title>
<link href="https://hdl.handle.net/1721.1/155893" rel="alternate"/>
<author>
<name>Ibrahim, Malek M.</name>
</author>
<id>https://hdl.handle.net/1721.1/155893</id>
<updated>2024-08-02T03:52:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Molecular Self-Assembly of Carbon Nanosheets via AFM&#13;
Nanoprinting
Ibrahim, Malek M.
Traditional nanofabrication methods are currently enabled by top-down and, more recently, bottom-up approaches. The former involves highly specialized equipment and processes, such as photolithography, electron beam lithography, and focused ion beam milling, to etch or deposit materials at the nanoscale. These methods are well-established and widely used in the semiconductor industry, but they often require expensive equipment, complex processes, and environmentally harmful chemicals. The latter approach, bottom-up nanofabrication, has recently gained popularity due to its potential for low-cost, highly customizable, and environmentally friendly fabrication of nanoscale structures, though many challenges still exist in developing a scalable manufacturing method. As such, a variety of techniques have been investigated to enable bottom-up nanofabrication, including two-photon polymerization (2PP), electrohydrodynamic jet printing, dip-pen nanolithography, and solid-state polymerization, among others. In this thesis, we propose a new bottom-up nanofabrication approach by combining molecular self-assembly with atomic force microscopy (AFM), which we believe has the potential to create devices with unprecedented properties and functionalities in both the technological and biological domains. To this end, we first present the development of a proof-of-concept custom AFM nanoprinter for the molecular self-assembly of carbon nanosheets, and subsequently, we explore the design, fabrication, and initial testing protocols of custom 2PP-printed FluidFM cantilevers as an alternative to traditional FluidFM probes for more general AFM nanoprinting applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design, Development, and Testing of an Unmanned Surface Vessel (USV) for Oyster Aquaculture</title>
<link href="https://hdl.handle.net/1721.1/155892" rel="alternate"/>
<author>
<name>Dapoz, Annemarie</name>
</author>
<id>https://hdl.handle.net/1721.1/155892</id>
<updated>2024-08-02T03:22:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design, Development, and Testing of an Unmanned Surface Vessel (USV) for Oyster Aquaculture
Dapoz, Annemarie
To sustainably feed the growing worldwide population, development of aquaculture technology is necessary; however, it lags heavily behind that for terrestrial agriculture. The Oystermaran team at MIT Sea Grant is working on developing a vehicle to address a section of this need. Close-quarters flip-bag oyster farming, common in Massachusetts, is a physically demanding job that is done entirely manually, as there is no existing technology that fits into the crowded oyster field. The team developed the Oystermaran, an unmanned surface vessel designed specifically to maneuver through the crowded farm and flip the baskets. This thesis covers the complete mechanical design, development, and initial testing of the second Oystermaran vehicle. Built as a flexible design to allow adaptation and tuning on-site, the Oystermaran V2 featured interchangeable bows and adjustable frame and mechanism dimensions, and added mechanisms and capabilities that aquafarmers requested. Multiple rounds of testing and adjustment were conducted, and the Oystermaran V2 proved to be a complete platform that the team can continue to test and develop toward a fully autonomous vehicle.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Manufacturability Assessment of the Navy Integrated Power and Energy Corridor (NiPEC)</title>
<link href="https://hdl.handle.net/1721.1/155891" rel="alternate"/>
<author>
<name>Curran, Emily Alice</name>
</author>
<id>https://hdl.handle.net/1721.1/155891</id>
<updated>2024-08-02T04:03:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Manufacturability Assessment of the Navy Integrated Power and Energy Corridor (NiPEC)
Curran, Emily Alice
The growing electrical demands of sophisticated naval vessels necessitate the development of advanced power distribution methods. With the U.S. Navy’s shift towards fully electric ships, exemplified by the Zumwalt class destroyer and the forthcoming DDG(X), the demand for electrical power on future ships is projected to exceed 100 megawatts. To meet this challenge, the Massachusetts Institute of Technology (MIT) Sea Grant Program’s Design Laboratory, in collaboration with the Electric Ship Research and Development Consortium (ESRDC), is developing the Navy Integrated Power and Energy Corridor (NiPEC). This innovative system is designed to transform power management in all-electric warships through the use of modular units for energy management and power electronic building block (PEBB) technology. &#13;
Substantial groundwork has been established on the components and initial configurations of NiPEC. The collaborative team is working to develop not only a more robust power distribution system, but also an infrastructure that is simpler to construct, install, and maintain onboard. A next step of development focuses on evaluating the design’s manufacturability and the feasibility of manufacturing and installing the system aboard ships. This study explored the principles of Design for Manufacturability (DFM) and Design for Production (DFP) and then defined how these concepts apply to the Power Electronic Power Distribution Systems (PEPDS) and the NiPEC project. By leveraging the principles of DFM and DFP, this thesis proposes criteria for assessing the overall manufacturability of the NiPEC and its subsystems. By establishing criteria based on the principles of DFM as they pertain to NiPEC and naval applications, system designs may be objectively evaluated throughout the design phase. This thesis applies the proposed evaluation criteria to current NiPEC cooling system designs to illustrate the application of these criteria. This evaluation also highlights the trade-offs between manufacturability and other key metrics such as cost, reliability, and maintainability. These criteria may be useful in evaluating the design and functionality of systems and subsystems, steering design choices towards solutions that are not only technically sound, but also practical for manufacturing and installation. This approach ensures the alignment of the NiPEC system with the evolving needs of naval power management, and further enables its successful implementation on future all-electric warships. With this evaluation, this thesis begins to bridge the gap between the current state of research and the practical deployment of a next-generation shipboard power distribution system.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proximity Sensors for a High-Bandwidth, Low-Latency Robotic Manipulation Object Avoidance Controller</title>
<link href="https://hdl.handle.net/1721.1/155889" rel="alternate"/>
<author>
<name>Han, Jessica</name>
</author>
<id>https://hdl.handle.net/1721.1/155889</id>
<updated>2024-08-02T03:17:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Proximity Sensors for a High-Bandwidth, Low-Latency Robotic Manipulation Object Avoidance Controller
Han, Jessica
Robotics holds the promise of transforming industries, from automating recycling to managing household chores, by enabling machines to perform tasks with human-like dexterity. However, current robotic manipulation systems struggle to achieve the real-time responsiveness required for such tasks. Traditional systems rely on cameras, which slow down control loops with dense and difficult-to-process data. This thesis addresses the need for real-time control in robotic manipulation by utilizing proximity sensors in a high-bandwidth, low-latency object avoidance reflex controller on the Biomimetic Robotics Lab’s dexterous robotic manipulation platform. The research focuses on the two most viable proximity sensors for robotic manipulation: the STMicroelectronics VL6180X Time-of-Flight sensor and the Thinker Phase-Modulated-Light sensor. These sensors are characterized based on their measurement range, error, variance, field-of-view, and convergence time to determine their usability in an object avoidance reflex. Following characterization, a study on the integration of these sensors into the manipulation platform is performed to assess sensing latency and bandwidth implications. Finally, validation of the optimal sensor-controller configuration for the object avoidance reflex—averaging two time-of-flight sensors with a linear virtual force—shows an improvement in bandwidth from 33 Hz to 115 Hz, enhancing the reactivity and stability of the object avoidance reflex. Overall, this research provides a comprehensive study on the individual sensor and sensor-integration levels of proximity sensors for object avoidance reflexes. It enables future researchers to be confident in the manipulation platform’s performance for further controls-level research.
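As a hedged illustration of the winning configuration described above, a minimal sketch of averaging two time-of-flight readings into a linear virtual repulsive force might look like the following (names, units, and gains are illustrative assumptions, not taken from the thesis):

    # Hypothetical sketch: average two ToF distances and map the result to a
    # linear, spring-like virtual force for an object avoidance reflex.
    def virtual_force(d1_mm, d2_mm, d_active_mm=100.0, k_n_per_mm=0.02):
        d = 0.5 * (d1_mm + d2_mm)                  # fuse the two sensors
        penetration = max(0.0, d_active_mm - d)    # zero outside active zone
        return k_n_per_mm * penetration            # repulsive force magnitude

The force is zero until an obstacle enters the active zone and then grows linearly, matching the "linear virtual force" behavior named above.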
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Dynamic Manipulation on an Anthropomorphic Robotic Table Tennis Platform</title>
<link href="https://hdl.handle.net/1721.1/155886" rel="alternate"/>
<author>
<name>Cancio, Kendrick D.</name>
</author>
<id>https://hdl.handle.net/1721.1/155886</id>
<updated>2024-08-02T03:36:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Towards Dynamic Manipulation on an Anthropomorphic Robotic Table Tennis Platform
Cancio, Kendrick D.
Specialized robots whose morphologies address narrow tasks are capable of super-human precision, speed, and accuracy. However, with generalized anthropomorphic designs coming to the forefront of robotics, we have yet to achieve parity with the best human performance on these platforms, particularly in dynamic manipulation, which encompasses tasks such as throwing, catching, and striking. This thesis documents work toward a robotic table tennis platform that will enable the development of planning and control algorithms for dynamic manipulation. Specifically, a fully integrated hardware platform is presented with two candidate vision systems and a 5 DOF anthropomorphic robotic arm. A dynamics model of the ball is introduced and validated for predicting the trajectory of the ball. To strike the ball, a nonlinear trajectory optimization problem is formulated for the arm and shown to be capable of generating various types of swings. This formulation is applied to the lower 4 DOF case by additionally considering the timing of the strike. Finally, nominal ball striking is demonstrated on hardware for the case of planar ball trajectories.
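As a hedged illustration (not the thesis's exact formulation), a nonlinear strike trajectory optimization with free strike time T could be posed as

    \min_{u(\cdot),\,T} \int_0^T \lVert u(t) \rVert^2 \, dt
    \quad \text{s.t.} \quad \dot{x} = f(x, u), \qquad
    p_{\mathrm{paddle}}(T) = p_{\mathrm{ball}}(T), \qquad
    \dot{p}_{\mathrm{paddle}}(T) = v_{\mathrm{strike}},

subject to joint and torque limits, where the terminal constraints couple the paddle to the predicted ball trajectory and v_strike is a desired paddle velocity at impact.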
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Neuronal Cell Classes and their Role in Cognition</title>
<link href="https://hdl.handle.net/1721.1/155884" rel="alternate"/>
<author>
<name>Huang, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/155884</id>
<updated>2024-08-02T03:15:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Investigating Neuronal Cell Classes and their Role in Cognition
Huang, Emily
Classifying neurons into different cell classes is both an idea that has existed since the origins of neuroscience and one that is essential to understanding the complex interactions of the brain. While there has been a substantial effort to categorize neurons morphologically, molecularly, and physiologically in in vitro studies, there is a gap in experiments performed on awake and behaving animals. Using data collected from macaque monkeys performing a working memory task, and employing an unsupervised Gaussian mixture model (GMM) clustering algorithm, a number of different cell classes and their defining features were distinguished in area 7A, the lateral intraparietal area (LIP), the dorsolateral and ventrolateral prefrontal cortex (PFC), and the extrastriate visual area (V4). While the number of cell classes found across areas differed, several classes appeared to be correlates across areas. Classes in each area also showed functional differences in information encoding during predictable trials and distributional differences in depth. This signifies both the potential for functionally distinct cell classes involved in prediction and the existence of universal cell classes across different areas.
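As a hedged sketch of the kind of unsupervised GMM clustering described (the feature set and the class-count selection criterion are assumptions, not the thesis pipeline):

    # Hypothetical sketch: cluster per-neuron feature vectors with a GMM and
    # pick the number of cell classes by BIC.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    features = np.random.rand(500, 4)   # placeholder: 500 neurons x 4 features
    models = [GaussianMixture(n_components=k, random_state=0).fit(features)
              for k in range(2, 9)]
    best = min(models, key=lambda m: m.bic(features))  # lowest BIC wins
    labels = best.predict(features)     # cell-class assignment per neuron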
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aging Changes Cell Mechanics and Dynamics with a Backbone of Cytoplasmic Crowding</title>
<link href="https://hdl.handle.net/1721.1/155881" rel="alternate"/>
<author>
<name>Lee, Lani Dakyoung</name>
</author>
<id>https://hdl.handle.net/1721.1/155881</id>
<updated>2024-08-02T03:00:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Aging Changes Cell Mechanics and Dynamics with a Backbone of Cytoplasmic Crowding
Lee, Lani Dakyoung
Aging is a biological process that is correlated with life-altering and terminal diseases, and while the population of the elderly increases every year, the mechanism behind aging and its physical changes is not fully understood. Many aging studies traditionally focus on molecular-level changes, but much of the physical effect of aging could be better understood in cells, which are the fundamental building blocks of life. In particular, because the cytoplasm comprises approximately 70% of cells and its physical properties strongly influence biological processes, understanding its mechanics provides a more comprehensive description of aging. Using cells from well-established aging mouse models, we investigate the morphological and dynamic changes of aging cells and how they relate to the physical state of the cytoplasm. Using particle fluctuation, optical tweezers, and force spectrum microscopy, we demonstrate that aging halves motion inside the cytoplasm due to doubled stiffness, while active forces remain statistically similar. In addition, we take tomograms of cell refractive index that indicate a denser cytoplasm and 3D images that display decreased cell volume, hinting that aging causes a more crowded interior, in line with the physical differences we observe. We investigate some key functional differences at the cellular level: confocal images and videos show that aged cells spread larger and rounder and that their motion decreases. 3D morphology and ECM structure, as well as contractility measurements from traction force microscopy, indicate a changed cell-environment interaction. We also measure an increased nucleus-to-cell volume ratio, an important marker in cell biology that may indicate changes in cell maturity or even a connection to cancer malignancy. Our results imply a crucial physical mechanism behind cellular-level changes due to aging, helping to reconcile the physical and nonphysical changes investigated in the aging literature. This study provides an extensive investigation of the cytoplasm and connects its physical state to the changes in cell mechanics and dynamics observed from aging.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrodynamic Analysis of Arrays of General Bodies</title>
<link href="https://hdl.handle.net/1721.1/155880" rel="alternate"/>
<author>
<name>Cotey, Sarah M.</name>
</author>
<id>https://hdl.handle.net/1721.1/155880</id>
<updated>2024-08-02T03:14:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Hydrodynamic Analysis of Arrays of General Bodies
Cotey, Sarah M.
Wave energy is one of the world’s largest untapped sources of renewable energy. However, wave energy farms are in an early stage of development, and relevant research in this field has not produced a general agreement on design approach. Many research articles have been published analyzing the optimal geometry of a single wave energy converter (WEC) or the arrangement of a narrow range of varied geometries. This thesis seeks to expand on this research to study the effects of both WEC arrangement and varied body geometry. An optimal combination of WEC geometry and array configuration to maximize energy absorption from scattered and radiated wave interactions between bodies can be determined using computational methods. To lay the groundwork for this, a partial wave decomposition model was developed to describe the wave-body interaction of a body of general shape. Hydrodynamic behavior was modeled using potential flow and linear wave theories, in line with other research in this area. Bodies of varied shapes were modeled using computer aided design (CAD) software. The hydrodynamic response of the isolated body problem was subsequently analyzed using the WAMIT boundary element method (BEM) program. Resulting velocity potentials, excitation forces, and other hydrodynamic quantities were then processed using a partial wave mathematical model to determine each body’s unique diffraction and force transfer matrices. These characteristic quantities were then input into an in-house multiple-wave-scattering interaction program to analyze the system response in various configurations. The power gain of these arrays was studied to determine the magnitude of the power absorption increase relative to the body in isolation. The results were analyzed to determine array and body geometry designs that produce improved system response and overall WEC efficiency.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Magnetohydrodynamic Induction Pump Jet Propulsor for Undersea Vehicles</title>
<link href="https://hdl.handle.net/1721.1/155879" rel="alternate"/>
<author>
<name>Daus, Jonathan J.</name>
</author>
<id>https://hdl.handle.net/1721.1/155879</id>
<updated>2024-08-02T04:06:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Magnetohydrodynamic Induction Pump Jet Propulsor for Undersea Vehicles
Daus, Jonathan J.
There has been strong interest in magnetohydrodynamic (MHD) technology for use in marine propulsion for over 60 years, but progress has been limited by the inability to create strong enough magnetic fields (&gt; 5T). The Defense Advanced Research Projects Agency (DARPA), Defense Sciences Office (DSO), recently released a Broad Agency Announcement (BAA) for their Principles of Undersea Magnetohydrodynamic Pumps (PUMP) program, soliciting the design and prototype development of an MHD propulsor for naval applications. This thesis investigates the design of an induction-based MHD propulsor for use on submarines. The objective of this work was to optimize the propulsor design such that its electrical efficiency ηE exceeds ηD, DARPA’s goal efficiency of 70%, while achieving the required thrust for tactical speeds (∼ 10 kts). This research models the propulsor using Actuator Disk Theory (ADT) and incorporates inflow boundary layer effects and hydrodynamic drag to develop a total propulsor efficiency. A significant investigation addressed mitigation of the finite-length effect that has traditionally led to low efficiencies (5%-45%) in MHD liquid-metal induction pumps. Analysis showed that the design of the current waveform was essential to achieving relevant efficiencies; the Nuttall window was selected as the optimal waveform for this application. Additionally, this work concluded that the propulsor efficiency is largely dependent upon the selection of the current carrier wavenumber k₀ and the wavenumber slip ε, defined as ε = ω/V − k₀, where ω is the angular frequency of the AC current and V is the fluid velocity inside the shroud of the propulsor. Results showed that ηE ≥ ηD was achievable for practical magnetic field strengths (≤ 20T).
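In display form, the quantities quoted above are

    \varepsilon = \frac{\omega}{V} - k_0, \qquad \eta_E \ge \eta_D = 0.70,

so a positive slip corresponds to the traveling current wave moving faster than the fluid in the shroud, which is what allows an induction pump to impart thrust.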
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Distributed Simulation Cluster for MOOS-IvP</title>
<link href="https://hdl.handle.net/1721.1/155876" rel="alternate"/>
<author>
<name>Becker, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/155876</id>
<updated>2024-08-02T03:09:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Development of a Distributed Simulation Cluster for MOOS-IvP
Becker, Kevin
Batch testing simulations of autonomy software make verification and optimization much easier and more robust. The ability to verify and optimize code is particularly important for large, expensive assets, such as marine vessels, where the cost of any failure is high. This thesis discusses the architecture, implementation, and use of an expandable simulation toolbox for MOOS-IvP. The toolbox utilizes Monte Carlo simulations since these are incredibly flexible for differing scenarios. A distributed architecture improves robustness since a single failure does not bring the cluster down. Additionally, personal computers may be added to the cluster during off hours, thus increasing the average computing power.
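As a hedged sketch of the batch idea on a single machine (the run_simulation entry point is hypothetical, not the toolbox's API; the real cluster distributes runs across hosts):

    # Hypothetical sketch: fan out seeded Monte Carlo runs across local workers.
    from concurrent.futures import ProcessPoolExecutor

    def run_simulation(seed):
        # placeholder for launching one seeded MOOS-IvP mission and
        # collecting its outcome metrics
        return {"seed": seed, "collisions": 0}

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(run_simulation, range(100)))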
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Exploration for Biological Fluid Sampling Platform</title>
<link href="https://hdl.handle.net/1721.1/155875" rel="alternate"/>
<author>
<name>Higginbotham, Haley O'Hara</name>
</author>
<id>https://hdl.handle.net/1721.1/155875</id>
<updated>2024-08-02T03:30:01Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design Exploration for Biological Fluid Sampling Platform
Higginbotham, Haley O'Hara
This work explores the design of an implantable, peristaltic pumping platform for chronic sampling of neuropeptides. The project drew inspiration from a pumping platform previously developed in the Cima Lab. Users have reported that the previous platform is difficult to use. The pump is also nearly 47x too large to be implanted in rats, which restricts its use to sedated or tethered animals. The project aimed to improve usability and enable implantation. Alternative fluidic junction methods and alternative actuation modes were investigated. The new junction design improves usability by enabling repeated, reversible attachment to the pump. The new, more efficient pump design reduces the pump volume by 49x, enabling implantation.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Participatory Methods in Technical Design: Household Biomass Stoves</title>
<link href="https://hdl.handle.net/1721.1/155871" rel="alternate"/>
<author>
<name>Richmond, Robyn C.</name>
</author>
<id>https://hdl.handle.net/1721.1/155871</id>
<updated>2024-08-02T03:30:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Participatory Methods in Technical Design: Household Biomass Stoves
Richmond, Robyn C.
Participatory Design represents an important methodology focused on involving people who experience problems in the process of defining and solving them. This is especially important in global development, where diverse stakeholders attempt to tackle poverty challenges. In this thesis, I analyze a case study of improving biomass stoves in the Himalaya through the lens of participatory design to inform design practice and research. Biomass cooking and heating cause high levels of indoor air pollution, especially in the Himalaya, where households need accessible and affordable wood fuel for cooking and heating during extreme winters. Prior to fieldwork, I facilitated ideation sessions to generate solutions to these challenges, and we pursued prototyping and testing of a chimney retrofit to a traditional stove. This incremental innovation had increased chances of long-term adoption and impact because it would not require users to change cooking practices or discontinue using their traditional stove. Lab testing resulted in several design guidelines, rather than optimized parameters, to enable fieldwork. In the field, the team co-designed a chimney clay stove with a lead user, trained under a local stove master in constructing improved clay stoves, and designed a one-pot clay chimney stove and modifications to metal chimney stoves using principles of participatory design. The chimney modification reduced indoor PM 2.5 and CO mass concentrations by 32.3% and 78.5%, respectively, while maintaining usability characteristics. Design experiences allowed the team to recognize the technical skills in materials and construction necessary for successful clay stove design and to document the cultural value placed on this expertise. The team also documented user innovations on stoves, which are sparsely documented in the literature but further demonstrate the feasibility and value of increased user participation in designing improved stoves. Inspired by fieldwork, I present a short review of literature on gender in biomass stove technology and recommendations to involve women and gender specialists in designing improvements to traditional stoves. In addition, I propose a new model for calculating thermal efficiency and a method for estimating space heating in biomass stoves used for cooking and heating. With the new model, clay multifunctional stoves can achieve up to 35% efficiency, which raises the standard for new stoves entering the market and better reflects actual usage and the fundamentals of thermal efficiency.
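For reference, the conventional water-boiling-test definition of thermal efficiency, which the proposed model presumably generalizes to credit space heating, is

    \eta = \frac{m_w c_p \, \Delta T + m_{\mathrm{evap}} \, h_{fg}}{m_{\mathrm{fuel}} \cdot \mathrm{LHV}},

where m_w is the mass of water heated through ΔT, m_evap the mass evaporated, h_fg the latent heat of vaporization, and LHV the fuel's lower heating value.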
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Local and global numerical analysis of a porous screen in free-stream flow</title>
<link href="https://hdl.handle.net/1721.1/155870" rel="alternate"/>
<author>
<name>May-Varas, Nicholas A.</name>
</author>
<id>https://hdl.handle.net/1721.1/155870</id>
<updated>2024-08-02T03:41:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Local and global numerical analysis of a porous screen in free-stream flow
May-Varas, Nicholas A.
A porous screen in a free-stream flow provides a model system for the analysis of nets, as used in fishing and aquaculture. In such applications, the forces on the net inform operational design and safety choices, whereas the flow past the net relates to the mixing of flow and nutrients in the wake. Most existing analyses of the flow past porous screens are based on experiments or simplified potential flow models. While both methods can lead to insightful results, open questions related to the details of the flow field, viscous effects, and the accuracy of the simplified models remain. To address these questions, we set up and run high-fidelity numerical simulations of the free-stream flow past a two-dimensional model porous screen. The screen is formed by placing a series of solid circular cylinders orthogonal to a free-stream flow. As the number of cylinders is increased, the gaps between them decrease, which increases the solidity of the screen. The use of a free-space domain removes any artificial numerical blockage effects, consistent with a free-stream flow.&#13;
&#13;
Our analysis provides insights into the variation of the mean force coefficients as a function of the screen solidity, as well as their temporal fluctuations and spatial distributions across the screen. Further, we compute the flow rates and mean velocities through the screen gaps, and visualize the local flow field and wake. The results show that the mean value and fluctuations of the drag coefficients increase with the screen solidity. Further, the flow rate through the screen decreases monotonically as the solidity increases. The mean velocity through the screen behaves non-monotonically, however, as it first increases and then decreases with screen solidity. Comparing our results to an existing potential flow model shows that the model predicts the flow rate well. However, the total drag coefficient is significantly lower in the predictions than in the simulation results, pointing to the need for a better understanding of the pressure jump across the screen.
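For concreteness, the solidity of such a screen is commonly defined as the blocked-area fraction; for N cylinders of diameter d spanning a screen of extent L,

    \sigma = \frac{N d}{L},

so adding cylinders at fixed d and L drives σ toward 1, the solid-plate limit.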
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Engineering Education: Integration of the Desktop Fiber Extrusion Device (FrED) for Hands-On Learning in Smart Manufacturing.</title>
<link href="https://hdl.handle.net/1721.1/155869" rel="alternate"/>
<author>
<name>Jaiswal, Somesh Sunil</name>
</author>
<id>https://hdl.handle.net/1721.1/155869</id>
<updated>2024-08-02T03:26:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enhancing Engineering Education: Integration of the Desktop Fiber Extrusion Device (FrED) for Hands-On Learning in Smart Manufacturing.
Jaiswal, Somesh Sunil
This thesis explores the integration of the Desktop Fiber Extrusion Device (FrED) into smart manufacturing education, emphasizing its transformative potential in engineering curricula. The research focuses on the development and application of educational and research-grade FrED models, designed to provide hands-on learning experiences remotely, which is increasingly pertinent in the evolving landscape of engineering education. Through iterative design and implementation of control systems, including Proportional-Integral-Derivative (PID) and Deep Reinforcement Learning (DRL), the study enhances the operational precision and educational utility of FrED. Furthermore, the introduction of an innovative, low-cost tension sensor in the fiber extrusion process represents a significant enhancement in monitoring and controlling the mechanical properties of extruded fibers, which is critical for understanding manufacturing dynamics. The thesis also proposes a structured coursework framework titled "Remote Monitoring and Control in Smart Manufacturing" that utilizes FrED to teach key concepts of smart manufacturing. This coursework is designed to equip students with the skills to operate advanced manufacturing tools and analyze real-time data for process optimization. The findings demonstrate that FrED not only supports the theoretical and practical education of engineering students but also serves as a bridge to high-tech industrial applications, making it a pivotal tool in the digital transformation of manufacturing education. This work lays the groundwork for future research on the scalability of such educational tools and their integration into different educational settings globally, potentially democratizing access to cutting-edge engineering education.
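As a hedged sketch of the PID portion of such a control loop (gains, timing, and the measured variable are placeholders, not FrED's actual values):

    # Hypothetical sketch: discrete PID of the kind a fiber-diameter loop uses.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def update(self, setpoint, measured):
            err = setpoint - measured
            self.integral += err * self.dt             # accumulate I term
            deriv = (err - self.prev_err) / self.dt    # finite-difference D term
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv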
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Co-Design of Resource Limited Genetic Networks: Tuning System Parameters to Satisfy Specifications</title>
<link href="https://hdl.handle.net/1721.1/155868" rel="alternate"/>
<author>
<name>Celeste Junior, Carlos Eduardo</name>
</author>
<id>https://hdl.handle.net/1721.1/155868</id>
<updated>2024-08-02T03:37:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Co-Design of Resource Limited Genetic Networks: Tuning System Parameters to Satisfy Specifications
Celeste Junior, Carlos Eduardo
Modular composition is a very powerful and widely used tool in engineering disciplines, as it helps keep system complexity tractable. Its main idea is that parts of a system can be encapsulated into black-box models characterized only by their input-output behavior, which eliminates the need to consider the complex dynamics inside the black box. Moreover, this process can be applied iteratively, allowing the design of highly complex systems, such as computer chips. But this powerful tool is not always available, as in synthetic biology, where engineered systems in cells have very complex and intricate interconnections between subsystems, which makes encapsulating parts of these systems a very challenging endeavor. There are many reasons for this failure of modularity in biological systems, such as load effects (retroactivity), unknown interactions, and resource competition, the last of which is the focus of this work. Recent efforts to achieve modular design in systems with resource competition have focused on adding additional machinery to the cell to either isolate the subsystems or control the availability of the shared resource. In this work we explore a co-design approach: instead of adding additional machinery to the cell, we aim to tune system parameters to satisfy a specification. To this end we provide conditions on the system parameters for a network of subsystems to meet a given specification, derived using mathematical logic and ideas for tackling similar problems. This work thus lays the foundations for further development of co-design techniques for genetic networks with production and/or degradation resources, where one may be able to mitigate the effects of one type of resource sharing by tuning the other.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Frequency Modulated Continuous Wave Radar Based Fall Risk Monitoring System</title>
<link href="https://hdl.handle.net/1721.1/155865" rel="alternate"/>
<author>
<name>Copeland, Daniel Ilan</name>
</author>
<id>https://hdl.handle.net/1721.1/155865</id>
<updated>2024-08-02T03:38:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Frequency Modulated Continuous Wave Radar Based Fall Risk Monitoring System
Copeland, Daniel Ilan
Falls represent a significant health risk, especially for the elderly. Fortunately, interventions have been shown to decrease falls when clinicians identify at-risk patients. However, factors such as medication changes, illness, and injuries can rapidly increase fall risk, making timely clinical identification and subsequent interventions challenging to implement. Our study introduces a comprehensive approach to assessing fall risk using a frequency-modulated continuous-wave (FMCW) radar system, addressing the need for frequent, low-cost, long-term balance monitoring solutions. This technology is compared with ground-truth contact-based lab sensors like force plates and motion capture systems, establishing a foundation for accurate balance assessments in home settings. In our cross-sectional analysis, participants performed the one-legged stand test (OLST) with simultaneous data collection from FMCW radar, force plates, and motion capture systems. By integrating the FMCW radar with machine learning algorithms, we achieved 98.4% accuracy in identifying OLST foot movements and an R-squared of 0.70 in predicting force plate patterns, demonstrating the system’s nuanced capability for balance performance evaluation. Additionally, we examine the efficacy of combining radar technology with machine learning to identify movements similar to those performed in fitness, clinical, and rehabilitation settings. We also explore the use of simulations for optimizing radar system configurations. This thesis demonstrates the effectiveness of FMCW radar technology in laboratory settings and its potential for home-based health monitoring. The study highlights the transformative potential of integrating radar technology with machine learning through detailed experimentation and analysis, offering a versatile tool for health monitoring and fall risk assessment.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Data from the U.S. Shipbuilding Industry and Application to Improve Performance Metrics</title>
<link href="https://hdl.handle.net/1721.1/155863" rel="alternate"/>
<author>
<name>Willis, Heather L.</name>
</author>
<id>https://hdl.handle.net/1721.1/155863</id>
<updated>2024-08-02T03:43:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Analysis of Data from the U.S. Shipbuilding Industry and Application to Improve Performance Metrics
Willis, Heather L.
The U.S. Navy is seeking to increase the number of ships in the fleet due to growing threats; however, shipyards are facing numerous issues, leading to delays in the delivery of naval warships along with cost overruns. At the same time, there is significant data available from the construction process, creating an opportunity for data analysis with the intention of identifying and hopefully resolving some of these issues. Addressing these concerns, this thesis scrutinizes Earned Value Management (EVM) data from actual shipbuilding projects, capitalizing on the datasets available to help identify the root causes of such delays. The study begins with data cleaning, an essential step that ensures the real-world data’s integrity and relevance. Preliminary data analysis was then conducted to explore cost variance, schedule adherence, and the learning curve effect observed across different hulls, setting the stage for deeper investigative modeling. Following model exploration and selection, the core of the thesis is a predictive model that uses polynomial and linear regression to predict the progression of costs over time, compared against the prediction metrics currently in use. A regression model was chosen over more complex models like a long short-term memory (LSTM) neural network due to its simplicity, interpretability, and ease of retraining with new data, ensuring that stakeholders can readily understand and apply the model’s insights while maintaining its relevance over time. The target prediction metric for this model is the Actual Cost of Work Performed (ACWP); however, similar models could also be leveraged to predict schedule. In creating this model, several features were analyzed, including both the Budgeted Cost of Work Scheduled (BCWS) and the Budget at Completion (BAC), both known metrics at the start of construction. After testing various combinations of these features and comparing the mean squared error (MSE), the chosen model uses time and BCWS divided by BAC as input features, the latter serving as a budgeted completion percentage. The model is tailored further to reflect industry-specific cost behaviors, enforcing non-negative, cumulative cost predictions. This model was trained, tested, and validated using EVM data from one key event (KE), a specific subset of the overall ship construction process, with the intent that it could be applied to all key events and aggregated to provide cost predictions for an entire hull. This thesis will ideally serve as a framework for shipyards to improve project cost predictions and identify indicators of large cost overruns early enough to correct them within the ship construction timeline.
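As a hedged sketch of the described regression (feature names, degree, and data are illustrative, not the thesis's dataset), the key steps, including the non-negative, cumulative enforcement, might look like:

    # Hypothetical sketch: predict ACWP from time and BCWS/BAC, then enforce
    # non-negative, cumulative (monotone non-decreasing) cost predictions.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    t = np.arange(1, 25, dtype=float)            # months into the key event
    bcws_over_bac = np.linspace(0.05, 1.0, 24)   # budgeted completion percentage
    acwp = 1000 * bcws_over_bac**1.1             # placeholder training target

    X = PolynomialFeatures(degree=2).fit_transform(
        np.column_stack([t, bcws_over_bac]))
    model = LinearRegression().fit(X, acwp)

    pred = model.predict(X)
    pred = np.maximum.accumulate(np.maximum(pred, 0.0))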
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design, Analysis and Modeling of a Modular Navy Integrated Power and Energy Corridor Cooling System</title>
<link href="https://hdl.handle.net/1721.1/155862" rel="alternate"/>
<author>
<name>Meyers, Wade T.</name>
</author>
<id>https://hdl.handle.net/1721.1/155862</id>
<updated>2024-08-02T03:33:41Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design, Analysis and Modeling of a Modular Navy Integrated Power and Energy Corridor Cooling System
Meyers, Wade T.
In response to the escalating demand for electricity onboard future naval vessels, the Design Laboratory of the Massachusetts Institute of Technology (MIT) Sea Grant Program, as part of a U.S. Navy research consortium for next-generation all-electric warships, is pioneering the development of the Navy Integrated Power and Energy Corridor (NiPEC). This innovative system is designed to enhance the power distribution capabilities of warships like the forthcoming DDG(X), which is expected to require significant electrical power to support advanced offensive and defensive systems. NiPEC features a network of modular compartments that independently or collectively perform energy storage, conversion, protection, control, isolation, and transfer functions. Central to this system is the integrated Power Electronics Building Block (iPEBB), a self-contained, power-dense converter tailored to manage the ships' stochastic and dynamic loads efficiently. However, realizing the full potential of iPEBB's advanced semiconductor technology presents significant challenges, particularly in thermal management. This aspect is further complicated by the constraints imposed by indirect liquid cooling methods and the necessity for sailor-friendly design considerations. Preliminary analyses by Padilla et al. on heat dissipation strategies, as well as Reyes’ and Chaterjee’s subsequent design proposal for a NiPEC liquid cooling system, highlight the operational and maintenance challenges in cooling the system's numerous components. &#13;
&#13;
This thesis presents a comprehensive approach to designing a modular, compact, and indirect liquid cooling system for the NiPEC to be deployed across future all-electric Navy destroyer warships. Leveraging a combination of first-principles thermodynamic analysis, multi-physics-based modeling, and numerical analysis, the study builds upon Reyes' and Chaterjee’s preliminary design to propose enhanced cooling system architectures that meet stringent military standards while ensuring robust thermal management. Further, the design and detailed analysis of this compact heat exchanger significantly contribute to enabling the modular construction of the NiPEC cooling system alongside the concurrent assembly of the NiPEC electrical system. This investigation also delves into the extraction and application of response surface models that elucidate the dynamic interdependencies among various response variables—such as the overall heat transfer coefficient and heat transfer rates—arising from changes in explanatory variables like inlet velocities, temperatures, and the specific geometry of the heat exchanger. This multifaceted analysis not only refines the cooling system's efficiency but also aligns it with the modular integration requirements of military naval applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring Learning on the Job</title>
<link href="https://hdl.handle.net/1721.1/155860" rel="alternate"/>
<author>
<name>Liu, Jiageng</name>
</author>
<id>https://hdl.handle.net/1721.1/155860</id>
<updated>2024-08-02T03:12:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Measuring Learning on the Job
Liu, Jiageng
I study on-the-job learning at IT firms. Using detailed online activity data on 144,000 employees matched with 25,000 firms across the globe, I measure the intensity and the direction of technology acquisition, a key input to innovation. A standardized measure shows that employee-entrepreneurs who join small, young firms spend more time learning about new software than similar employees who join large incumbent firms. They engage with more diverse and rarely combined topics, behaviors that are found to be associated with more radical innovations. Within firms, more actively learning employees work on more projects and start reviewing others' code sooner. The results are consistent with channels of firm-employee matching and job security at incumbent firms. They complement Akcigit and Goldschlag (2023), which finds that inventors apply for fewer patents and receive higher wages after joining incumbent firms. A heterogeneous supply of unobserved learning cannot explain all of the results. I also document life-cycle patterns of learning behavior that are consistent with predictions of standard labor theory; such predictions had previously been challenging to test beyond formal education.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Multi-Salt Transport and Salt Leakage Pathways in Bipolar Membrane Electrodialysis for Brine Valorization</title>
<link href="https://hdl.handle.net/1721.1/155859" rel="alternate"/>
<author>
<name>Wegmueller, Jakob Max</name>
</author>
<id>https://hdl.handle.net/1721.1/155859</id>
<updated>2024-08-02T03:23:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Analysis of Multi-Salt Transport and Salt Leakage Pathways in Bipolar Membrane Electrodialysis for Brine Valorization
Wegmueller, Jakob Max
Bipolar membrane electrodialysis (BMED), a process which converts a concentrated saline feed into acidic, basic, and desalinated streams, has promising applications across resource recovery and brine valorization. BMED can be used to produce valuable acids and bases from reverse osmosis or nanofiltration concentrate while desalinating the waste brine and reducing disposal costs. In the first part of this thesis, we assess the feasibility of applying BMED to the nanofiltration permeate of groundwater that contains a high concentration of nitrate and sodium chloride. We analyze the transport of different ions in the mixed solution and compare the performance of the mixed salt permeate to an idealized single salt feed. BMED was shown to be just as effective at producing acid and base from the polluted groundwater composition as from a single salt solution. BMED therefore appears to be a feasible means to create value from, and reduce the volume of, waste brine in this application. The second part of this thesis examines the transport of salt impurities in the produced acid and base streams. BMED membranes allow small amounts of salt leakage that lower the purity and value of the acid and base generated. Impurities in the base stream may originate from the feed stream or the acid stream. While the total concentration of impurities in the base stream can be tracked, in conventional BMED operation pinpointing the origin of those impurities is not possible without making presumptions. A novel membrane stack and method is proposed for distinguishing between and measuring the flux of salt leakage from the acid and feed streams into the base stream (the same analysis is also done for the acid stream). For feed concentrations between 0.25-2.25 M and current densities from 10-100 mA/cm², the impurity fluxes from the two sources are always of the same order of magnitude, and neither is negligible. Furthermore, lowering the feed stream concentration and operating at a higher current density decreased the net flux of impurities, resulting in a higher acid and base purity.
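As a hedged illustration of how such a leakage flux is typically computed from measured concentrations (not necessarily the thesis's exact procedure),

    J = \frac{V \, \Delta c}{A \, \Delta t},

where V is the receiving (e.g., base) stream volume, Δc the measured impurity concentration change over time Δt, and A the active membrane area.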
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for Testing COLREGS Compliance in Autonomous Surface Vehicles</title>
<link href="https://hdl.handle.net/1721.1/155858" rel="alternate"/>
<author>
<name>Molina, Mikala N.</name>
</author>
<id>https://hdl.handle.net/1721.1/155858</id>
<updated>2024-08-02T03:26:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Methods for Testing COLREGS Compliance in Autonomous Surface Vehicles
Molina, Mikala N.
Globally, there is an increasing number of uncrewed and autonomous surface vessels operating at sea. Preventing collisions at sea is of paramount importance to safeguard lives, protect the marine environment, and maintain smooth maritime operations. Effectively preventing collisions between manned and uncrewed vessels requires that uncrewed vessels maneuver in a manner that is both safe and predictable to human mariners. Consequently, there is a pressing need to develop a comprehensive testing architecture that rigorously evaluates and verifies the level of compliance of autonomous marine vehicles with the International Rules for Preventing Collisions at Sea, or "Collision Regulations" (COLREGS). To address the critical need for COLREGS compliance verification in Autonomous Surface Vehicles (ASVs), this thesis introduces test cases. These test cases are designed to assess the ability of autonomous vessels to respond appropriately to various navigational scenarios and interactions with conventional, manned vessels. The development of the test cases draws upon historical collision data, navigational incidents, and expert knowledge to encompass a wide range of real-world situations, with simplicity of real-world implementation in mind.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing Challenges of Volume Controlled Cavity Expansion (VCCE) for In-Vivo Tissue Testing</title>
<link href="https://hdl.handle.net/1721.1/155857" rel="alternate"/>
<author>
<name>Spaeth, Katherine Charlotte</name>
</author>
<id>https://hdl.handle.net/1721.1/155857</id>
<updated>2024-08-02T03:08:29Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Addressing Challenges of Volume Controlled Cavity Expansion (VCCE) for In-Vivo Tissue Testing
Spaeth, Katherine Charlotte
The prevalence of Traumatic Brain Injuries (TBIs) is a serious health concern for U.S. military members. Mild TBIs, some of which have been shown to result from prolonged exposure to repeated artillery blasts, are particularly challenging to identify with existing diagnostic imaging technology. In general, as with other soft-tissued organs, there exists a gap in understanding of how biological tissues deform under extreme loading conditions. Understanding these mechanics has applications beyond diagnosing physical bodily injuries, as diseased tissues have also been shown to demonstrate differing mechanical properties. Volume Controlled Cavity Expansion (VCCE) is a novel, needle-based probing methodology developed to capture rate-dependent ex-vivo and in-vivo tissue material properties. In this thesis, the VCCE methodology was performed on numerous animal tissues as well as extracted human thyroids to study some of the challenges related to the translation of the VCCE lab technique into a medical diagnostics tool. To ensure a successful VCCE test, it was shown that the choice of needle and the insertion protocol must be altered depending on the type of biological tissue being tested. Additionally, in a clinical setting, VCCE was demonstrated to be a successful methodology for differentiating between diseased and healthy tissue. Using this mechanics-informed in-vivo tissue probing method, VCCE has applications for improved assessment and diagnostic tools for injured and/or diseased tissues, Personal Protective Equipment (PPE), and casualty transport safety guidelines.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automation of In-Bed Repositioning, Assistance to Sitting, and Transfer for Bedridden Patients via Robot Arms and Strap Interface</title>
<link href="https://hdl.handle.net/1721.1/155855" rel="alternate"/>
<author>
<name>Blake, Kaleb</name>
</author>
<id>https://hdl.handle.net/1721.1/155855</id>
<updated>2024-08-02T03:07:33Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automation of In-Bed Repositioning, Assistance to Sitting, and Transfer for Bedridden Patients via Robot Arms and Strap Interface
Blake, Kaleb
Mobility and immobility are fundamental aspects of a patient’s health. There are several factors that contribute to mobility impairments, including various medical conditions and injuries. Prolonged immobility has detrimental effects on many of the body’s vital organ systems and decreases quality of life in general. Caregivers work to help patients with different levels of mobility perform necessary tasks. Severely immobile or bedridden patients are the most difficult to handle. Caregivers often experience musculoskeletal disorders and lifting injuries in their line of work. Assistive devices were made to mitigate this, but their usage in practice is still limited, so caregiver injuries are still prevalent. This thesis presents a new idea that can automate in-bed motion, assistance to seated positions, and transfer for patients with severe immobility. Comfortable straps that wrap around the patient’s upper torso and thighs will be held by robot arms. The robot arms will perform movements that can control the torso and thigh angles, hip position in and out of the bed plane, and the normal force the bed provides at the hip. The control techniques described in this thesis include closed-loop control of a quasi-static formulation of the system and model-reference adaptive trajectory control. The results show that these methods hold promise for automating assistance of bedridden patients.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Modeling of Offshore Nuclear Platform Fuel and Transfer System</title>
<link href="https://hdl.handle.net/1721.1/155854" rel="alternate"/>
<author>
<name>Allison, Asia</name>
</author>
<id>https://hdl.handle.net/1721.1/155854</id>
<updated>2024-08-02T03:21:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design and Modeling of Offshore Nuclear Platform Fuel and Transfer System
Allison, Asia
The design and modeling of the fuel and transfer system aboard the Offshore Nuclear Platform (OFNP) aim to integrate Small Modular Reactors (SMRs) into an offshore setting. This endeavor is in line with global initiatives to mitigate temperature rise by 2050 by using nuclear energy for electrical generation (“Key Aspects of the Paris Agreement,” UNFCCC). A specific focus on high-temperature gas reactors, notably pebble bed reactors, chosen for their compact fuel form and capability for at-power fuel replenishment through TRISO fuel pebble recirculation, will be evaluated.&#13;
This study will review both historical and current high-temperature gas reactors, focusing on their development, application, and operational efficiencies. Special emphasis will be placed on methods for managing spent fuel, including storage and environmental considerations. The research will develop the platform’s conceptual fuel transfer system, detailing the design of the fuel storage, handling, and at-sea transfer systems and ensuring safety, particularly during at-sea offloading operations.&#13;
Additionally, the thesis will assess the platform's stability, transfer system structural integrity, and shielding design to determine its feasibility for offshore energy generation for the reactor’s lifetime.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Method for Photopolymerization 3D Printing of Recyclable Thermoplastic Polymers</title>
<link href="https://hdl.handle.net/1721.1/155850" rel="alternate"/>
<author>
<name>Tumkur Mahesh, Prajwal</name>
</author>
<id>https://hdl.handle.net/1721.1/155850</id>
<updated>2024-08-02T03:16:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Method for Photopolymerization 3D Printing of Recyclable Thermoplastic Polymers
Tumkur Mahesh, Prajwal
Conventional light-based processes used in additive manufacturing (AM), such as vat polymerization, yield non-recyclable thermoset polymers, which pose sustainability issues at scale. This thesis studies a method for photopolymerization 3D printing of the common polymers polyacrylonitrile (PAN) and polymethyl methacrylate (PMMA) to address the growing demand for low-waste production of high-resolution polymer parts with complex geometries in industrial-scale manufacturing. This new approach not only produces directly recyclable linear thermoplastic polymers but also enables the light-based printing of polymers soluble in their own monomer. &#13;
&#13;
It was previously demonstrated by Chazot et al. that photo-defined layers of polyacrylonitrile (PAN) can be formed at a liquid-liquid interface; this technique was named interfacial photopolymerization (IPP). In this thesis, which focuses on multilayer 3D printing (3D-IPP), the resolution and stability of layers formed by IPP are improved using a light-absorbing dye while incorporating a water-soluble polyethylene glycol binder to improve yield, printing speed, and mechanical properties. Joint initiation using commercial water-soluble photoinitiators V-50 and LAP, along with the addition of HCl and CaCl₂, further enhances printing performance by producing dense layers and reducing voids. Post-processing techniques are devised to preserve part geometry after printing, including controlled air drying, thermal post-processing with PEG infiltration, and the inclusion of compatible polymeric binders in the printing composition to minimize cracking and shrinkage. Additionally, hardware is developed to integrate the IPP process into a commercial projector-based 3D printer, demonstrating compatibility of the proposed chemistry with off-the-shelf hardware. The capability to digitally manufacture high-resolution 3D structures with IPP is demonstrated and the physical properties of the resulting composite polymer are characterized. While 3D-IPP cannot yet directly rival conventional manufacturing methods, the benign aqueous chemistry as well as the recyclability and circularity of produced parts offer a promising path towards sustainable and resource-efficient AM as the technology matures.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating optical microplastic detection methods using fluorescent staining through Nile Red</title>
<link href="https://hdl.handle.net/1721.1/155849" rel="alternate"/>
<author>
<name>Prasad, Suparnamaaya</name>
</author>
<id>https://hdl.handle.net/1721.1/155849</id>
<updated>2024-08-02T03:12:53Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Investigating optical microplastic detection methods using fluorescent staining through Nile Red
Prasad, Suparnamaaya
Microplastics (MPs) are small pieces of plastic debris, typically defined as smaller than 5 mm. Given that the global environment faces a growing plastic pollution crisis, an urgent need exists for rapid, low-cost microplastic detection systems to characterize the health and environmental risks posed by MPs. Fluorescent tagging of microplastics using Nile Red (NR) has recently emerged as an accessible and popular detection method. However, robust, standardized methods of using Nile Red to distinguish plastic from organic materials or to distinguish between polymers are still being developed. This thesis pursued different optical microplastic detection methods using NR-based fluorescent staining, with the ultimate goal of providing data that could be used towards building a polymer identification model that could be implemented in a low-cost detection system. Three different investigations are presented. First, the fluorescence emission spectra of various plastic and organic samples stained with Nile Red are presented. The motivation behind this study was to identify the strongest fluorescence emission peaks for NR-stained plastics under a series of different excitation wavelengths. The spectral results provide a preliminary basis for distinguishing Nile Red-stained plastics based on their fluorescent emission spectra alone. Second, this thesis presents a low-cost imaging set-up for fluorescent samples. The system applies the same excitation wavelengths and optical filters used to collect the spectral data. The images are then combined with the spectral data to illustrate another basis for rapidly distinguishing between different plastic polymers. Finally, an optical method for detecting microplastics in liquid samples using photodiodes is explored and discussed. Overall, this thesis contributes to the development of accessible microplastic detection technologies by leveraging the fluorescent properties of NR-stained plastics. The findings highlight the challenges and potential solutions for distinguishing plastics from organic materials and for distinguishing between different plastic polymers.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bayesian Relevance for Enhanced Human-Robot Collaboration</title>
<link href="https://hdl.handle.net/1721.1/155847" rel="alternate"/>
<author>
<name>Hernandez-Cruz, Vanessa</name>
</author>
<id>https://hdl.handle.net/1721.1/155847</id>
<updated>2024-08-02T04:03:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Bayesian Relevance for Enhanced Human-Robot Collaboration
Hernandez-Cruz, Vanessa
Intent prediction is a difficult yet critical component of seamless Human-Robot Collaboration (HRC). As robots become increasingly involved in helping humans with a variety of tasks, ranging from part assembly to healthcare and more, it is crucial to model and understand human intention. Many works still do not take advantage of the inherent relationships between objects, tasks, and the human model. Current human intent prediction methods, such as Gaussian Mixture Models and Conditional Random Fields, are generally less interpretable due to their lack of causality between variables. A novel framework called Bayesian Relevance (BR) is presented for human intent prediction in HRC scenarios. The complexity of intent prediction is captured by modeling the correlation between human behavior conventions and scene data. The proposed method leverages inferred intent predictions to optimize the robot’s response in real time, ensuring smoother and more intuitive collaboration. In this work, we use a Bayesian network to predict human intent from a multi-modality information framework. A demonstration of an HRC task, using a UR5 robot, exemplifies BR’s real-time human intent prediction and collision avoidance. Evaluations demonstrate that our multi-modality BR model predicts human intent within 2.69 ms, with a 36% increase in precision, a 60% increase in F1 score, and an 85% increase in accuracy compared to its best baseline method.
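As a hedged, toy illustration of the Bayesian intent inference BR builds on (the intents, priors, and likelihoods are invented for the example):

    # Hypothetical sketch: posterior over intents from one scene observation.
    import numpy as np

    intents = ["reach_part", "handover", "idle"]
    prior = np.array([0.5, 0.3, 0.2])             # assumed behavior convention
    likelihood = np.array([0.7, 0.2, 0.1])        # P(observation | intent)

    posterior = prior * likelihood
    posterior /= posterior.sum()                  # normalize via Bayes' rule
    predicted = intents[int(np.argmax(posterior))]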
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Behavior of a lumpy artificial transmission line as the frequency is indefinitely increased</title>
<link href="https://hdl.handle.net/1721.1/155828" rel="alternate"/>
<author>
<name>Clarke, Edith,
            1883-1959.</name>
</author>
<id>https://hdl.handle.net/1721.1/155828</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1919-01-01T00:00:00Z</published>
<summary type="text">Behavior of a lumpy artificial transmission line as the frequency is indefinitely increased
Clarke, Edith,
            1883-1959.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1919
</summary>
<dc:date>1919-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Two-shock interaction in a region of nonuniform flow</title>
<link href="https://hdl.handle.net/1721.1/155822" rel="alternate"/>
<author>
<name>Miller, Walter Daniel.</name>
</author>
<id>https://hdl.handle.net/1721.1/155822</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Two-shock interaction in a region of nonuniform flow
Miller, Walter Daniel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1964; Includes bibliographical references (leaves 24-25).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamics and control of a spatial trade model,</title>
<link href="https://hdl.handle.net/1721.1/155819" rel="alternate"/>
<author>
<name>Hager, William W.,
            1948-</name>
</author>
<id>https://hdl.handle.net/1721.1/155819</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">Dynamics and control of a spatial trade model,
Hager, William W.,
            1948-
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1971; Bibliography: leaf 52.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>European strategic alliances and cross border cross shareholdings</title>
<link href="https://hdl.handle.net/1721.1/155818" rel="alternate"/>
<author>
<name>De Marchi, Edoardo.</name>
</author>
<id>https://hdl.handle.net/1721.1/155818</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1993-01-01T00:00:00Z</published>
<summary type="text">European strategic alliances and cross border cross shareholdings
De Marchi, Edoardo.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1993; Includes bibliographical references (leaves 87-88).
</summary>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interlaminar fatigue of fiber-reinforced laminates.</title>
<link href="https://hdl.handle.net/1721.1/155817" rel="alternate"/>
<author>
<name>Handy, Rodney Neal.</name>
</author>
<id>https://hdl.handle.net/1721.1/155817</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">Interlaminar fatigue of fiber-reinforced laminates.
Handy, Rodney Neal.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1971; Bibliography: leaf 24.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A methodology for manufacturing process improvement</title>
<link href="https://hdl.handle.net/1721.1/155815" rel="alternate"/>
<author>
<name>Chinnaswamy, Mano H.
            (Mano Haran)</name>
</author>
<id>https://hdl.handle.net/1721.1/155815</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">A methodology for manufacturing process improvement
Chinnaswamy, Mano H.
            (Mano Haran)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references (p. 73-74).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A non-orthogonal gyro configuration</title>
<link href="https://hdl.handle.net/1721.1/155812" rel="alternate"/>
<author>
<name>Gilmore, Jerold Philip.</name>
</author>
<id>https://hdl.handle.net/1721.1/155812</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1967-01-01T00:00:00Z</published>
<summary type="text">A non-orthogonal gyro configuration
Gilmore, Jerold Philip.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1967; Three blank p. included in paging.; Bibliography: p. 199-202.
</summary>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of the line-spring model to cracks with partial closure</title>
<link href="https://hdl.handle.net/1721.1/155810" rel="alternate"/>
<author>
<name>Luz, James John.</name>
</author>
<id>https://hdl.handle.net/1721.1/155810</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1985-01-01T00:00:00Z</published>
<summary type="text">Application of the line-spring model to cracks with partial closure
Luz, James John.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1985; Includes bibliographical references.
</summary>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensor Evaluation and Fleet Modeling of Long-Range&#13;
Low-Cost Autonomous Surface Vehicles</title>
<link href="https://hdl.handle.net/1721.1/155656" rel="alternate"/>
<author>
<name>Nothacker, John S.</name>
</author>
<id>https://hdl.handle.net/1721.1/155656</id>
<updated>2024-07-11T03:06:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Sensor Evaluation and Fleet Modeling of Long-Range&#13;
Low-Cost Autonomous Surface Vehicles
Nothacker, John S.
This thesis examines the development and assessment of sensor configurations for Long-Range Low-Cost Autonomous Surface Vehicles (ASVs) with a focus on Maritime Domain Awareness (MDA) applications. Utilizing the Platform for Expanding AUV exploRation (PEARL) as a model, the study systematically evaluates various sensor options to identify optimal suites for MDA operations. Through an analysis of 255 sensor combinations, considering factors such as range, power consumption, field of view, resolution, and cost, this research identifies key sensor configurations that maximize operational utility while minimizing cost. The analysis found that the sensor suite should include radar, AIS, IR cameras, and visible-light cameras, allowing operation in all lighting and weather conditions. The study further explores fleet modeling for two MDA use cases—the Littorals and Open Ocean scenarios—providing insights into the cost-effectiveness and coverage efficiency of deploying fleets of sensor-equipped PEARL units. The fleet modeling demonstrated that these low-cost ASVs can cover approximately 20 times the area of a Saildrone Voyager for about the same capital cost. The findings contribute to the advancement of low-cost ASV technology for enhanced maritime surveillance and data collection, offering scalable solutions to maritime domain challenges.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
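<!--
The 255 sensor combinations analyzed above are consistent with every non-empty subset of eight candidate sensors (2**8 = 256, minus the empty set). A schematic Python sketch of such an enumeration follows; the sensor names, costs, and utility scores are invented placeholders, not values from the thesis.

from itertools import combinations

# (name, cost_usd, utility score): all values hypothetical
sensors = [("radar", 9000, 8), ("ais", 400, 6), ("ir_cam", 3000, 7),
           ("rgb_cam", 500, 5), ("lidar", 7000, 4), ("sonar", 2500, 3),
           ("rf_sensor", 1200, 4), ("weather", 300, 2)]

suites = []
for k in range(1, len(sensors) + 1):
    for combo in combinations(sensors, k):
        cost = sum(s[1] for s in combo)
        utility = sum(s[2] for s in combo)
        suites.append((utility / cost, [s[0] for s in combo]))

assert len(suites) == 255  # matches the abstract's combination count
for score, names in sorted(suites, reverse=True)[:3]:
    print(round(score, 4), names)
-->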
<entry>
<title>Exploring Trade-offs and Emergent Properties of Heterogeneous Swarms of Maritime Robot Systems through Empirical Analysis and Application-Driven Experiments</title>
<link href="https://hdl.handle.net/1721.1/155655" rel="alternate"/>
<author>
<name>Hoang, Thinh B.</name>
</author>
<id>https://hdl.handle.net/1721.1/155655</id>
<updated>2024-07-11T03:28:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exploring Trade-offs and Emergent Properties of Heterogeneous Swarms of Maritime Robot Systems through Empirical Analysis and Application-Driven Experiments
Hoang, Thinh B.
Multi-agent systems present a promising approach to addressing challenges such as searching for and tracking moving targets, offering advantages like robustness and scalability over single-agent solutions. Current maritime searching and tracking strategies typically involve employing predetermined paths for exploring the search space or adopting Particle Swarm Optimization (PSO) algorithms for multi-robot systems (MRS). While these approaches often entail homogeneous deployment of algorithms or behaviors across all agents, the potential benefits of introducing heterogeneity remain largely unexplored. Specifically, varying agent behaviors or capabilities could enhance mission performance, but the trade-offs involved are not thoroughly understood beyond trivial outcomes such as adjusting agent speed.&#13;
&#13;
In this thesis, a novel swarming approach is used to tackle two core missions: dynamic target searching and tracking, and isocontour identification. The strategy employs a combination of five distinct algorithms. The innovation lies in introducing heterogeneity among agents by assigning specific roles managed through varying weights tied to each algorithm. Trade-offs between mission performance and cost are quantified by simulating a swarm with diverse roles and behaviors. Key performance metrics include the accuracy of target position estimation, convergence time on target, duration of target tracking, and correlation between the swarm's collective heading and target bearing. The overall energy consumption of the swarm determines the cost metric. Investigating the impact of different proportions of agent types within the swarm provided valuable insights into how to optimize mission effectiveness while managing resource constraints.&#13;
&#13;
Overall, this research contributes to advancing the understanding of how heterogeneity in multi-agent systems can enhance mission performance and offers practical insights into optimizing resource allocation in complex tasks such as target search and tracking. By comprehensively assessing trade-offs, this thesis aims to pave the way for more efficient and adaptable multi-agent systems in real-world applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Vehicle Platform Architecting Process: Will Model Based Systems Engineering help organizations with the architectural transition from ICE to battery power?</title>
<link href="https://hdl.handle.net/1721.1/155653" rel="alternate"/>
<author>
<name>Melgarejo Oviedo, Carlos Edoardo</name>
</author>
<id>https://hdl.handle.net/1721.1/155653</id>
<updated>2024-07-11T03:32:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Vehicle Platform Architecting Process: Will Model Based Systems Engineering help organizations with the architectural transition from ICE to battery power?
Melgarejo Oviedo, Carlos Edoardo
The CO₂ emission reduction regulations set for automakers led them to develop BEVs on modified Internal Combustion Engine (ICE) platforms between the late 1990s and early 2010s. However, a long-term strategy to be competitive on range in the market demanded the development of BEV-dedicated platforms. Legacy OEMs (Toyota, VW, GM, and others), in theory, had deep process experience architecting vehicle platforms. The challenge from their perspective was to adapt that process to a new architecture and power source and to react to other recent technologies trending in the market. By contrast, the new market entrants (Tesla, Rivian, BYD, etc.) had almost no process experience but were unencumbered by compatibility with legacy platforms. The modules and software required to power BEVs make them more complex than ICEVs despite their powertrains having fewer parts. A series of interviews with Systems Engineering experts in the automotive industry was held to understand the differences in the architecting process between ICEVs and BEVs. In the study, 45% of the interviewees claimed that BEVs are more challenging to architect than ICEVs, another 45% stated exactly the opposite, and the remaining 10% stated that the difficulty level is the same for both. Additionally, over 80% of the participants stated that an architectural change in a BEV is as smooth as in an ICEV. The study suggests that the perceived difficulty of architecting ICEVs versus BEVs is linked to the experience of the companies as well as to practices such as module incompatibility tracking and key interface identification that take place during the architecting process.&#13;
Model-Based Systems Engineering (MBSE) is a methodology largely developed in the aerospace industry that presents a potential solution for managing the increased complexity of BEV-dedicated platforms. Through an MIT MBSE course, 5,379 professionals were surveyed from 2017 to 2024. The data from these surveys were analyzed to identify trends in MBSE adoption over time. The study revealed that MBSE adoption has increased at a rate of 4.11% annually across several industries and has been used primarily for the transformation of processes and workflows. MBSE adoption in the automotive industry is around 15% higher than in other sectors; however, it has not grown over the past four years. The study also suggests that the main challenges to extending MBSE adoption are the lack of guidelines and the low credibility of models. Nevertheless, survey respondents remain positive about this approach; between 60% and 70% of them think that their companies should implement MBSE, suggesting a future increase in its adoption and an important role for this methodology in managing the complexity of new BEV-dedicated platforms.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Pharmaceutical Companies Utilize Platform Strategy: A Study of the COVID-19 mRNA Vaccine Development</title>
<link href="https://hdl.handle.net/1721.1/155647" rel="alternate"/>
<author>
<name>Aoki, Tomonoshin</name>
</author>
<id>https://hdl.handle.net/1721.1/155647</id>
<updated>2024-07-11T03:32:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">How Pharmaceutical Companies Utilize Platform Strategy: A Study of the COVID-19 mRNA Vaccine Development
Aoki, Tomonoshin
This paper employs platform theory to investigate why Moderna and BioNTech were able to develop the COVID-19 vaccine so rapidly and provides new insights into platform theory. The COVID-19 pandemic, which began in China in late 2019, spread globally within a month, causing immense damage. Vaccines were the most critical technology needed to fight the disease. Typically, vaccine development takes more than a decade; however, Moderna and BioNTech/Pfizer successfully developed an mRNA vaccine within approximately 300 days of the pandemic's onset. In contrast, Daiichi Sankyo required around 1,000 days to develop a vaccine using the same mRNA technology. This paper utilizes platform theory as a framework to examine the factors contributing to this disparity. Various internal and external factors from the perspective of a pharmaceutical company could lie behind the rapid development of vaccines; however, this study focuses mainly on internal factors, particularly from the management perspective of platform theory. Platform theory has emerged as a crucial framework for understanding the dynamics of modern businesses and technologies. This theory distinguishes between three primary types of platforms: product-level platforms, industry-level platforms, and digital platforms. In the pharmaceutical industry, mRNA technology can be considered a product- or technology-level platform, because by modifying the mRNA sequence it can yield a wide range of therapeutics targeting different diseases, not only infectious diseases but also cancers and other conditions. Although this study regards mRNA technology as a product- or technology-level platform, we also discuss how it became connected to industry-level and digital platforms through the COVID-19 vaccine development process. Regarding the COVID-19 vaccine development story, the question that naturally arises is, 'Why could Moderna and BioNTech develop the vaccine so rapidly?' The answer must be that they executed the necessary steps for vaccine development rapidly. These steps, namely 'Discovery' (development of vaccine candidate substances), 'Development' (conducting clinical trials and obtaining regulatory approval), and 'Manufacturing' (production of vaccines), were all carried out swiftly and in parallel. They were executed so rapidly because, at the time of the pandemic, Moderna and BioNTech already had the financial and human resources, knowledge and patents, development experience, digital infrastructure, efficient production facilities, influential partners, and a rational corporate culture for the project. The next question, then, is why Moderna and BioNTech had such organizational capabilities at the outbreak of the pandemic. In this paper, we examine in detail why and how such capabilities were nurtured after these companies were founded. We also examine the academic history that preceded the companies' founding and why and how Moderna and BioNTech were founded as mRNA platform companies. In conclusion, this study demonstrates the importance of the pharmaceutical industry harnessing the "power of the platform" and provides concrete directions for leveraging its potential. The discussion should be expanded to explore how companies and policies can work together to address the health and healthcare challenges facing people around the world, utilizing the power of platforms to drive innovation, collaboration, and, ultimately, better health outcomes for all.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Segmentation and Analysis of High-Speed Video Phase-Detection Data for Boiling Heat Transfer Characterization Using U-Net Convolutional Neural Networks and Uncertainty Quantification</title>
<link href="https://hdl.handle.net/1721.1/155645" rel="alternate"/>
<author>
<name>Maduabuchi, Chika</name>
</author>
<id>https://hdl.handle.net/1721.1/155645</id>
<updated>2024-07-11T03:30:45Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automated Segmentation and Analysis of High-Speed Video Phase-Detection Data for Boiling Heat Transfer Characterization Using U-Net Convolutional Neural Networks and Uncertainty Quantification
Maduabuchi, Chika
Boiling heat transfer is a complex phenomenon used for cooling and heat management purposes in various industrial applications, such as nuclear reactors. Accurate characterization and understanding of boiling dynamics are essential for the design and optimization of heat transfer systems. High-speed video (HSV) imaging is a powerful tool for capturing the intricate details of boiling processes. However, the manual analysis of HSV data is time-consuming and prone to subjective interpretation. This thesis presents a novel approach for the automated segmentation and analysis of HSV phase-detection images using U-Net Convolutional Neural Networks (CNNs) and uncertainty quantification techniques. The proposed methodology involves the development of specialized U-Net CNN models for segmenting HSV data of boiling phenomena in different fluids, including liquid nitrogen, argon, FC-72, and high-pressure water, under various experimental conditions. The performance of the U-Net models is evaluated and compared with traditional adaptive thresholding techniques. The results demonstrate the superior accuracy and robustness of the U-Net models in identifying and delineating bubbles compared to manual segmentation, particularly in scenarios involving smaller bubbles and complex bubble topologies. To assess the reliability of the calculated boiling metrics, such as contact line density and dry area fraction, a comprehensive uncertainty quantification analysis is also conducted. The impact of discretization errors arising from the pixelation of bubbles is investigated using weighted average percentage relative errors and mean errors under both erosion and dilation conditions. The analysis reveals higher relative uncertainty in contact line density measurements than dry area fraction measurements across all fluids studied. The limitations of the U-Net models in generalizing to other HSV datasets are addressed, emphasizing the need for developing more sophisticated image segmentation models, such as foundation models, that are less sensitive to domain shifts. This is crucial for enabling autonomous experimentation and reducing the reliance on specialized models for each fluid and operating condition. Future research directions are outlined, including the investigation of advanced uncertainty quantification techniques, the development of real-time segmentation and analysis algorithms, the evaluation of uncertainty propagation in heat flux reconstruction, and the extension of the methodology to other multiphase flow phenomena. By addressing these recommendations, the understanding, characterization, and modeling of boiling phenomena can be further enhanced, contributing to the advancement of boiling heat transfer research and the development of improved heat transfer models and correlations. Overall, this thesis presents a comprehensive approach for the automated segmentation and analysis of HSV phase-detection images using U-Net CNNs and uncertainty quantification techniques. The proposed methodology demonstrates significant potential for accurate and reliable characterization of boiling dynamics, paving the way for advanced boiling heat transfer research and the optimization of heat transfer systems in various industrial applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
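<!--
A schematic Python sketch of the erosion/dilation bounding idea described in the abstract's uncertainty analysis (not the thesis code): a binary dry-area mask is shrunk and grown by one pixel, and the spread in dry area fraction is reported as a pixelation uncertainty. The mask here is synthetic.

import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

mask = np.zeros((128, 128), dtype=bool)
mask[40:80, 50:100] = True  # synthetic dry patch under a bubble

def dry_area_fraction(m):
    return m.mean()

nominal = dry_area_fraction(mask)
lower = dry_area_fraction(binary_erosion(mask))   # shrink by ~1 px
upper = dry_area_fraction(binary_dilation(mask))  # grow by ~1 px
print(f"dry area fraction = {nominal:.4f} "
      f"(+{upper - nominal:.4f} / -{nominal - lower:.4f} pixelation bound)")
-->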
<entry>
<title>Quantifying Iodine-129 Environmental Releases and Surface Water Concentrations at Nuclear Fuel Recycling Facilities</title>
<link href="https://hdl.handle.net/1721.1/155644" rel="alternate"/>
<author>
<name>Whiteaker, Kate</name>
</author>
<id>https://hdl.handle.net/1721.1/155644</id>
<updated>2024-07-11T03:30:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Quantifying Iodine-129 Environmental Releases and Surface Water Concentrations at Nuclear Fuel Recycling Facilities
Whiteaker, Kate
Iodine-129 (I-129) is one of the largest long-term dose contributors in high-level nuclear waste disposal models and an important contaminant at sites currently undergoing remediation, such as Savannah River and Hanford. This is due in part to its environmental mobility, its 15.7-million-year half-life, and its potential for bioaccumulation. However, over 90% of the I-129 present in used nuclear fuel is routinely discharged to the ocean at used fuel recycling facilities in France and the UK. This work first quantifies the releases of I-129 to the environment per gigawatt-year of electrical energy production over the entire nuclear fuel cycle, with and without used fuel recycling, then synthesizes a database of I-129 surface water concentrations in waters affected by discharges from current and historical recycling facilities. We find that the environmental releases from current recycling facilities are above U.S. I-129 release limits, indicating a need for innovation in I-129 capture and isolation technologies in order to adapt used fuel recycling to the United States. We also find that the concentrations of I-129 in surface waters affected by discharges from recycling facilities do not correlate with the amount of I-129 discharged by the facilities. Persistent concentrations appear to depend more on factors including siting, dilution, and whether or not the facility attempted to isolate its liquid wastes. These results are particularly important in view of the current renewed interest in the commercial recycling of used nuclear fuel in the United States.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility Assessment of Adopting the ADDER Computer Code for the MIT Research Reactor Fuel Management</title>
<link href="https://hdl.handle.net/1721.1/155643" rel="alternate"/>
<author>
<name>Garanzini, Maurane</name>
</author>
<id>https://hdl.handle.net/1721.1/155643</id>
<updated>2024-07-11T03:01:14Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Feasibility Assessment of Adopting the ADDER Computer Code for the MIT Research Reactor Fuel Management
Garanzini, Maurane
The Massachusetts Institute of Technology Reactor (MITR) is a 6 MW research reactor currently operating with highly enriched uranium (HEU) plate-type fuel. Fuel management calculations for this reactor are performed using MCODE, which couples a neutron transport code with a depletion code. As part of the low-enriched uranium (LEU) fuel conversion program, the Advanced Dimensional Depletion for Engineering of Reactors (ADDER) software is being developed at Argonne National Laboratory to provide a more flexible and performant approach to fuel management. This study evaluates the feasibility of transitioning from MCODE to ADDER for MITR fuel management by carrying out a code-to-code comparison. Analyses of a full MITR cycle (70 days) for a 22-element fresh HEU core and a fresh LEU core were completed, and the impact of simplified in-core experiments with various materials was also evaluated. Calculations with mid-cycle restart were performed, in which reactor power was reduced to 100 kW for 7 hours to evaluate Xe-135 poison reactivity effects. The parameters selected for comparison include control blade height, cumulative fission density, integral neutron flux, and nuclide inventory (for selected actinides and neutron poisons). The study showed satisfactory agreement between ADDER and MCODE results, with control blade worth differences within the 200 pcm range that corresponds to the ±100 pcm critical search tolerance, and U-235 mass differences remaining below 0.5 g per fuel element at end of cycle. Differences for other result types remain low enough to show the potential of transitioning to ADDER, with larger differences located near the control blades when using the predictor-corrector method for depletion, since the codes rely on different algorithmic definitions of predictor-corrector as well as different critical blade search schedules. Closer agreement is obtained when switching to the predictor method, but this still indicates some potential differences in power normalization. The two codes also agree well on control blade height and Xe-135 core inventory results for mid-cycle restart calculations. Further study is recommended to assess depletion factors such as neutron flux normalization and predictor-corrector schemes. Before ADDER is implemented for MITR fuel management, future work is required to verify agreement for equilibrium cores with depleted HEU fuel element compositions and to analyze fuel element shuffling between cycles.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Photoelectric Fusion Technologies: Market Potential and Strategic Insights from NTT's IOWN Case</title>
<link href="https://hdl.handle.net/1721.1/155642" rel="alternate"/>
<author>
<name>Numa, Kentaro</name>
</author>
<id>https://hdl.handle.net/1721.1/155642</id>
<updated>2024-07-11T03:01:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Assessing Photoelectric Fusion Technologies: Market Potential and Strategic Insights from NTT's IOWN Case
Numa, Kentaro
This thesis investigates the rapid evolution of technology in response to surging internet traffic, projected to increase from 33 zettabytes in 2018 to 175 zettabytes by 2025, and data processing demands, anticipated to exceed 180 zettabytes by the same year. The escalating requirements for more robust communication and data processing systems are emphasized, especially as AI advances necessitate substantial computational resources, increasing energy consumption. This highlights the challenges of enhancing performance while maintaining energy efficiency, suggesting the limits of Moore's Law and Dennard scaling. The thesis explores the adoption of silicon photonics as a significant innovation, shifting from electrical to optical signals, particularly within Nippon Telegraph and Telephone Public Corporation (NTT)'s Innovative Optical and Wireless Network (IOWN) initiative. It analyzes the strategic, operational, and market engagement approaches of NTT, focusing on competitive threats, potential collaborations, and strategies to foster third-party development. The conclusion underscores NTT’s potential to transform telecommunications and data processing through photonics and AI technologies.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating Network Scalability of Metaverse-Applicable Use Cases</title>
<link href="https://hdl.handle.net/1721.1/155641" rel="alternate"/>
<author>
<name>Reveron, Daniel E.</name>
</author>
<id>https://hdl.handle.net/1721.1/155641</id>
<updated>2024-07-11T03:29:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evaluating Network Scalability of Metaverse-Applicable Use Cases
Reveron, Daniel E.
Within the context of scalability for the Metaverse, the network remains a principal limiting factor even if extended reality adoption were to increase, given the large volumes of data needed to support complex use cases. This thesis introduces a systems framework to evaluate the scalability of network architecture within the Metaverse, envisioned as the next generation of 3D-enabled internet. Through two experiments, we developed a model to determine whether various Metaverse use cases could be supported by current network infrastructure. The first experiment utilized Meta's Horizon Worlds platform to assess the throughput scalability of objects. The second experiment constructed a model categorizing use cases and evaluated their expected throughput against current data rates, incorporating data from the first experiment and the existing literature. The findings indicate that static objects do not contribute persistent throughput, while moving objects each exhibit an approximate throughput of 25 Kbps. Furthermore, education, entertainment, facility design, product design, and training are identified as the use case categories most constrained by current infrastructure capabilities.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
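<!--
A back-of-envelope Python sketch of the throughput arithmetic implied above: each moving object contributes roughly 25 Kbps while static objects add no persistent throughput, so a link budget caps the number of moving objects a scene can support. The 100 Mbps link capacity and 80% utilization are assumptions, not figures from the thesis.

KBPS_PER_MOVING_OBJECT = 25       # finding reported in the abstract
LINK_CAPACITY_KBPS = 100_000      # assumed 100 Mbps connection

def max_moving_objects(capacity_kbps, utilization=0.8):
    """Moving objects supportable within the budgeted share of capacity."""
    return int(capacity_kbps * utilization / KBPS_PER_MOVING_OBJECT)

print(max_moving_objects(LINK_CAPACITY_KBPS))  # 3200 moving objects
-->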
<entry>
<title>Effective Messaging for Tackling NIMBY to Accelerate Decarbonization</title>
<link href="https://hdl.handle.net/1721.1/155640" rel="alternate"/>
<author>
<name>Singh, Amandeep</name>
</author>
<id>https://hdl.handle.net/1721.1/155640</id>
<updated>2024-07-11T03:01:00Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Effective Messaging for Tackling NIMBY to Accelerate Decarbonization
Singh, Amandeep
This study aims to evaluate the efficacy of communication strategies in changing public perceptions of energy and other facilities that are typically perceived as environmental and health risks. We use existing US-wide survey data in which participants were asked about the risks of living near nuclear power plants and which messages were most effective. Our analysis reveals that (1) providing key facts and well-designed messages can change risk perception, and (2) messages focused on reassurance (e.g., controlling, containing, and monitoring radiation) are more effective than those comparing different risks, across all demographic groups. The effectiveness is shaped by demography: certain demographic groups—Gen Z, the Silent Generation, those with vocational education, and Independents—are more amenable to change, and messages focused on social benefits are also effective for higher-education groups and Democrats. The methodologies offer a framework for improving public perceptions and creating pathways toward public acceptance of these facilities by identifying key facts and designing messages.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CEνNS in Natural Zinc Superconductors and its Applications for Nuclear Non-Proliferation</title>
<link href="https://hdl.handle.net/1721.1/155639" rel="alternate"/>
<author>
<name>Ryan, Brianna Noelani</name>
</author>
<id>https://hdl.handle.net/1721.1/155639</id>
<updated>2024-07-11T03:49:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">CEνNS in Natural Zinc Superconductors and its Applications for Nuclear Non-Proliferation
Ryan, Brianna Noelani
The potential of neutrinos for nuclear non-proliferation has been heavily debated due to their abundance and low interaction rates. A newly discovered neutrino detection technique, referred to as Coherent Elastic Neutrino Nucleus Scattering (CEνNS), has the potential to settle this debate due to its abnormally high cross-section. This thesis presents a feasibility study for the use of CEνNS detection using zinc superconductors for nuclear non-proliferation. To address this question, this thesis is broken down into two case studies: identifying the trafficking of Cs-137 and safeguarding nuclear reactors. For each of these case studies, the antineutrino spectrum, CEνNS cross-section, and reaction rates were calculated. Using these resources, multiple statistical analyses were performed assuming a theoretically low recoil threshold, no background noise, and optimal detection parameters. Based on these analyses, I conclude that CEνNS is feasible both for discovering trafficked nuclear materials and for safeguarding nuclear reactors, under ideal detection conditions. This study should help open the door to future, more in-depth studies in less ideal, more realistic detection conditions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pulsed Magnetic Imaging of Broad-Frequency Fields using Nitrogen-Vacancy Centers in Diamond</title>
<link href="https://hdl.handle.net/1721.1/155638" rel="alternate"/>
<author>
<name>Karlson, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/155638</id>
<updated>2024-07-11T04:01:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Pulsed Magnetic Imaging of Broad-Frequency Fields using Nitrogen-Vacancy Centers in Diamond
Karlson, Samuel
Wide-field magnetic imaging using nitrogen-vacancy (NV) centers in diamond can yield high-quality images for various applications, including biology, geology, condensed matter physics, and electronics troubleshooting. These quantum sensors yield wide field-of-view images with micron-scale spatial resolution and operate in ambient conditions. Most sensing work with NV centers in diamond has focused on DC and low-frequency AC fields. This thesis demonstrates a wide-field magnetic imager and its capabilities with test structures of varying complexity. We overcome the challenges of measuring MHz-frequency magnetic fields with a quantum frequency mixing approach.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrating Futures: Culture Crates Hybrid Digital-Analog Methodology in the Advancement of Cultural Education and Preservation</title>
<link href="https://hdl.handle.net/1721.1/155637" rel="alternate"/>
<author>
<name>Zaza, Nadine Adel</name>
</author>
<id>https://hdl.handle.net/1721.1/155637</id>
<updated>2024-07-11T03:55:59Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Integrating Futures: Culture Crates Hybrid Digital-Analog Methodology in the Advancement of Cultural Education and Preservation
Zaza, Nadine Adel
In an era of rapid globalization, the preservation of intangible cultural heritage through education is both crucial and necessary. This thesis explores culturally responsive learning and the innovative combination of hands-on and digital learning techniques introduced through "Culture Crate," an educational technology venture that integrates digital and analog methods to enhance cultural education in K-12 settings. Given that culturally responsive teaching significantly impacts student engagement and learning, there is a clear need for educational tools that are both culturally relevant and engaging. Employing a human-centered design approach, the research investigates the efficacy of merging digital content with physical artifacts to preserve cultural heritage and address educational gaps, and prototypes of Culture Crate were tested through this approach.&#13;
&#13;
This thesis underscores Culture Crate's potential to foster empathy and intercultural competence among students through immersive learning experiences. Iterative testing and feedback from educators and students, alongside qualitative interviews, prototyping, and pilot studies, inform the refinement of the product. By aligning with UNESCO's Intangible Cultural Heritage Convention and the United Nations Sustainable Development Goals, Culture Crate aims to preserve endangered cultural practices while enhancing educational outcomes. Culture Crate, a cultural ed-tech solution, addresses the need for culturally responsive teaching by offering a hybrid learning model that combines the best of digital and physical educational resources. The research further examines Culture Crate's scalability within the broader educational market, highlighting its role in integrating cultural heritage into modern education. Ultimately, this study underscores the importance of culturally responsive teaching, offering insights into the development and implementation of educational technologies that ensure future generations remain connected to their cultural roots.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of RANS-Based Turbulence Models for the Propagation of Stratified Fronts in Buoyancy-Driven Flow</title>
<link href="https://hdl.handle.net/1721.1/155636" rel="alternate"/>
<author>
<name>Cummings, Calvin James</name>
</author>
<id>https://hdl.handle.net/1721.1/155636</id>
<updated>2024-07-11T03:02:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Evaluation of RANS-Based Turbulence Models for the Propagation of Stratified Fronts in Buoyancy-Driven Flow
Cummings, Calvin James
Computational fluid dynamics (CFD) is a powerful tool in the design of next-generation nuclear reactors. These reactors are designed to be inherently safe, utilizing physical phenomena such as buoyancy and natural convection to cool the core in the event that forced circulation fails. However, the widely implemented Reynolds-averaged Navier-Stokes (RANS) approach to turbulence modeling has previously shown limitations in its ability to adequately predict buoyancy-driven flows, including thermally stratified flow. Inaccuracies in these cases are often attributed to the Reynolds analogy, a simplification of the turbulent heat flux. To evaluate the validity of the Reynolds analogy under thermal stratification, the HiRJet experiment was constructed at the University of Michigan. HiRJet induces stratification and measures the propagation of stratified fronts over time. This work aims to draw conclusions on the validity of RANS-based models and the Reynolds analogy for stratified flows and to develop best practices for CFD applications under these conditions. This is achieved by rigorously assessing the performance of several common turbulence models through comparison to high-resolution results from HiRJet and direct numerical simulation (DNS). Sources of inaccuracy were identified through evaluation of separate components, such as the treatment of turbulence production due to buoyancy and the modeling of anisotropic turbulence. The STRUCT-ϵ model was adopted to evaluate the impact of resolving turbulent structures on predictions. The buoyancy production flux model (BPFM) was implemented to explore the advantages and challenges of more completely modeling the buoyancy production term in RANS-based models. Most importantly, this work shows that RANS-based turbulence models accurately reproduce experimental and DNS results, demonstrating that, despite widespread skepticism, the Reynolds analogy is not the primary source of error in modeling stratified flows.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>“Biomarkers” for Translational Success in Neurodegenerative Diseases: A Comparative Analysis of the Research to Practice Trends in Breast Cancer and ALS to Identify Systematic Indicators of Translational Success</title>
<link href="https://hdl.handle.net/1721.1/155635" rel="alternate"/>
<author>
<name>Shamsie, Maryam A.</name>
</author>
<id>https://hdl.handle.net/1721.1/155635</id>
<updated>2024-07-11T03:07:15Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">“Biomarkers” for Translational Success in Neurodegenerative Diseases: A Comparative Analysis of the Research to Practice Trends in Breast Cancer and ALS to Identify Systematic Indicators of Translational Success
Shamsie, Maryam A.
Translating research findings into practical therapies for neurodegenerative diseases remains a formidable challenge in biomedical research. This challenge arises from the diseases’ inherent heterogeneity and complexity. Interestingly, similar obstacles were historically encountered in breast cancer research, which has since made significant strides in treatment effectiveness. To address this research-practice gap for neurodegenerative diseases, this research proposes identifying key indicators that can be used to estimate and prioritize research efforts, ultimately improving the success rate of translation to clinical practice. A literature review and principles of systems architecture and dependencies were used to qualitatively assess key indicators and their impacts on successful translation. By comparing the history of research and translation between breast cancer and Amyotrophic Lateral Sclerosis (ALS), we identified eight critical indicators for successful translation in heterogeneous diseases. Among these, two stand out: (1) the identification of critical molecular pathways relevant to the disease and (2) the corresponding biomarkers. Discoveries of these two indicators for a particular disease can pave the way for a precision medicine approach, bridging the gap between research and practical applications of therapies for complex illnesses.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Blueprinting AI Economics: Cost Assessment Framework for Business Stakeholders to Navigate Key Aspects in Prompt Engineering, Prompt Automation, and Fine-tuning LLMs</title>
<link href="https://hdl.handle.net/1721.1/155634" rel="alternate"/>
<author>
<name>Sulaiman, Azfar</name>
</author>
<id>https://hdl.handle.net/1721.1/155634</id>
<updated>2024-07-11T03:14:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Blueprinting AI Economics: Cost Assessment Framework for Business Stakeholders to Navigate Key Aspects in Prompt Engineering, Prompt Automation, and Fine-tuning LLMs
Sulaiman, Azfar
The rapid proliferation of large language models (LLMs) has led to an intense focus on achieving unprecedented performance benchmarks, often at the expense of considering the substantial computational costs involved. This oversight is compounded by the lack of robust, academically grounded frameworks for comprehensively evaluating these costs, their sources, and strategies for minimization while balancing performance imperatives. To address this critical gap, my research aims to develop a rigorous and systematic framework that enables researchers and industry stakeholders to understand and contextualize the cost implications of fine-tuning, prompt engineering, and prompt automation techniques. By offering a systematic approach to evaluating the trade-offs between performance, cost, and societal impact, this research seeks to advance the practical and sustainable adoption of LLMs across diverse applications.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantitative Modeling of Water Demand to Support a Continuous Human Presence on Mars</title>
<link href="https://hdl.handle.net/1721.1/155633" rel="alternate"/>
<author>
<name>Charoenboonvivat, Yana</name>
</author>
<id>https://hdl.handle.net/1721.1/155633</id>
<updated>2024-07-11T03:05:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Quantitative Modeling of Water Demand to Support a Continuous Human Presence on Mars
Charoenboonvivat, Yana
Establishing a continuous human presence on Mars is a crucial milestone in advancing human capabilities in space and is a high priority for the National Aeronautics and Space Administration. An important step toward establishing a continuous human presence on Mars is identifying landing sites suitable for human and scientific exploration. The quantity of water needed to sustain human life on Mars is a key driver in the selection of landing sites. However, minimal work beyond first-order water demand estimates has been completed to date. To address this gap, this thesis quantitatively estimates how much water is needed to sustain a continuous human presence on Mars. Updates were made to HabNet, a MATLAB simulation tool that incorporates key mission parameters and outputs predictions of resource levels over time, to improve the accuracy and fidelity of water demand estimates. These updates involved creating additional Environmental Control and Life Support (ECLS) technologies and updating the crew model to reflect more recent data. The updated HabNet tool was then used to simulate five discrete cases that collectively represent a Mars surface campaign crew profile with increasing and continuous human presence. Results from deterministic modeling showed that the net total water demand for 4, 8, 12, 16, and 20 crew members on a 790-day mission was 38,669 kg, 76,545 kg, 118,069 kg, 151,617 kg, and 193,134 kg, respectively. For each crew size, 63-65% of the water was needed for generating MAV propellant, 22-23% for crops, and 12-15% for life support. Additionally, the water demand per crew member per day was found to fluctuate between 12.00 kg and 12.50 kg across the five cases. This thesis also demonstrated the ability to perform probabilistic modeling of water demand with HabNet using high-performance computing (HPC). A Monte Carlo simulation was completed using the MIT SuperCloud supercomputer for the same five discrete cases, marking the first time HPC was used to produce HabNet simulation results. Gaussian and beta distributions were fitted to the water demand results from the Monte Carlo simulation. However, further work is still needed to determine which probability distribution best represents the data. Opportunities for future work include improving the accuracy and fidelity of HabNet's resource demand estimates and leveraging HPC for future analyses that may be computationally intensive.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
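<!--
A quick Python cross-check of the figures quoted above (not HabNet itself): dividing each deterministic total by crew-days reproduces the stated 12.00-12.50 kg per crew member per day band, and a toy Monte Carlo illustrates how a sampled per-capita demand could be propagated to a mission total. The normal distribution and its parameters are assumptions.

import random

MISSION_DAYS = 790
totals_kg = {4: 38_669, 8: 76_545, 12: 118_069, 16: 151_617, 20: 193_134}

for crew, total in totals_kg.items():
    print(f"{crew:2d} crew: {total / (crew * MISSION_DAYS):.2f} kg/person/day")

# Toy Monte Carlo: sample per-capita demand, scale to a 4-person mission.
random.seed(1)
samples = [random.gauss(12.25, 0.15) * 4 * MISSION_DAYS
           for _ in range(10_000)]
print(f"MC mean total for 4 crew: {sum(samples) / len(samples):,.0f} kg")
-->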
<entry>
<title>From Conception to Connection: A Systematic Approach to Integrating Remote Patient Monitoring in Fertility Management</title>
<link href="https://hdl.handle.net/1721.1/155632" rel="alternate"/>
<author>
<name>Thatcher, Florence Fernandez</name>
</author>
<id>https://hdl.handle.net/1721.1/155632</id>
<updated>2024-07-11T03:57:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">From Conception to Connection: A Systematic Approach to Integrating Remote Patient Monitoring in Fertility Management
Thatcher, Florence Fernandez
The fertility journey, spanning preconception to postpartum, is critically underserved by traditional healthcare systems, which often fail to provide continuous, personalized care. This deficiency is particularly acute for individuals facing infertility, who must navigate a labyrinth of physiological and emotional challenges at each stage. The need for timely interventions and access to sustained, individualized care is central to addressing these issues.&#13;
&#13;
Amid these challenges, remote patient monitoring (RPM) systems are emerging as a transformative approach in healthcare, facilitating continuous patient care and monitoring beyond the conventional settings of clinics and hospitals. Despite the increased adoption of RPM and telemedicine, a gap persists in integrating such systems within the domain of fertility care.&#13;
&#13;
This thesis undertakes a comprehensive and systemic evaluation of the fertility landscape, examining barriers to effective treatments and outcomes and identifying key health metrics for each phase of the journey. Moreover, the work analyzes existing devices and technologies to determine their ability to measure these metrics and their technological readiness for remote monitoring. The work includes a review of RPM frameworks using system architecture methodologies, analyzing their architectures, technologies, and ecosystems to adapt them for fertility applications. Although numerous devices for remote testing are now available, their full potential in fertility care has yet to be realized, necessitating further development, clinical validation, and resolution of interoperability issues.&#13;
&#13;
A patient-centered, customizable fertility-RPM framework is proposed, integrating the health metrics with essential architectural decisions aligned with stakeholder needs. This thesis offers foundational insights and operational guidelines for fertility institutions considering adopting RPM services, advocating for a holistic, connected, and continuous care model throughout the fertility journey. This work underscores the transformative potential of RPM in enhancing fertility care, paving the way for more integrated and effective fertility treatment solutions.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Telco 5G and Non-Terrestrial Networks: An architecture trade analysis to select solutions to support socio-technical needs in Sub-Saharan Countries</title>
<link href="https://hdl.handle.net/1721.1/155631" rel="alternate"/>
<author>
<name>Kotane, Jacky L.</name>
</author>
<id>https://hdl.handle.net/1721.1/155631</id>
<updated>2024-07-11T03:37:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Telco 5G and Non-Terrestrial Networks: An architecture trade analysis to select solutions to support socio-technical needs in Sub-Saharan Countries
Kotane, Jacky L.
Sub-Saharan Africa (SSA) has a unique cellular mobile subscriber base of over 490 million, corresponding to a mobile penetration of 43%. Despite the growth in mobile telephony over the past few decades, more than 180 million people in Sub-Saharan Africa live without internet access. For those living in areas with internet connectivity, at least 59% of people cannot afford access to the internet and remain unconnected. The Global System for Mobile Communications Association (GSMA) estimates that mobile internet use in Sub-Saharan Africa will increase by over 160 million users by 2030, with fifth-generation (5G) coverage accounting for 17%. As 5G technology picks up, satellite operators are deploying megaconstellations to provide ubiquitous broadband communications services. Access to sub-2 GHz spectrum to support satellite direct-to-device services will likely lead to new business models for mobile operators in Sub-Saharan Africa. The African space industry is anticipated to grow to about USD 23 billion by 2026, with an expected launch of an additional 105 satellites. Fifteen African countries have collectively launched 59 satellites. This thesis applies a systems engineering approach to evaluate a selection of concepts for a 5G Non-Terrestrial Network (5G-NTN) to address coverage of the currently unconnected people across Sub-Saharan Africa. The architecture tradespace exploration framework evaluates feasible 5G-NTN architectures using cost and multi-attribute utility. The analysis suggests a constellation of at least 45 satellites in Low Earth Orbit to provide integrated access and backhaul for 5G networks. Implementing such an architecture would require collaboration between mobile network operators and space agencies in Sub-Saharan Africa to create a shared satellite constellation infrastructure that addresses coverage and space sovereignty needs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
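<!--
A schematic Python sketch of a cost-versus-utility tradespace evaluation of the kind described above (not the thesis framework): candidate LEO constellation sizes are scored with an assumed per-satellite cost and a toy saturating coverage utility, and the highest-utility option under a budget is selected. Every number is an assumption for illustration.

COST_PER_SAT_MUSD = 2.5        # assumed build and launch cost per satellite
INCREMENTAL_COVERAGE = 0.02    # assumed coverage gain per added satellite

def utility(n_sats):
    """Toy multi-attribute utility collapsed to saturating coverage in [0, 1]."""
    return 1 - (1 - INCREMENTAL_COVERAGE) ** n_sats

candidates = [(n, n * COST_PER_SAT_MUSD, utility(n))
              for n in range(15, 121, 15)]
budget_musd = 150.0
feasible = [c for c in candidates if c[1] <= budget_musd]
n, cost, u = max(feasible, key=lambda c: c[2])
print(f"best under budget: {n} sats, {cost:.0f} M$, utility {u:.2f}")
-->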
<entry>
<title>Firm Dynamics under Industrial Policy</title>
<link href="https://hdl.handle.net/1721.1/155630" rel="alternate"/>
<author>
<name>Monden, Yuichiro</name>
</author>
<id>https://hdl.handle.net/1721.1/155630</id>
<updated>2024-07-11T03:42:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Firm Dynamics under Industrial Policy
Monden, Yuichiro
Designing an effective industrial policy is a critical issue for governments. How does the policy's effect on an industry as a whole vary with the attributes of the supported firms and with the nature of the supported industry? To answer this question, this thesis develops a model describing firm dynamics under government support in the form of tax credits and conducts simulation experiments while varying policy scenarios and parameters representing the industry's nature.&#13;
The results show that the impact of government support on an industry varies greatly depending on a parameter representing one aspect of the industry's nature: inertia to past market share. For industries where the inertia is below a certain level, there is a clear trade-off depending on the target of the support: support to large firms increases the size of the largest firms but reduces competition and widens the gap between firms, while support to small and medium-sized firms increases competition and narrows the gap but reduces the size of the largest firm in the industry. However, in industries where the inertia is greater than a certain level, the effect of such policies disappears: the inertia dominates the growth dynamics of the firms, and the policy becomes unable to change the state of the industry.&#13;
These results highlight the importance of identifying the nature of the industries to be supported when designing industrial policies. They also show that even when targeting the industries that policies can affect, it is difficult to find a single policy scenario that simultaneously improves the state of an industry from all perspectives. Policymakers need to design industrial policies that meet their purposes with an understanding of the benefits and sacrifices that result from different targets of government support.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a new affordable housing approach: A system-thinking set of criteria to assess quality</title>
<link href="https://hdl.handle.net/1721.1/155629" rel="alternate"/>
<author>
<name>Gottdiener Islas, David B.</name>
</author>
<id>https://hdl.handle.net/1721.1/155629</id>
<updated>2024-07-11T03:03:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Towards a new affordable housing approach: A system-thinking set of criteria to assess quality
Gottdiener Islas, David B.
Considering that the built environment footprint is expected to double by the second half of this century, mainly driven by growth – both economic and demographic – in developing countries, reconciling several tensions related to this expansion is of paramount importance. Chief among them are accommodating growth without sacrificing sustainability – given that the prevalent manufacturing processes enabling the construction sector yield a substantial portion of global GHG emissions – and providing affordable housing without neglecting quality. Thus, a deceptively simple question arises: what is affordable quality housing? Evidently, the question contains an opportunity – arguably, also an obligation – to employ a system-thinking perspective that observes, and is guided by, the relationships between housing and its broader urban system. So far, pervasive affordable housing development models (typically categorizing inert metrics as economic, social, and sustainable) have proven insufficient in several developing countries because of their disregard for a system-thinking approach. The goal of this work is to build a system-thinking approach that enables a two-way dialogue between further research – research that better equips housing development stakeholders with the criteria needed to think and act with the expected functions of housing in mind, enabling performance comparisons between multiple design concepts until desirable results are achieved through iterative improvement – and the empirical observations that reflect the dynamic nature of both housing needs and the methods to analyze and fulfill them.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring the product configuration complexity and cost for mass-customization of automobiles: A qualitative and quantitative study of the product variant complexity, its associated cost</title>
<link href="https://hdl.handle.net/1721.1/155626" rel="alternate"/>
<author>
<name>Vidhate, Chetan</name>
</author>
<id>https://hdl.handle.net/1721.1/155626</id>
<updated>2024-07-11T03:30:31Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Measuring the product configuration complexity and cost for mass-customization of automobiles: A qualitative and quantitative study of the product variant complexity, its associated cost
Vidhate, Chetan
This thesis presents an integrated model for analyzing product configuration complexity and cost, aiming to provide a comprehensive framework for decision-making in product configuration management. The research begins with a literature review to identify relevant complexity metrics, narrowing down to two primary metrics: structural and organizational complexity. The selected metrics are integrated into a hybrid model that conceptualizes product configuration complexity as a function of these factors. The model incorporates mathematical formulations for assessing structural and organizational complexities, allowing for a nuanced understanding of the challenges inherent in product configuration. Furthermore, a cost model is developed to quantify the financial implications of product configuration decisions, considering factors such as transport, assembly, and quality control costs. The model is applied to hypothetical scenarios, demonstrating utility in informing decision-making processes within original equipment manufacturers (OEMs). Future work is proposed to enhance the model by incorporating risk and uncertainties, conducting cost-benefit analyses, and refining the algorithm for optimal performance. Overall, this thesis contributes to the advancement of product configuration management practices by providing a comprehensive framework for analyzing complexity and cost in product configuration.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing the Enterprise Architecture of an Innovative Plant Engineering Company</title>
<link href="https://hdl.handle.net/1721.1/155625" rel="alternate"/>
<author>
<name>Watanabe, Yutaro</name>
</author>
<id>https://hdl.handle.net/1721.1/155625</id>
<updated>2024-07-11T03:39:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Designing the Enterprise Architecture of an Innovative Plant Engineering Company
Watanabe, Yutaro
Japan faces significant challenges such as a declining workforce due to aging demographics and the need to decarbonize its pivotal industrial sector. Plant engineering companies, which form the core of manufacturing and energy-related businesses, are expected to contribute to addressing these challenges. This thesis analyzes and proposes solutions to overcome the innovation dilemmas faced by major Japanese companies, including plant engineering firms.&#13;
&#13;
Specifically, an ARIES analysis was conducted to design new approaches to foster innovation. The results suggested that traditional business management practices might not be suitable for new venture development, prompting proposals for organizational structures and systems that simultaneously support existing and new business initiatives.&#13;
&#13;
Furthermore, to aid long-term investment decisions, this study developed a unique data-driven method, the Decision-Making Support Model (DMSM). Applying this model to a plant engineering company as a case study confirmed its capability to support data-driven decision-making.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technoeconomic Assessment of a Gossamer, Planar, Gigawatt-Scale Space-Based Solar Power System</title>
<link href="https://hdl.handle.net/1721.1/155624" rel="alternate"/>
<author>
<name>Althawadi, Mohamed Adel</name>
</author>
<id>https://hdl.handle.net/1721.1/155624</id>
<updated>2024-07-11T03:22:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Technoeconomic Assessment of a Gossamer, Planar, Gigawatt-Scale Space-Based Solar Power System
Althawadi, Mohamed Adel
This thesis aims to analyze space-based solar power (SBSP) from both technical and financial standpoints. While the analysis is mainly intended to validate the findings of existing literature on SBSP, it also seeks to identify the problems that need to be addressed for SBSP to become technically and financially viable. The technical analysis has been performed using the systems-theory method—a sequential process that involves stakeholder analysis, requirements derivation, preliminary concept generation, system decomposition, metrics formulation, architectural decision-making, and tradespace analysis. As for financial feasibility, the assessment has been based on two metrics: the net present value (NPV) and the levelized cost of electricity (LCOE). Although SBSP is deemed practical from an engineering perspective, the study concludes that it has yet to make financial sense. Its technical practicality is evidenced by the fact that all the components of an SBSP system are demonstrably operable. This thesis proposes a design that is expected to have a specific power of 91.5 W/kg, comparable to the specific power estimated for Caltech’s SSPP concept (98 W/kg). In contrast, NASA’s SPS-ALPHA concept has reportedly been designed to achieve a specific power of 57 W/kg. The financial infeasibility is demonstrated by the negative NPV and the exorbitant LCOE for all the scenarios considered. The validity of the calculated NPV and LCOE is bounded by the accuracy of the estimated costs, especially the cost of the satellites, which, contrary to prevailing studies, constitutes most of the system’s cost. For the NPV and LCOE to merit further consideration of SBSP, this thesis recommends boosting the efficiency-to-areal-density ratio of the PV array. This can be achieved by optimizing reflectors that are light enough to improve the efficiency-to-areal-density ratio of the planar structure yet rigid enough to resist deformation. This thesis also recommends improving the efficiency-to-areal-density ratio of the RF signal generator and transmission antenna by leveraging miniaturization techniques to unify the two components into a compact, cohesive module. Alternatively, a new set of materials should be explored for all three components through ongoing research.
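For readers unfamiliar with the two financial metrics, the following minimal Python sketch shows how NPV and LCOE are conventionally computed; every input is a hypothetical placeholder, not a figure from this thesis.

    # Standard definitions of the two metrics; all numbers below are
    # illustrative assumptions, not values from the study.
    def npv(cash_flows, rate):
        # cash_flows[0] is the (negative) upfront investment at t = 0
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    def lcoe(costs, energies, rate):
        # levelized cost: discounted lifetime costs over discounted energy
        disc_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
        disc_energy = sum(e / (1 + rate) ** t for t, e in enumerate(energies))
        return disc_cost / disc_energy

    years = 30
    flows = [-12e9] + [0.9e9] * years            # hypothetical capex and net revenue
    print(npv(flows, 0.07))                      # negative here, as the study reports
    print(lcoe([12e9] + [0.2e9] * years,         # capex, then annual opex
               [0] + [8.76e9] * years, 0.07))    # kWh/year from a 1 GW plant

A negative NPV across all plausible inputs, as reported above, means the project never recovers its discounted costs.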
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System interfaces to facilitate follow-up pharmaceutical care in the United States</title>
<link href="https://hdl.handle.net/1721.1/155622" rel="alternate"/>
<author>
<name>Faruque, Fahim</name>
</author>
<id>https://hdl.handle.net/1721.1/155622</id>
<updated>2024-07-11T03:38:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">System interfaces to facilitate follow-up pharmaceutical care in the United States
Faruque, Fahim
System interfaces are crucial in complex engineered systems but are understudied in follow-up pharmaceutical care. Therefore, this study aimed to develop a working definition of the concept "system interface" within the context of follow-up pharmaceutical care in the US Healthcare System. To achieve this, semi-structured interviews were conducted with various healthcare system stakeholders. The transcripts of these interviews were analyzed to identify the needs expressed by each interviewee, which were then aggregated at the stakeholder level. Overlapping needs were identified to determine which stakeholders needed to interact for a system aiming to fulfill that need. The results revealed that enhancing healthcare operations, enhancing patient engagement, and educating patients required the highest number of interactions, with 98, 95, and 91 interactions across 18, 17, and 18 stakeholders, respectively. In total, the needs overlap analysis yielded 26 additional functions that may be a component of a follow-up pharmaceutical care system to meet multi-stakeholder needs. These findings suggest that system interfaces are presently an ambiguous component of the system design in follow-up pharmaceutical care despite contributing significantly to the system's complexity.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing User Data Privacy and Trust through the Implementation of the OTrace Protocol: Development, Challenges, and Impact Assessment</title>
<link href="https://hdl.handle.net/1721.1/155621" rel="alternate"/>
<author>
<name>Wen, Dian</name>
</author>
<id>https://hdl.handle.net/1721.1/155621</id>
<updated>2024-07-11T03:14:51Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Enhancing User Data Privacy and Trust through the Implementation of the OTrace Protocol: Development, Challenges, and Impact Assessment
Wen, Dian
In the digital age, safeguarding user data privacy and rebuilding trust in digital platforms have become critical priorities as data breaches and misuse continue to rise. This thesis explores a novel traceability and accountability protocol, OTrace, which tackles these issues by providing users with enhanced control and transparency over their personal data through advanced consent mechanisms and data traceability features. The study thoroughly analyzes current data privacy frameworks, identifying areas where the OTrace protocol can bridge gaps. Following comprehensive software and product development methodologies, the thesis details the technical design and development of a web service that implements the OTrace protocol, including requirements analysis, use case definition, system architecture, and Application Programming Interface (API) specification. The research addresses the technical, regulatory compliance, and user acceptance challenges encountered, offering insights into potential solutions and strategies. The thesis concludes by examining the outcomes of the enhancements to user privacy and trust perceptions, addressing the study’s limitations and potential areas for future research, and discussing the protocol’s promise for policymakers, developers, and businesses in establishing a more secure and transparent digital landscape, ultimately strengthening user data privacy and fostering greater trust.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Conceptual Design of a Nuclear Microreactor Transportation Cask</title>
<link href="https://hdl.handle.net/1721.1/155620" rel="alternate"/>
<author>
<name>Crawford, Carmen Sleight</name>
</author>
<id>https://hdl.handle.net/1721.1/155620</id>
<updated>2024-07-11T03:30:52Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Conceptual Design of a Nuclear Microreactor Transportation Cask
Crawford, Carmen Sleight
Nuclear microreactors are a rising technology with the potential to be fueled at a central facility and transported to the operation site, which would mark the first attempt to transport a fueled commercial reactor in the US. A standard Type B cask may be adapted to transport a fueled microreactor core, passing the normal condition tests and hypothetical accident condition tests, as demonstrated using a sample microreactor core design with heat pipes. An adequate shutdown reactivity margin (k_{eff} &lt; 0.95) can be maintained using the control drums and shutdown rods, except if the heat pipes are broken open and flooded with water. Shielding the gamma radiation so that the dose rate at a distance of 2 m from the outer surface of the cask remains below 0.1 mSv/h requires 57 tons of lead, including 14 cm of radial shielding, 10 cm of solid axial shielding, and 14 cm of axial shielding through which the heat pipes pass. Decay heat can be effectively removed using thermal fins on the outer surface of the cask to maintain a surface temperature below 85°C. Lead shielding melts during the hypothetical accident thermal tests, which suggests that the lead must be properly protected from puncture or, better, replaced by depleted uranium (which has a higher density and melting point) in further work. Redwood impact limiters and stainless steel 316 shims are sufficient to keep the vessel and heat pipes intact, provided the holes through which the heat pipes pass through the lead shield allow at least 0.21 mm of clearance. The normal and hypothetical accident condition thermal tests and the hypothetical accident condition free drop and radiation tests were feasible with this standard Type B cask design.
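As a rough illustration of how the radial lead thickness scales, the sketch below applies simple exponential attenuation; the attenuation coefficient and unshielded dose rate are assumptions for illustration, not values from the thesis, and buildup and geometry effects are ignored.

    import math

    # assumed linear attenuation coefficient of lead near 1 MeV (1/cm)
    mu_lead = 0.77
    dose_unshielded = 50.0   # mSv/h at 2 m with no shield (hypothetical)
    dose_limit = 0.1         # mSv/h target quoted in the abstract

    thickness_cm = math.log(dose_unshielded / dose_limit) / mu_lead
    print(f"required lead: {thickness_cm:.1f} cm")

Even this crude estimate lands on the order of 10 cm, broadly consistent with the 14 cm radial figure above once buildup and design margins are included.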
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Study on Leveraging Generative Artificial Intelligence and Text Clustering to Support Vendors</title>
<link href="https://hdl.handle.net/1721.1/155619" rel="alternate"/>
<author>
<name>Hubbard, Steven</name>
</author>
<id>https://hdl.handle.net/1721.1/155619</id>
<updated>2024-07-11T03:11:27Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Study on Leveraging Generative Artificial Intelligence and Text Clustering to Support Vendors
Hubbard, Steven
This research is an initiative to discover how generative artificial intelligence (AI) tools can improve Amazon Last Mile’s feedback systems to enhance the Delivery Service Partner experience. Our specific focus is on the effectiveness of clustering algorithms such as DBSCAN and K-Means for grouping text feedback by semantic similarity, and on the use of retrieval-augmented generation (RAG) for extracting actionable insights. Our findings indicate that K-Means is more effective than DBSCAN for clustering this feedback, but the overall effectiveness is moderate, which necessitates human verification to counter potential model hallucinations. Additionally, the use of RAG with Claude 2.1 demonstrated promise in answering domain-specific questions, in spite of limitations related to text-only input.&#13;
&#13;
We propose future emphasis on integrating AI into current listening mechanisms to offer concise, actionable recommendations for program leaders. This research also recommends continued exploration of embedding models and RAG frameworks to enhance feedback quality and information retrieval. The potential to integrate generative AI tools within Amazon Last Mile represents an underexplored opportunity for significant enhancements in efficiency, accuracy, and overall partnership satisfaction.
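A minimal sketch of the clustering comparison described above, assuming scikit-learn and precomputed text embeddings; the random data, cluster count, and parameters are stand-ins, not the study’s actual configuration.

    import numpy as np
    from sklearn.cluster import KMeans, DBSCAN
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    embeddings = rng.normal(size=(500, 384))  # stand-in for sentence embeddings

    km_labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(embeddings)
    db_labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(embeddings)

    print("k-means silhouette:", silhouette_score(embeddings, km_labels))
    # DBSCAN labels noise points as -1; score only the clustered points
    mask = db_labels != -1
    if mask.sum() > 0 and len(set(db_labels[mask])) > 1:
        print("dbscan silhouette:", silhouette_score(embeddings[mask], db_labels[mask]))

On real feedback embeddings, a comparison of this shape (plus human review of cluster contents) is one way to ground the “K-Means over DBSCAN” finding.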
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effective Assignment of Construction Managers to Construction Sites</title>
<link href="https://hdl.handle.net/1721.1/155618" rel="alternate"/>
<author>
<name>Suzuki, Kensuke</name>
</author>
<id>https://hdl.handle.net/1721.1/155618</id>
<updated>2024-07-11T03:02:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Effective Assignment of Construction Managers to Construction Sites
Suzuki, Kensuke
In the construction industry, construction management is a very important job, and many people work as construction managers at construction sites every day. Construction management consists of various kinds of work. However, this work has not been clarified in a well-organized way, and construction managers have difficulty teaching and learning construction management skills effectively. There are several reasons for this. One is a characteristic of construction work itself: each building is a one-off experience that cannot be repeated on another building, unlike other products. Another reason is the assignment of construction managers: when a project ends, its members are assigned to other projects, so assignments are a matter of timing and are not always optimal. A further reason is that much of the knowledge related to construction is tacit, which makes construction management difficult to learn through lecture-style education; it also takes a long time for a young construction manager to become a skillful one. Construction management is therefore hard to treat in an organized way. We consequently tried building a model of construction management that helps in understanding its structure. In this research, we built the model in two steps. First, we made an initial model of construction management; at this stage the model was essentially a list of factors and did not yet clarify the structure of construction management. Then, we conducted a survey consisting of several questions related to construction management and its training. The purpose of this survey was to grasp the characteristics and essence of construction management and its training and to reorganize the first version of the model. Based on the survey, we improved the model. The second version demonstrates what is essential in construction management and what can be trained, and it is helpful for both the training of construction managers and their assignment. We elicited several insights from the model of construction management created based on the results of the survey.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Overcoming Challenges in Cellular Therapies: A Systems Engineering Approach for Equitable Access</title>
<link href="https://hdl.handle.net/1721.1/155617" rel="alternate"/>
<author>
<name>Latouche, Eduardo Luis</name>
</author>
<id>https://hdl.handle.net/1721.1/155617</id>
<updated>2024-07-11T03:12:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Overcoming Challenges in Cellular Therapies: A Systems Engineering Approach for Equitable Access
Latouche, Eduardo Luis
Cellular and gene therapies have ushered in a new era of medical treatment, promising cures previously thought unattainable. Technologies like CRISPR/Cas9 enable precise genome manipulation, yet challenges persist in therapy delivery, prompting the rise of ex vivo approaches. Despite the promise of adoptive cell therapies, high development costs, manufacturing complexities, and regulatory hurdles hinder widespread adoption. The lack of agreement in the field with respect to centralized versus decentralized manufacturing models, and the choice between autologous and allogeneic cell sources, pose additional challenges. Equally critical for global access to these therapies, personnel shortages and specialized expertise requirements must be addressed. A systems engineering approach offers a framework for overcoming these barriers, facilitating comprehensive bioprocess design analysis. Ultimately, developing a descriptive model for analyzing therapeutic delivery is crucial for ensuring equitable access to these transformative therapies worldwide.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Impact of Thermostat Automation and Retail Rate Designs on Cooling and Heating Flexibility: Balancing Consumer Preferences and an Efficient Grid</title>
<link href="https://hdl.handle.net/1721.1/155612" rel="alternate"/>
<author>
<name>Schmitz, Zack</name>
</author>
<id>https://hdl.handle.net/1721.1/155612</id>
<updated>2024-07-11T04:02:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">The Impact of Thermostat Automation and Retail Rate Designs on Cooling and Heating Flexibility: Balancing Consumer Preferences and an Efficient Grid
Schmitz, Zack
Flexibility in household energy consumption is crucial for improving grid efficiency and reducing peak electricity demand. The ongoing impact of climate change and the move towards electrification worsen these challenges, emphasizing the need for effective peak demand reduction strategies. Current approaches often involve peak pricing retail tariffs, behavioral responses to grid operator notifications, or expensive technologies such as demand-side batteries. However, these methodologies rely on unpredictable consumer participation or substantial capital investments. On the other hand, the growing use of smart thermostats presents an opportunity for passive, efficient control of household energy consumption. Combining smart thermostats with appropriate price signals creates an opportunity to optimize the balance between energy cost and thermal comfort. This work examines the role of smart thermostat automation and dynamic retail rate designs in maximizing heating and cooling flexibility while ensuring consumer comfort. The research introduces a new approach to demand-side management by using reinforcement learning (RL) to optimize thermostat settings based on individual thermal preferences and price signals. A comprehensive testbed simulation framework was developed to analyze these effects, incorporating bottom-up energy modeling, individualized thermal comfort profiles using smart thermostat data, and advanced thermostat controls to investigate the impacts of various rate designs on residential energy demand. The study evaluates these impacts at a population level, considering the effects on over 80 household archetypes across a localized region. Key findings show that partitioned time-of-use rates with moderate pricing shifts effectively reduce energy usage without creating new peaks, unlike more aggressive pricing strategies that can lead to pre-cooling-induced new peaks. These insights offer valuable guidance for policymakers and utility operators in designing rate frameworks that decrease overall electricity consumption and peak demand without compromising personal comfort.
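One way to read the cost-comfort balance described above is as a reward that an RL controller maximizes; the sketch below is a hypothetical formulation with invented weights and prices, not the thesis’s actual reward function.

    # Hedged sketch of an RL thermostat reward trading off energy cost
    # against thermal discomfort; beta and all inputs are assumptions.
    def reward(price_per_kwh, kwh_used, indoor_temp, preferred_temp, beta=0.5):
        energy_cost = price_per_kwh * kwh_used
        discomfort = (indoor_temp - preferred_temp) ** 2  # quadratic comfort penalty
        return -(energy_cost + beta * discomfort)

    # a peak-priced hour: the high price discourages cooling until discomfort grows
    print(reward(price_per_kwh=0.45, kwh_used=1.2, indoor_temp=26.0, preferred_temp=23.5))

Under a time-of-use tariff, the price term shifts consumption away from peak hours; the comfort term, calibrated per household, is what prevents the pre-cooling overshoot noted above.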
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamics of Corporate Entrepreneurship in Technology Companies: A Study of Strategic Practices and Governing Frameworks Shaping Entrepreneurial Ecosystems</title>
<link href="https://hdl.handle.net/1721.1/155611" rel="alternate"/>
<author>
<name>Addanki, Sowmya</name>
</author>
<id>https://hdl.handle.net/1721.1/155611</id>
<updated>2024-07-11T03:32:58Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Dynamics of Corporate Entrepreneurship in Technology Companies: A Study of Strategic Practices and Governing Frameworks Shaping Entrepreneurial Ecosystems
Addanki, Sowmya
Corporate entrepreneurship is a strategic imperative for technology enterprises in a competitive landscape where evolution manifests on an exponential scale. This study examines how leading tech firms like Amazon, Google, and Microsoft foster innovation through diverse entrepreneurial initiatives, balancing autonomy with strategic alignment. The research employs a qualitative approach, using expert interviews, case studies, and literature analysis to explore internal and external innovation strategies. Findings highlight the importance of aligning entrepreneurial endeavors with long-term goals and fostering a culture that encourages risk-taking and adaptability. The Strategic Entrepreneurship Framework (SEF) is proposed to analyze the diverse approaches to innovation, revealing distinct strategies in acquisitions, venture capital investments, and internal incubators adopted by these established tech firms. Amazon emphasizes employee empowerment and strategic acquisitions, Google focuses on "moonshot" projects and external partnerships, while Microsoft prioritizes internal hackathons and cultural transformation. This study provides a comprehensive understanding of corporate entrepreneurship in the tech sector, and serves as a valuable resource for understanding how leading tech companies drive innovation, with interesting implications for future research. Further investigation could explore the impact of emerging technologies on these strategies, their scalability, and long-term sustainability amidst global shifts.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Dynamics modelling of Organizational Culture Transformation: &#13;
A study of the organizational and technical factors that affect the implementation of Toyota production system in organizations</title>
<link href="https://hdl.handle.net/1721.1/155610" rel="alternate"/>
<author>
<name>Sreekumar, Anup</name>
</author>
<id>https://hdl.handle.net/1721.1/155610</id>
<updated>2024-07-11T03:26:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">System Dynamics modelling of Organizational Culture Transformation: &#13;
A study of the organizational and technical factors that affect the implementation of Toyota production system in organizations
Sreekumar, Anup
Organizational culture is a vital source of competitive advantage. Nevertheless, it is often overlooked or given a lower priority due to its complex nature and the effort required to drive change. Existing change methodologies offer frameworks for implementing and sustaining organizational change, but success rates are low, with only one in three endeavours yielding favourable results. This research adopts a systems thinking approach to culture change, utilizing system dynamics modelling to unravel the dynamics of change.&#13;
Initially, a hybrid change methodology is developed, incorporating the best aspects of models from the literature and insights gained from organizational experiences in change efforts, including standard failure modes. This hybrid method serves as the foundation for building the qualitative system dynamics model. The developed model represents a shift from conventional linear methods to a circular, system-based approach to change efforts. The qualitative (causal loop diagram) system dynamics model gives leaders a transformative understanding of the interconnected relationships and temporal dynamics involved.&#13;
Further research involves validating and updating the model through experimentation within a company, where key variables in the model can be measured easily. These measurements and the qualitative model together can be leveraged to produce an action plan to improve each variable. By following the action plan and tracking these variables repeatedly, the change implementation can be sustained.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technological Development Trajectories of the Component Technologies in Battery Electric Vehicles</title>
<link href="https://hdl.handle.net/1721.1/155609" rel="alternate"/>
<author>
<name>Iijima, Rei</name>
</author>
<id>https://hdl.handle.net/1721.1/155609</id>
<updated>2024-07-11T03:47:44Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Technological Development Trajectories of the Component Technologies in Battery Electric Vehicles
Iijima, Rei
As concern about climate change grows, interest in battery electric vehicles (BEVs) is rising. BEVs are forecasted to constitute about 40% of passenger vehicle sales in 2035. While BEVs produce no tailpipe emissions, they face challenges, such as driving range and refueling time, that require technological advancements to improve performance and social acceptance. Since the evolution and replacement of component technologies have propelled BEV progress, mapping their development trajectories may yield insights into future evolution.&#13;
&#13;
This thesis explores the technological development trajectories of batteries, ultracapacitors, battery management systems, electric motors, power electronics, and heat pumps, using main path analysis with U.S. patents published up to 2023. This method detects technological development trajectories and the key patents along them by identifying frequently cited patents, taking advantage of the enormous body of patent data.&#13;
&#13;
The results reveal that critical innovations do not necessarily occur in the periods when the most innovations occur. In some technological categories, such as battery circuit arrangements in power electronics, important innovations have been made steadily, and the trend appears likely to continue. Other categories, such as magnetic circuits of electric motors, have seen recent, intensive critical innovation as attention has increased due to their high potential to improve performance. In addition, obtaining U.S. patents for core technologies, including batteries, battery management systems, electric motors, power electronics, and heat pumps, is crucial to gaining U.S. BEV market share, though this is not sufficient for success in the global market. Furthermore, such patents are not necessarily critical innovations in the technological development of the field. Current trends illustrate that significant BEV innovations are distributed across various entities. This suggests that although patents in the automotive industry have typically been held within vertically integrated supply chains, diverse supply chain strategies, including incorporating innovative startups or entering horizontal partnerships with companies that hold emerging technologies, are gaining importance for staying competitive in a market where leadership in each technology can swiftly change.
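A minimal sketch of the search-path-count (SPC) weighting at the heart of main path analysis, run on a toy citation DAG (edges point from cited to citing patent); the graph is fabricated for illustration and networkx is assumed to be available.

    import networkx as nx

    G = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")])

    order = list(nx.topological_sort(G))
    # paths from any source (uncited patent) down to each node
    n_from_source = {v: 1 if G.in_degree(v) == 0 else 0 for v in G}
    for v in order:
        for u in G.predecessors(v):
            n_from_source[v] += n_from_source[u]
    # paths from each node out to any sink (most recent patents)
    n_to_sink = {v: 1 if G.out_degree(v) == 0 else 0 for v in G}
    for v in reversed(order):
        for w in G.successors(v):
            n_to_sink[v] += n_to_sink[w]

    # SPC of an edge = paths reaching its tail x paths leaving its head
    spc = {(u, w): n_from_source[u] * n_to_sink[w] for u, w in G.edges}
    print(max(spc, key=spc.get))  # the most-traversed citation link

Greedily following the highest-SPC outgoing edge from a source traces the main path; the key patents are the nodes along it.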
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study and analysis of the evolution of knee arthroplasty&#13;
surgery through its technological innovation</title>
<link href="https://hdl.handle.net/1721.1/155608" rel="alternate"/>
<author>
<name>Momenzadeh, Mariam</name>
</author>
<id>https://hdl.handle.net/1721.1/155608</id>
<updated>2024-07-11T03:13:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Study and analysis of the evolution of knee arthroplasty&#13;
surgery through its technological innovation
Momenzadeh, Mariam
Total Knee Arthroplasty (TKA) offers life-changing improvements for many patients; however, a considerable portion, 10-15%, continue to experience dissatisfaction after the surgery. Given the rise in the aging population, increased insurance eligibility for TKA in patients with milder symptoms, and growing interest in robotic surgery, it is important to identify technology gaps that can improve overall patient outcomes. This analysis aims to map the network of processes and stakeholders involved in the TKA journey, from pre-operative planning to post-operative rehabilitation. It examines existing technologies employed across the stages of TKA, understanding their functionalities, evaluating their limitations, and assessing their impact on patient outcomes, while identifying the areas where investment in technology and innovation is most critical.&#13;
Through this investigation, the thesis seeks to shed light on the complexities of the TKA ecosystem, pinpointing some of its limitations and opportunities for technological advancement. This work serves as a decision-making guide, potentially empowering innovators to channel their resources toward impactful solutions that elevate both short- and long-term patient outcomes following TKA surgery.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Data-Driven Approach to Understanding the Attrition of Women in Software Engineering</title>
<link href="https://hdl.handle.net/1721.1/155607" rel="alternate"/>
<author>
<name>Golison, Madeleine A.</name>
</author>
<id>https://hdl.handle.net/1721.1/155607</id>
<updated>2024-07-11T03:51:16Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Data-Driven Approach to Understanding the Attrition of Women in Software Engineering
Golison, Madeleine A.
Data from large tech companies shows that 15% or fewer of software engineers are women. While Tech companies blame the university pipeline, studies from McKinsey and Accenture found that Tech company “bro culture” was pushing women out of the pipeline. However, in the MIT Women in Software Engineering survey, most of the 183 respondents reported planning to stay in Tech when leaving SWE roles. This led to the hypothesis that female software engineers were leaving SWE roles for reasons other than “bro culture.” Understanding and improving the attrition of women in the software engineering career path is important because the representation of women in the field is already so small that any attrition is consequential. Overall, many factors were found to have influenced the retention of women in software engineering roles. Notably, culture was not the single most important reason for women leaving the software engineering career path. The primary reason directly stated in the open-ended survey responses was “burnout,” closely followed by reasons such as finding other opportunities outside of Tech, a desire for better work-life balance, and the lack of diversity. While these explicitly stated reasons were easily noted, predictive models (using logistic regression and tree-based methods) were needed to illuminate factors not explicitly identified by respondents. The predictive models identified the primary reasons women leave SWE roles by comparing women who planned to remain in the SWE career path with those who did not. The top reasons identified were not enjoying programming, believing that better opportunities existed outside of software engineering, and being co-located with their team. The last reason, team co-location, was related to various other environmental factors tied to imposter syndrome and was likely a proxy for them. Women aged 25-44 seemed particularly at risk of leaving the career path, and the general population and the specific 25-34 and 35-44 age groups each had different factors that mattered most. Given these results, several recommendations exist for improving attrition for women in the software engineering career path. The key recommendations include improving manager feedback processes, diversity, work-life balance, and opportunities to work on high-visibility initiatives.
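A hedged sketch of the kind of predictive model described above; the feature names and synthetic data are invented stand-ins for the survey fields, not the study’s data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    # columns: enjoys_programming, sees_better_outside_opportunities, co_located
    X = rng.integers(0, 2, size=(183, 3)).astype(float)
    # synthetic "plans to leave SWE" label loosely tied to the first two columns
    y = ((1 - X[:, 0]) + X[:, 1] + rng.normal(0, 0.5, 183) > 1.0).astype(int)

    model = LogisticRegression().fit(X, y)
    # coefficient signs/magnitudes indicate which factors drive leaving SWE
    names = ["enjoys_programming", "better_outside", "co_located"]
    for name, coef in zip(names, model.coef_[0]):
        print(f"{name}: {coef:+.2f}")

On the real survey data, inspecting coefficients (or tree feature importances) this way is what surfaces factors respondents did not state explicitly.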
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Project Management for Research and Development</title>
<link href="https://hdl.handle.net/1721.1/155605" rel="alternate"/>
<author>
<name>Hanenkratt, Aaron C.</name>
</author>
<id>https://hdl.handle.net/1721.1/155605</id>
<updated>2024-07-11T03:46:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Project Management for Research and Development
Hanenkratt, Aaron C.
Pharmaceutical industry research and development efforts are highly uncertain, expensive, and lengthy projects. These drug development projects require great care from their inception to ensure that unsafe or ineffective projects are canceled as soon as possible, as projects canceled later in the drug development process can incur much greater costs, potentially impacting resources for other projects in medium-sized companies or causing small companies to shutter completely. Proper project management guides the execution of a project and establishes a communication format for decision-makers in upper management. This work is not a definitive decision-making framework for project progression; rather, it crafts a project management system around existing small molecule drug development efforts, aiding the decision-making process. A literature review of existing project management systems, conducted for this work, yielded no definitive project management methodology for drug development; instead, it revealed a project management maturity gap in the pharmaceutical industry compared to other industries. Project managers must be adaptive, even in projects with little deviation from expected progression, as those deviations can severely impact the overall project if not handled properly. A project management system that can evolve with an R&amp;D project's progression can provide some structure to a very uncertain effort. To identify a project management system for small molecule drug development, the main activities and processes are determined and examined to understand the dynamics of a drug development effort. These general activities and processes are verified with industry personnel, as each company and project may differ. Existing project management systems are also discussed to determine whether any suitable methodologies exist. If a general drug development process is established and a project management methodology is developed to align closely with that process, the project manager can develop the requisite knowledge and skills to lead a complex pharmaceutical R&amp;D project. The model can then be applied to other drug development efforts to ensure proper management and alignment of project efforts with unique project or company needs.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Influences on Startup Decision Making: Applying Systems Thinking and Lenses to Investigate the Perspectives of Startup Leaders</title>
<link href="https://hdl.handle.net/1721.1/155604" rel="alternate"/>
<author>
<name>Durrenberger, Marcelle</name>
</author>
<id>https://hdl.handle.net/1721.1/155604</id>
<updated>2024-07-11T03:31:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Understanding Influences on Startup Decision Making: Applying Systems Thinking and Lenses to Investigate the Perspectives of Startup Leaders
Durrenberger, Marcelle
A startup is an organization that operates under extreme conditions of risk and uncertainty but can achieve disruptive and radical innovation by solving problems with novel technologies and solutions. Under these conditions, startup leaders continue to inform themselves and make decisions, relying on their perspectives, their knowledge, and the team and tools around them. However, with a 50% failure rate within five years (U.S. BLS 2016), something is missing. Rather than try to identify failure modes, this research explores the perspectives of startup leaders, the people whose decisions lead a startup to success or failure.&#13;
This research explores the application of systems thinking to the perspectives of startup leaders within the context of early and growth-stage hardware technology startups across industries in North America. The objectives consist of a literature review on startups, using a qualitative approach to gain insight into how startup leaders perceive their startups, and developing and applying a set of system lenses to assess startups holistically. Data is collected through interviews and processed using deductive and latent thematic approaches, affinity mapping, DSMs, and applying a set of startup system lenses. &#13;
An extensive literature review explores the literature surrounding the startup ecosystem and is leveraged to derive ten startup system lenses to assess startups holistically. These lenses encompass company and ecosystem elements, including finances, market climate, and business development. By conducting and analyzing the interviews with startup leaders, this research discovers their priorities, execution focus areas, reflections, learnings, and perspectives on the proposed lenses. The analysis captures priorities like speed to market and flexibility, as well as execution focus areas of having a high-performance team and focusing on fundraising and cash flow. This research also elaborates on shared experiences and challenges these leaders experienced, including achieving product market fit and strategically picking customers. The final analysis applies the system lenses using affinity mapping and a Design Structure Matrix (DSM) to holistically view how startup leaders view and discuss the system lenses and the interfaces. This holistic view exposes both which lenses and interfaces the startup leaders are discussing and which they are not discussing.&#13;
This research lays a foundation for applying systems thinking principles and systems lenses to improve the navigation of unknowns and uncertainties during decision making. There is the potential to assess the quality and comprehensiveness of the information used and the thinking used for decision making within startups. The future work of this research includes understanding the specific connections at the interfaces of the lenses, evaluating the impact of the type and quality of information, and understanding drivers on decision making.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Delivery Estimate  Accuracy: Understanding and Reducing Virtual-Physical Mismatches and Missorts in Fulfillment Centers</title>
<link href="https://hdl.handle.net/1721.1/155603" rel="alternate"/>
<author>
<name>Yao, Rong (Jenny)</name>
</author>
<id>https://hdl.handle.net/1721.1/155603</id>
<updated>2024-07-11T03:01:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Delivery Estimate  Accuracy: Understanding and Reducing Virtual-Physical Mismatches and Missorts in Fulfillment Centers
Yao, Rong (Jenny)
Delivery Estimate Accuracy (DEA) is the Amazon Operations metric that measures the percentage of items for which delivery was attempted on or before the Promised Delivery Date (PDD). There are significant costs and customer-experience impacts when packages are not delivered on time, resulting in a DEA miss. Two types of DEA misses are less well understood than others and make up a large proportion of the overall misses: Virtual-Physical Mismatch (VPM) and Missort. This project focuses on understanding and reducing the number of VPM and Missort misses in Fulfillment Centers, with the scope being Amazon’s Traditional Non-Sort Fulfillment Centers in the US.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Business and Technical System Modeling of Rail Projects with Uncertainty Analysis</title>
<link href="https://hdl.handle.net/1721.1/155602" rel="alternate"/>
<author>
<name>Fujii, Yosuke</name>
</author>
<id>https://hdl.handle.net/1721.1/155602</id>
<updated>2024-07-11T03:27:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Integrated Business and Technical System Modeling of Rail Projects with Uncertainty Analysis
Fujii, Yosuke
Overseas technical cooperation and technology transfer projects face many hurdles. An example is the overseas expansion and operation of high-speed railways, which are highly integrated systems. This research uses the Northeast Corridor SCMAGLEV project, a Japanese high-speed railway overseas cooperation project with the United States, as a model to consider what types of hurdles exist and the options and decision-making involved in dealing with them. We proceeded to build a model with the aim of proposing useful measures for dealing with complex projects.&#13;
Intended as a useful management method and source of decision-making material for projects with such complex characteristics, the prototype model we built integrates business and technical systems and enables uncertainty analysis.&#13;
An advantage of this model is that it allows us to consider combinations of multiple system decisions and multiple business decisions. For example, by looking at the distribution of uncertainty, it became possible to visualize the state of risk sharing under different schemes (e.g., PPP and non-PPP). By focusing on items where the expected NPV changes significantly depending on the business decision, it became possible to identify in advance contract forms in which it is difficult to set numbers. We were also able to show that the impact of long-term borrowing and interest cannot be ignored under some business schemes. We found that the prototype model is useful for pursuing overall optimization while considering complex combinations.
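The uncertainty analysis pattern described above can be sketched as Monte Carlo sampling of cost and demand drivers with an NPV computed per scheme; the distributions and the downside-cap rule below are invented for illustration, not taken from the model.

    import numpy as np

    rng = np.random.default_rng(42)
    n, years, r = 10_000, 40, 0.05
    discount = 1 / (1 + r) ** np.arange(1, years + 1)

    capex = rng.normal(10e9, 2e9, n)                 # construction cost uncertainty
    revenue = rng.lognormal(np.log(0.8e9), 0.3, n)   # annual ridership revenue
    npv = (revenue[:, None] * discount).sum(axis=1) - capex

    # under a PPP, the public side might cap the private downside,
    # reshaping the left tail of the NPV distribution
    npv_ppp = np.maximum(npv, -2e9)
    print("share of negative-NPV runs, non-PPP:", (0 > npv).mean())
    print("share of capped-loss runs, PPP:     ", (0 > npv_ppp).mean())

Comparing the two resulting distributions is what makes the risk sharing between schemes visible, as described above.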
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Neutron Transmutation of Spent Fuel</title>
<link href="https://hdl.handle.net/1721.1/155596" rel="alternate"/>
<author>
<name>Wickert, Charlotte I.</name>
</author>
<id>https://hdl.handle.net/1721.1/155596</id>
<updated>2024-07-11T03:45:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Non-Neutron Transmutation of Spent Fuel
Wickert, Charlotte I.
This thesis is a scoping study to assess the feasibility of utilizing non-neutron transmutation to target Long-Lived Fission Products (LLFPs), which account for 99% of the long-term radiotoxicity of spent nuclear fuel. With half-lives ranging from 100,000 to 10,000,000 years, LLFPs pose a significant obstacle to long-term high-level waste storage. Geologic repositories for nuclear waste must be functional for millions of years. This significant timescale contributes to the many technical and political challenges preventing the U.S. from closing the back end of the nuclear fuel cycle for High-Level Waste (HLW). The need for a geologic-time-scale repository could be reduced if the most active isotopes present in HLW could be identified and transmuted. While disposal would still be necessary, a smaller time scale could resolve some of the most significant concerns associated with the current million-year time scale. Several computational methods, TALYS, TASMAN, PHITS, and FISPACT, are utilized to model the complete transport and transmutation process for proton irradiation to explore the potential of converting LLFP isotopes into stable or shorter-lived forms. TALYS is used to generate proton cross sections for key LLFPs, as there are no differential cross section measurements in the energy range of interest (18-70 MeV). The uncertainty in the transmutation rate is calculated from the perturbed cross sections generated by TASMAN and TALYS in work supporting this thesis. The physics of the proton beam is modeled in supporting work using PHITS to provide a flux-energy spectrum and estimate the number of irradiated particles. Finally, FISPACT calculates the amount of depletion for each LLFP. A comparison of alpha and deuteron irradiation is performed using cross sections from the TENDL2021 library and SRIM to determine the penetration depth for each incident particle. Preliminary findings indicate that longer irradiation times and higher beam energies enhance transmutation, resulting in a decreased long-term abundance of LLFPs compared to natural decay conditions. For commercial proton accelerators with a 10 mA current operating continuously, the transmutation rates for LLFPs range from 0.59 ± 0.12 g/year to 7.51 ± 1.19 g/year. Most LLFPs are produced in a 1 GW (thermal) reactor at rates on the order of 1 kg/year. Therefore, the transmutation rates achievable with commercial accelerators are too low to make a significant impact. However, increasing the proton beam energy to take advantage of proton spallation reactions may be successful, especially in the case of Selenium-79: 660 g/year of Selenium-79 are produced in a 1 GW (thermal) reactor, and initial spallation estimates show that approximately 24 g/year could be transmuted with a single accelerator. Future work will focus on improving the spallation irradiation scheme and target design. This work was supported by the DOE ARPA-E Project.
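For scale, the thin-target arithmetic behind a g/year transmutation rate can be sketched as below; the cross section and target loading are placeholder assumptions, not TALYS/FISPACT outputs from this work, and only the 10 mA beam current is taken from the abstract.

    E_CHARGE = 1.602e-19           # C per proton
    AVOGADRO = 6.022e23            # atoms/mol

    current_a = 10e-3              # 10 mA proton beam, as in the abstract
    protons_per_s = current_a / E_CHARGE

    sigma_cm2 = 50e-27             # assumed 50 mb transmutation cross section
    areal_density = 1e22           # assumed target atoms per cm^2 (thin target)
    molar_mass = 79.0              # g/mol, e.g. Se-79

    reactions_per_s = protons_per_s * sigma_cm2 * areal_density
    grams_per_year = reactions_per_s * 3.156e7 * molar_mass / AVOGADRO
    print(f"{grams_per_year:.2f} g/year")

Even with generous assumptions, this lands in the single-gram-per-year range, consistent with the conclusion above that one commercial accelerator cannot keep pace with kg/year production.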
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Introducing AI as a Team Member During the Fuzzy Front End of New Product Introduction Projects in the Medical Device Industry: An Experimental Design</title>
<link href="https://hdl.handle.net/1721.1/155595" rel="alternate"/>
<author>
<name>Asher, Roy</name>
</author>
<id>https://hdl.handle.net/1721.1/155595</id>
<updated>2024-07-11T03:13:02Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Introducing AI as a Team Member During the Fuzzy Front End of New Product Introduction Projects in the Medical Device Industry: An Experimental Design
Asher, Roy
Artificial Intelligence has been around since the 1950s. More recently, with the introduction of advanced machine learning methods, the markets have seen many complex AI solutions that can interact with humans in a seemingly natural way. Furthermore, these technologies are proliferating at an accelerating rate, and new promises are being marketed daily in every media outlet. In light of this fast technological expansion, there is a need for additional research to evaluate these types of solutions.&#13;
This study focuses on the medical device industry. Industries that are highly regulated, like the healthcare technology space, are specifically interesting as they must comply with strict requirements imposed due to the industry's risky nature. The study aims to see how introducing AI as an expert research and development team member at the early phase of a new product introduction medical device project affects a medical device R&amp;D team’s capability to implement scope effectively and efficiently. &#13;
The experimental design starts with identifying barriers to team performance from literature and through interviews with seasoned industry leaders in medical devices. A battery of experiments is designed to provide a more complete assessment of the effects of AI as a team member on an R&amp;D team in the medical device industry, as AI expertise in one area can be more impactful than in another. This also ensures an understanding of how our stakeholders' various areas of interest will be affected so that AI can drive value as a team member. The study evaluates and anchors many architectural aspects of experimental design. Clear hypotheses are provided in a format that promotes insightful statistical analysis. A medical device challenge is presented as a game for teams to participate in. Protocols detail preparing for and executing the experiments as well as the post-analysis. &#13;
Through this systematic design, the work identifies and explores the complexity of executing the experiments. Implementing the guidance toward study execution and executing these experiments is a natural continuation of this work. Furthermore, the study design lays a foundation for further research in the sociotechnical integration between AI and humans.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Approach to Low-Cost, Modular Autonomous Surface Vehicle and Autonomous Underwater Vehicle Integration</title>
<link href="https://hdl.handle.net/1721.1/155594" rel="alternate"/>
<author>
<name>Hamel, John M.</name>
</author>
<id>https://hdl.handle.net/1721.1/155594</id>
<updated>2024-07-11T03:02:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Systems Approach to Low-Cost, Modular Autonomous Surface Vehicle and Autonomous Underwater Vehicle Integration
Hamel, John M.
This thesis investigates the utility of docking low-cost Autonomous Underwater Vehicles (AUVs) with low-cost Autonomous Surface Vehicles (ASVs) through application of the systems engineering process. With the decreasing cost and increasing functionality of consumer electronics, systems integrating commercial-off-the-shelf (COTS) components can deliver higher value at economical prices. The question is how impactful this trend is for surface and underwater systems. Specifically, this thesis addresses the interface between ASVs and AUVs and how low-cost versions can complement each other to provide previously unrealized value. This thesis reviews the marine autonomy field, defines a concept of operations, and analyzes the design tradespace based on multi-attribute utility and complexity. By analyzing the architectural and engineering tradespaces, over 32,000 possible combinations were reduced by 99.9% to identify 30 leading design combinations. The theoretical analysis informed fleet modeling and field testing of a leading design with the Massachusetts Institute of Technology (MIT) Engineering Systems Laboratory’s ASV Platform for Expanding AUV exploRation to Longer ranges (PEARL) on the Charles River in 2023. The fleet modeling identified the non-linear relationship between AUV operational efficiency and percent utilization of the AUVs when serviced by one ASV. The on-water system test was a product of model-based conceptual analysis, autonomy behavior code development, and rapid prototyping, which yielded a successful autonomous docking between PEARL and a dummy AUV. The autonomous docking succeeded on the third attempt, for a 33% success rate. Ultimately, the thesis attempts to show that a low-cost framework allows for non-traditional architectures that can produce value through autonomous ASV and AUV docking.
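A toy sketch of the tradespace pattern used above: enumerate design combinations, score each with a weighted multi-attribute utility, and shortlist the best. The attributes, options, and weights here are invented, and the space is far smaller than the 32,000-combination tradespace analyzed in the thesis.

    from itertools import product

    hulls = ["mono", "catamaran"]
    docking = ["funnel", "arm", "net"]
    autonomy = ["basic", "full"]

    def utility(design):
        # hypothetical single-attribute utilities in [0, 1], weighted and summed
        u_cost = {"mono": 0.9, "catamaran": 0.6}[design[0]]
        u_dock = {"funnel": 0.7, "arm": 0.9, "net": 0.4}[design[1]]
        u_auto = {"basic": 0.5, "full": 0.8}[design[2]]
        return 0.4 * u_cost + 0.4 * u_dock + 0.2 * u_auto

    designs = sorted(product(hulls, docking, autonomy), key=utility, reverse=True)
    print(designs[:3])  # shortlist, analogous to the 30 leading designs

In the thesis, complexity is scored alongside utility, so the shortlist is the set of designs that dominate on both axes rather than a single ranking.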
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility Study of Using Nuclear Batteries in Decentralized Hydrogen Production</title>
<link href="https://hdl.handle.net/1721.1/155592" rel="alternate"/>
<author>
<name>Germonpré, Emile</name>
</author>
<id>https://hdl.handle.net/1721.1/155592</id>
<updated>2024-07-11T03:53:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Feasibility Study of Using Nuclear Batteries in Decentralized Hydrogen Production
Germonpré, Emile
Nuclear batteries (NBs) are a class of factory-fabricated, autonomously operated microreactors that have the potential to form an extremely versatile clean energy platform. However, they have a high levelized cost of electricity (LCOE), so more insights are needed into how to leverage their unique features to make attractive projects. To that goal, this work investigates using NBs in decentralized hydrogen production to better understand their true value proposition and applicability. The work is part of a larger project in which using NBs for offshore power generation is also investigated. Both the hydrogen production and offshore power generation reports are available as CANES publications [1], [2]. &#13;
&#13;
The focus is exclusively on economics, as I do not foresee any technical challenges to this application. By evaluating nearly 100 different projects, I highlight five factors needed for competitiveness, four of which directly impact the cost of hydrogen production, as shown in Figure 1:&#13;
&#13;
1. The facility size, which dilutes the cost of providing site security&#13;
2. The capital cost decrease over time due to the economies of multiples&#13;
3. Policy and regulation, through clean energy subsidies and the requirement of on-site guards&#13;
4. The efficient leveraging of NBs’ high-temperature heat delivery&#13;
&#13;
The fifth factor relates to the benefit of colocating production and demand, which can save on the large hydrogen delivery costs. These delivery cost savings can make the best-performing semi-centralized NB projects competitive with centralized production in contexts where transmission from the centralized plants is not cheap. On the other hand, the distribution cost savings of on-site production are not decisive according to my calculations. However, hydrogen delivery costs are highly context-dependent, so further work is needed to address other delivery contexts – e.g., rural communities – and to better understand under which circumstances NBs can provide significant delivery cost savings.
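The colocation argument can be sketched as a levelized-cost comparison with and without delivery; every number below is a placeholder for illustration, not a figure from the CANES reports.

    # Hedged sketch of a levelized cost of hydrogen (LCOH) comparison;
    # all capex/opex/delivery inputs are invented assumptions.
    def lcoh(capex, opex_per_yr, kg_per_yr, years=20, rate=0.07):
        # capital recovery factor annualizes the upfront investment
        crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
        return (capex * crf + opex_per_yr) / kg_per_yr

    onsite = lcoh(capex=60e6, opex_per_yr=4e6, kg_per_yr=1.5e6)
    central = lcoh(capex=40e6, opex_per_yr=3e6, kg_per_yr=1.5e6) + 1.5  # + $/kg delivery
    print(f"on-site: {onsite:.2f} $/kg, centralized + delivery: {central:.2f} $/kg")

Whether the delivery adder outweighs the NB cost premium is exactly the context-dependent question flagged above.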
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling UO₂ and UN Fuel Fission Gas Release Instances in BISON for Microreactor Applications</title>
<link href="https://hdl.handle.net/1721.1/155591" rel="alternate"/>
<author>
<name>Cunningham, Kaylee</name>
</author>
<id>https://hdl.handle.net/1721.1/155591</id>
<updated>2024-07-11T03:27:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling UO₂ and UN Fuel Fission Gas Release Instances in BISON for Microreactor Applications
Cunningham, Kaylee
Pelletized UO₂ and UN (mononitride) fuel concepts are currently under consideration for microreactor technology. To implement these fuel concepts, the performance of UO₂ and UN under microreactor irradiation conditions must be well understood. One key fuel performance phenomenon is fission gas release, where gaseous fission products are expelled from the fuel pellet to the plenum in the cladding. The fission gas release threshold plot of burnup vs. temperature at 1% release, first introduced by Vitanza and commonly termed the “Vitanza curve,” is of particular interest because it describes when fission gas release begins [1].&#13;
Thus, accurately modeling fission gas release is an active area of research. Though empirical models of UO₂ and UN fission gas release thresholds exist, like those of Vitanza and Wallenius et al., they fail to account for the low linear powers found in microreactor concepts [1, 2]. As a result, BISON, a MOOSE-based (Multiphysics Object Oriented Simulation Environment) fuel performance code, was used to evaluate fission gas release in UO₂ and UN fuels at power levels representative of microreactors. The fission gas release threshold curve was constructed from BISON results for both fuel types, validated against Vitanza and Wallenius et al., and then extended to a third dimension to incorporate power dependency and create 3D surface threshold plots [1]. At both light water reactor and microreactor power levels for UO₂ and UN, BISON calculated the threshold curve as the expected exponential decay, within 100 K of the Vitanza and Wallenius curves, respectively. When fuel surface temperature was gradually increased at a constant low power level, the threshold curve decreased. This was expected, since higher temperatures drive faster gas atom diffusion, which causes fission gas bubbles to form, interconnect into “tunnels,” and release fission gases to the plenum more rapidly [3]. Ultimately, this study demonstrates that the fission gas release threshold is influenced not only by temperature but also by power level. The low power levels associated with microreactor technology delay the onset of fission gas release. When combined with low-temperature operation, UN fuel may produce very minimal, if any, fission gas release. This may lead to enhanced reactor safety and potentially to design and construction cost reductions.
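For orientation, a commonly quoted empirical form of the Vitanza (Halden) threshold is T(C) = 9800 / ln(Bu / 0.005), with burnup Bu in MWd/kgUO2; the short sketch below evaluates only this published curve, not the BISON-derived surfaces of the thesis:

import math

def vitanza_threshold_celsius(burnup_mwd_per_kg):
    # Commonly quoted Halden/Vitanza form of the 1% release threshold.
    return 9800.0 / math.log(burnup_mwd_per_kg / 0.005)

for bu in (5, 10, 20, 30, 40, 50):
    print(f"{bu:>3} MWd/kgUO2 -> {vitanza_threshold_celsius(bu):6.0f} C")

The printed values decay with burnup, matching the exponential-decay shape the BISON results reproduce above.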
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploration of the future enterprise system architecture of the Japanese-origin high-speed railway in Texas.</title>
<link href="https://hdl.handle.net/1721.1/155589" rel="alternate"/>
<author>
<name>Aoshima, Naofumi</name>
</author>
<id>https://hdl.handle.net/1721.1/155589</id>
<updated>2024-07-11T03:50:19Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Exploration of the future enterprise system architecture of the Japanese-origin high-speed railway in Texas.
Aoshima, Naofumi
High-speed rail (HSR) is renowned for its efficiency and environmental advantages, reducing fuel consumption, generating employment, boosting tourism, and mitigating congestion. The Texas HSR project aims to connect Dallas and Houston using Japanese-origin HSR technology. Despite securing regulatory approvals, it faces significant challenges. This thesis examines the project's characteristics, identifies its challenges, and prioritizes future considerations.&#13;
&#13;
The research begins with an overview of the project, exploring common reasons for the failure of large-scale projects, with a focus on demand estimation and organizational design. It then analyzes future demand for Texas HSR using two data sources: the Longitudinal Employer-Household Dynamics program’s Origin-Destination Employment Statistics (LODES) and the Next Generation National Household Travel Survey (NextGen NHTS). LODES data offers insights into workers’ transportation patterns and potential ridership, supporting current estimates but indicating areas for refinement. NextGen NHTS data aids in more precise travel demand modeling. The thesis recommends integrating multiple updated data sources for robust forecasting.&#13;
&#13;
Applying the ARIES framework, the thesis examines the Texas HSR project's enterprise architecture through landscape mapping, stakeholder analysis, and SWOT analysis. Findings suggest that a collaborative Japanese-U.S. system, sharing critical information and expertise, can leverage strengths and opportunities. However, this requires significant effort and coordination due to limited experience and multiple entities, with workforce uncertainty as a risk. Effective collaboration and talent retention are crucial. To address these issues, a survey is conducted, followed by a capture of the envisioned future. The thesis then proposes three alternative architectures; Alternative 3, which consolidates key entities for better resource management, is preferred. It also explores extreme scenarios and recommends a phased implementation plan to ensure smooth transitions and mitigate resistance to change. The thesis concludes with a summary of findings and a discussion of limitations and future work.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting Innovative Engineering Organizations within Large Technology Enterprises Using a Systems Thinking Approach</title>
<link href="https://hdl.handle.net/1721.1/155588" rel="alternate"/>
<author>
<name>Zhou, Bingnan</name>
</author>
<id>https://hdl.handle.net/1721.1/155588</id>
<updated>2024-07-11T03:55:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Architecting Innovative Engineering Organizations within Large Technology Enterprises Using a Systems Thinking Approach
Zhou, Bingnan
The technology industry has experienced significant transformations driven by rapid technological developments, changing market demands, and evolving business models. These changes have led to the creation of new products and services across various segments within the technology industry. With their substantial market value, large enterprises are crucial for driving innovation and boosting the economy. However, they face fierce competition from both established and emerging players, compounded by challenges such as economic uncertainty. Overcoming barriers to innovation is essential. Engineering organizations are the backbone of technology companies, making it vital for large enterprises to design innovative engineering organizations to remain competitive and create real value in the industry.&#13;
The primary objective of this thesis is to investigate key factors, strategies, and approaches that foster an innovative environment to drive organizational innovation. Additionally, it demonstrates how a systems thinking approach can holistically analyze an enterprise and generate crucial considerations for designing future organizational architecture. To achieve this goal, the study begins with a literature review on innovation barriers and generic strategies that might help cultivate an innovative environment. A discussion of approaches drawn from case studies to improve innovative environments is also presented. Based on these strategies and approaches, the study suggests several desired attributes to consider in transforming the organizational architecture for innovation. The study then employs an enterprise architecting framework to holistically analyze an engineering organization within a large technology enterprise. This analysis identifies the emerging stakeholder values the organization may embrace to remain competitive.&#13;
Building on this foundational analysis, the thesis proposes multiple alternative architectures. These architectures are then evaluated to determine their effectiveness, with detailed discussions on important considerations for various potential future scenarios. Finally, the thesis suggests an actionable plan for implementing the new architecture, aiming to create an innovative engineering organization and enhance the enterprise's competitive advantages in the technology industry.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting Absorptive Capacity: Systems Framework for Open Innovation in Japanese Enterprises</title>
<link href="https://hdl.handle.net/1721.1/155587" rel="alternate"/>
<author>
<name>Yukawa, Ayako</name>
</author>
<id>https://hdl.handle.net/1721.1/155587</id>
<updated>2024-07-11T03:47:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Architecting Absorptive Capacity: Systems Framework for Open Innovation in Japanese Enterprises
Yukawa, Ayako
As Japan faces challenges in maintaining its global innovation leadership, this thesis explores the potential of collaborative R&amp;D between large Japanese firms and external actors to drive innovation through open innovation practices. The research focuses on absorptive capacity, defined as an organization's ability to recognize, assimilate, and utilize external knowledge, as a critical factor in successfully implementing outside-in open innovation. To address the gaps between academic research and real-world implementation of open innovation, the thesis develops a systems framework for understanding and designing absorptive capacity in the context of large Japanese firms. Using a systems architecture approach and conducting case studies of five Japanese companies recognized as high-performing innovators, the research identifies four main capabilities constituting absorptive capacity: management, recognition, assimilation, and exploitation. The framework maps these capabilities to specific architectural decisions and options, linking the theoretical understanding of absorptive capacity as a system to practical choices in designing a firm's absorptive capability. The analysis also reveals the significant influence of management capability on the recognition and assimilation capabilities, and the importance of organizational structure and needs assessment as architectural decisions driving absorptive capacity. This thesis is expected to contribute to both academic discourse and practical implementation, extending previous perspectives on absorptive capacity and providing actionable guidance for designing and managing open innovation initiatives for large Japanese firms and policymakers. While limitations of this research include the potential lack of comprehensiveness in architectural decisions and the subjectivity in case study selection, this thesis will serve as a foundation for future studies on establishing Japan's competitive innovation ecosystem on a global scale.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of osmotic shock on the release of acid phosphatase activity from Streptococcus mutans</title>
<link href="https://hdl.handle.net/1721.1/155574" rel="alternate"/>
<author>
<name>Fleisher, Michael Howard.</name>
</author>
<id>https://hdl.handle.net/1721.1/155574</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">The effect of osmotic shock on the release of acid phosphatase activity from Streptococcus mutans
Fleisher, Michael Howard.
The "osmotic shock" treatment of bacterial cells has proven to be an effective procedure for releasing hydrolytic enzymes located in the periplasmic compartment of the cell. The object of the present study was to examine the mechanism of the "osmotic shock" procedure on the Streptococcus mutans strain PR-89 and to measure the acid phosphatase activity of the shocked cells. This enzyme could be a primary etiological agent in dental caries formation, and a simple method of releasing the enzyme would greatly facilitate the characterization of its properties. During the course of the study several important characteristics of the enzyme were observed. First, the enzyme activity increases linearly with the growth of the bacteria. Secondly, in the presence of inorganic phosphate the enzyme is observably repressed. Finally, during bacterial growth in minimal media supplemented with various concentrations of phosphate, the enzyme behaves constitutively. The "osmotic shock" procedure allowed a limited examination of the properties of the acid phosphatase enzyme produced by Streptococcus mutans. However, the enzyme activity was not successfully separated from the bacterial cells, so its release could not be proven.
Thesis: S.M., Massachusetts Institute of Technology, Department of Nutrition and Food Service, 1973; Cataloged from pdf of print version of thesis. "February, 1974, i.e. Sept. 1973." Vita.; Includes bibliographical references (pages 46-49).
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Process and product models for collaborative design</title>
<link href="https://hdl.handle.net/1721.1/155571" rel="alternate"/>
<author>
<name>Gross, Miriam Eva.</name>
</author>
<id>https://hdl.handle.net/1721.1/155571</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Process and product models for collaborative design
Gross, Miriam Eva.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Environmental Engineering, 1992; Includes bibliographical references (leaves 158-162).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effective prototyping during product development</title>
<link href="https://hdl.handle.net/1721.1/155570" rel="alternate"/>
<author>
<name>Griesser, Hans Patrick.</name>
</author>
<id>https://hdl.handle.net/1721.1/155570</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Effective prototyping during product development
Griesser, Hans Patrick.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references (leaves 93-97).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Color image enhancement</title>
<link href="https://hdl.handle.net/1721.1/155568" rel="alternate"/>
<author>
<name>Marshall, Shelley E.</name>
</author>
<id>https://hdl.handle.net/1721.1/155568</id>
<updated>2025-10-30T17:51:30Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">Color image enhancement
Marshall, Shelley E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1984; Includes bibliographical references.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of the capital structure of major competitors in the telecommunications industry</title>
<link href="https://hdl.handle.net/1721.1/155567" rel="alternate"/>
<author>
<name>Marshall, Nelson W.</name>
</author>
<id>https://hdl.handle.net/1721.1/155567</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">A study of the capital structure of major competitors in the telecommunications industry
Marshall, Nelson W.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1984; Bibliography: leaves 128-129.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An air-preheating system for blast furnaces</title>
<link href="https://hdl.handle.net/1721.1/155564" rel="alternate"/>
<author>
<name>McPeak, Mark Allan.</name>
</author>
<id>https://hdl.handle.net/1721.1/155564</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1984-01-01T00:00:00Z</published>
<summary type="text">An air-preheating system for blast furnaces
McPeak, Mark Allan.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1984; Bibliography: leaves 177-180.
</summary>
<dc:date>1984-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effect of vitamin A on plasma glycoproteins</title>
<link href="https://hdl.handle.net/1721.1/155562" rel="alternate"/>
<author>
<name>Kiorpes, Timothy Charles.</name>
</author>
<id>https://hdl.handle.net/1721.1/155562</id>
<updated>2025-10-30T17:03:41Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">The effect of vitamin A on plasma glycoproteins
Kiorpes, Timothy Charles.
The rate of uptake of label from radioactive D-glucosamine and D-mannose into the plasma glycoproteins was studied in vitamin A-deficient rats by comparison with the plasma of normal pair-fed controls. Preliminary studies indicated that peak incorporation (specific activity) was reached three hours after intraperitoneal injection with labelled sugar in both vitamin A-deficient and normal rat plasma. Normal-deficient pairs were injected with the same sugar, labelled with a different isotope, and their plasma was mixed based on equal amounts of protein and fractionated on DEAE-Sephadex A-50. There was a consistent decrease in radioactivity observed in what appeared to be the alpha₁ peak in vitamin A-deficiency. This depression was on the order of 30% when normal and deficient peak totals were compared. This effect appeared with mannose and glucosamine and was of equal magnitude for both sugars. Fractionation of this peak by gel filtration showed that most of the radioactivity was associated with one glycoprotein, which was homogeneous in 5% polyacrylamide gel electrophoresis; the molecular weight of this glycoprotein was estimated to be on the order of 1 x 10⁶ from its behavior on Sepharose 6B. The decrease in the incorporation of label into this peak was interpreted as representing a decreased synthesis rate in vitamin A-deficiency. A shift in the position of the peaks occurred on DEAE-Sephadex in two fractionations of glucosamine-labelled plasma. The vitamin A-deficient plasma glycoproteins were eluted slightly later than those from normal plasma, indicating either a higher negative charge or a lower molecular weight in deficiency. This effect was not investigated. However, its failure to be expressed during gel filtration and its reappearance in electrophoresis suggested that charge differences were responsible for this shift.
Thesis: S.M., Massachusetts Institute of Technology, Department of Nutrition and Food Service, 1973; Cataloged from pdf of print version of thesis.; Includes bibliographical references (pages 73-76).
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of several distance measures for segmentation and isolated word recognition</title>
<link href="https://hdl.handle.net/1721.1/155536" rel="alternate"/>
<author>
<name>Brown, Ralph W.</name>
</author>
<id>https://hdl.handle.net/1721.1/155536</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">Comparison of several distance measures for segmentation and isolated word recognition
Brown, Ralph W.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1982; Includes bibliographical references (leaves 101-103).
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The design of lumber producing machinery</title>
<link href="https://hdl.handle.net/1721.1/155534" rel="alternate"/>
<author>
<name>Keller, Robert E.</name>
</author>
<id>https://hdl.handle.net/1721.1/155534</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1957-01-01T00:00:00Z</published>
<summary type="text">The design of lumber producing machinery
Keller, Robert E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1957
</summary>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The design and development of a rosette extensometer of small gage length</title>
<link href="https://hdl.handle.net/1721.1/155533" rel="alternate"/>
<author>
<name>Bulkeley, Peter Zane.</name>
</author>
<id>https://hdl.handle.net/1721.1/155533</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1957-01-01T00:00:00Z</published>
<summary type="text">The design and development of a rosette extensometer of small gage length
Bulkeley, Peter Zane.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1957; Includes bibliographies.
</summary>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forces in single grit grinding</title>
<link href="https://hdl.handle.net/1721.1/155532" rel="alternate"/>
<author>
<name>Brown, Robert Hallowes.</name>
</author>
<id>https://hdl.handle.net/1721.1/155532</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1957-01-01T00:00:00Z</published>
<summary type="text">Forces in single grit grinding
Brown, Robert Hallowes.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1957; Bibliography: leaves 63-65.
</summary>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geometrical analysis of grinding</title>
<link href="https://hdl.handle.net/1721.1/155530" rel="alternate"/>
<author>
<name>Kalpakcioglu, Serope,
            1928-</name>
</author>
<id>https://hdl.handle.net/1721.1/155530</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1953-01-01T00:00:00Z</published>
<summary type="text">Geometrical analysis of grinding
Kalpakcioglu, Serope,
            1928-
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1953; Bibliography: leaf 55.
</summary>
<dc:date>1953-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a new impact tester for Gillette safety razor blades</title>
<link href="https://hdl.handle.net/1721.1/155529" rel="alternate"/>
<author>
<name>Kubick, Harry.</name>
</author>
<id>https://hdl.handle.net/1721.1/155529</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1951-01-01T00:00:00Z</published>
<summary type="text">Design of a new impact tester for Gillette safety razor blades
Kubick, Harry.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1951; Bibliography: leaf 41.
</summary>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A technique for scheduling patients to specialist consultations in a group practice.</title>
<link href="https://hdl.handle.net/1721.1/155526" rel="alternate"/>
<author>
<name>Lynn, Jeffrey Mark.</name>
</author>
<id>https://hdl.handle.net/1721.1/155526</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">A technique for scheduling patients to specialist consultations in a group practice.
Lynn, Jeffrey Mark.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Alternative techniques for modeling travel distance.</title>
<link href="https://hdl.handle.net/1721.1/155524" rel="alternate"/>
<author>
<name>Vaccaro, Henry Sebastian.</name>
</author>
<id>https://hdl.handle.net/1721.1/155524</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1974-01-01T00:00:00Z</published>
<summary type="text">Alternative techniques for modeling travel distance.
Vaccaro, Henry Sebastian.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1974; Bibliography: leaves 246-249.
</summary>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sticks : a new approach to LSI design.</title>
<link href="https://hdl.handle.net/1721.1/155523" rel="alternate"/>
<author>
<name>Williams, John Douglas,
            1944-</name>
</author>
<id>https://hdl.handle.net/1721.1/155523</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Sticks : a new approach to LSI design.
Williams, John Douglas,
            1944-
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1977; Bibliography: leaves 143-144.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Classical solutions in bag theory.</title>
<link href="https://hdl.handle.net/1721.1/155522" rel="alternate"/>
<author>
<name>Lee, Sylvester.</name>
</author>
<id>https://hdl.handle.net/1721.1/155522</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">Classical solutions in bag theory.
Lee, Sylvester.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1975; Includes bibliographical references.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Climate Change and Aging: analyzing the disproportionate health and socioeconomic vulnerabilities of older adults in relation to the climate crisis in the U.S.</title>
<link href="https://hdl.handle.net/1721.1/155513" rel="alternate"/>
<author>
<name>McVay, Katelyn R.</name>
</author>
<id>https://hdl.handle.net/1721.1/155513</id>
<updated>2024-07-09T03:50:05Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Climate Change and Aging: analyzing the disproportionate health and socioeconomic vulnerabilities of older adults in relation to the climate crisis in the U.S.
McVay, Katelyn R.
Climate change has exacerbated the extreme highs and lows of temperature throughout the United States. While climate change-related temperature changes have impacted the entire population, certain demographic groups bear more of the burden than others. In particular, older adults (those aged 65+) may be especially at risk due to their overall increased morbidity and mortality rates. Older adults can escape the outdoor temperatures at home through home energy use. However, older adults living at or below the poverty level may not be able to manage the associated costs of home energy usage. This research builds upon previous work on climate justice by assessing the additive components of poverty, home-living status, and energy costs on the resilience of older adults who reside in their own homes at the national level. This paper aims to identify significant locations in the United States where older adults may be most impacted by temperature extremes and which older populations experience the most energy cost burdens. Through the development of an energy cost and climate risk index, this research aims to identify which places in the U.S. may pose the greatest risks to older Americans’ health and financial stability. Significant findings for both cold waves and heat waves include strong positive relationships between overall extreme temperature risk and annual energy cost burdens, which signify a need to subsidize and assist with energy expenses in particularly vulnerable locations. This research contributes a more precise evaluation of the issue and emphasizes the need to localize and focus on specific populations and their unique risk factors, since prior spatial research covers a broad range of populations and vulnerabilities, making data interpretation less specific.
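As a sketch of how such a composite index can be assembled, the snippet below min-max normalizes a few indicators and combines them with weights; the indicator names, values, and weights are illustrative assumptions, not the thesis's data:

import pandas as pd

df = pd.DataFrame({
    "county": ["A", "B", "C"],
    "extreme_temp_days": [42, 18, 30],         # heat plus cold wave days (assumed)
    "energy_cost_burden": [0.09, 0.04, 0.06],  # share of income, 65+ households (assumed)
    "poverty_rate_65plus": [0.15, 0.07, 0.11], # assumed
}).set_index("county")

# Min-max normalize each indicator to [0, 1], then take a weighted sum.
normalized = (df - df.min()) / (df.max() - df.min())
weights = {"extreme_temp_days": 0.4, "energy_cost_burden": 0.4,
           "poverty_rate_65plus": 0.2}
df["risk_index"] = sum(w * normalized[c] for c, w in weights.items())
print(df["risk_index"].sort_values(ascending=False))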
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>IoT at Amgen - Evaluating and Piloting Industry 4.0 Technology in Biomanufacturing</title>
<link href="https://hdl.handle.net/1721.1/155512" rel="alternate"/>
<author>
<name>Hosinski, Grant</name>
</author>
<id>https://hdl.handle.net/1721.1/155512</id>
<updated>2024-07-09T03:08:50Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">IoT at Amgen - Evaluating and Piloting Industry 4.0 Technology in Biomanufacturing
Hosinski, Grant
The advent of the oft-cited Fourth Industrial Revolution, or Industry 4.0, has enabled the widespread use of Internet of Things (IoT) technologies, namely networks of wireless sensors and actuators, in industrial manufacturing processes. While Industry 4.0 purports to usher in the next generation of smart factories, traditional manufacturing facilities may also stand to benefit by selectively adopting IoT technology to augment mature manufacturing processes. Amgen, a global leader in the production of life-saving biopharmaceuticals, has previously supported IoT-based solutions to provide new capabilities within existing biomanufacturing practices. However, selecting and prioritizing potential IoT investments, especially given the mature wired instrumentation infrastructure of Amgen’s manufacturing facilities, remains a challenge. This thesis examines the adoption of IoT technology at Amgen through two distinct lenses. First, an evaluative framework to aid Amgen’s decision-making process surrounding IoT investments is presented. Next, a small-scale IoT device is designed and implemented. The device hosts an artificial intelligence model which, in real time, detects and alerts personnel to glass-break events during Amgen biomanufacturing processes. Both initiatives shed light on Amgen’s technical capacity for integrating IoT technology and its willingness to adopt it, in addition to creating value within Amgen’s biomanufacturing operations.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A System Dynamics Approach to Analyzing U.S. Army Officer Talent Retention</title>
<link href="https://hdl.handle.net/1721.1/155511" rel="alternate"/>
<author>
<name>Dulce II, Richie</name>
</author>
<id>https://hdl.handle.net/1721.1/155511</id>
<updated>2024-07-09T03:44:35Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A System Dynamics Approach to Analyzing U.S. Army Officer Talent Retention
Dulce II, Richie
This thesis investigates the retention of U.S. Army officers through a system dynamics approach, aiming to address the complex factors influencing officers' decisions to remain in or leave the service. By conducting a comprehensive literature review and synthesizing data from various secondary source surveys, key variables impacting retention were identified. These variables were integrated into a qualitative system dynamics model to reveal the intricate feedback loops and interdependencies affecting retention. The qualitative model serves as a foundation for proposing policy recommendations designed to improve officer retention rates by addressing systemic issues and enhancing overall career satisfaction. The insights gained from this research highlight the importance of a holistic and interconnected approach to policy development, emphasizing the need for sustained efforts to stabilize and improve the retention system in the Army.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sample Efficient Reinforcement Learning with Partial Dynamics Knowledge</title>
<link href="https://hdl.handle.net/1721.1/155510" rel="alternate"/>
<author>
<name>Alharbi, Meshal</name>
</author>
<id>https://hdl.handle.net/1721.1/155510</id>
<updated>2024-07-09T03:22:40Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Sample Efficient Reinforcement Learning with Partial Dynamics Knowledge
Alharbi, Meshal
The problem of sample complexity of online reinforcement learning is often studied in the literature without taking into account any partial knowledge about the system dynamics that could potentially accelerate the learning process. In this thesis, we study the sample complexity of online Q-learning methods when some prior knowledge about the dynamics is available or can be learned efficiently. We focus on systems that evolve according to an additive disturbance model where the underlying dynamics are described by a deterministic function of states and actions, along with an unknown additive disturbance that is independent of states and actions. In the setting of finite Markov decision processes, we present an optimistic Q-learning algorithm that achieves Õ(√T) regret without polynomial dependency on the number of states and actions under perfect knowledge of the dynamics function. This is in contrast to the typical Õ(√SAT) regret for existing Q-learning methods. Further, if only a noisy estimate of the dynamics function is available, our method can learn an approximately optimal policy in a number of samples that is independent of the cardinalities of state and action spaces. The sub-optimality gap depends on the approximation error of the noisy estimate, as well as the Lipschitz constant of the corresponding optimal value function. Our approach does not require modeling of the transition probabilities and enjoys the same memory complexity as model-free methods.
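A toy sketch of the additive-disturbance idea (not the thesis's algorithm): because the disturbance is independent of states and actions, one observed disturbance sample can drive a Q-update at every (state, action) pair through the known dynamics function, which is intuitively why the S and A cardinalities drop out:

import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, alpha = 10, 2, 0.9, 0.1

def f(s, a):
    # Known deterministic part of the dynamics: move left or right on a ring.
    return (s + (1 if a == 1 else -1)) % S

def reward(s, a):
    return 1.0 if s == S - 1 else 0.0

Q = np.zeros((S, A))
s = 0
for t in range(20_000):
    a = int(Q[s].argmax()) if rng.random() > 0.1 else int(rng.integers(A))
    w = int(rng.integers(-1, 2))          # unknown additive disturbance
    s_next = (f(s, a) + w) % S
    # Replay the single observed disturbance through EVERY (state, action):
    for ss in range(S):
        for aa in range(A):
            ns = (f(ss, aa) + w) % S
            target = reward(ss, aa) + gamma * Q[ns].max()
            Q[ss, aa] += alpha * (target - Q[ss, aa])
    s = s_next

print(np.round(Q.max(axis=1), 2))   # learned state values under greedy policy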
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Refugee housing in the United States: improving the Refugee-Welcoming Rental Market</title>
<link href="https://hdl.handle.net/1721.1/155504" rel="alternate"/>
<author>
<name>Landis, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/155504</id>
<updated>2024-07-09T03:35:19Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Refugee housing in the United States: improving the Refugee-Welcoming Rental Market
Landis, Joseph
In this thesis, I explore the vital connection between the US Reception &amp; Placement (USRAP) refugee resettlement program and the health of the US’s refugee-welcoming rental market (RWRM). I focus on USRAP because it plays a unique and vital role in the international protection system for refugees and because the Biden Administration pledged in 2021 to scale the system’s capacity to resettle refugees back up to pre-2017 levels. The housing system within USRAP is understudied compared to those of much-smaller refugee resettlement programs in Europe, Australia and Canada. Recently, however, housing access has imposed an unignorable operational parameter on USRAP as evidenced by resettlement agency (RA) implementation offices falling more than 50% short of housing placement targets in FY 2023. In this thesis, I engage with the issue of USRAP-based refugee housing from a constructivist perspective, identifying and analyzing the systems and processes that shape a prospective RWRM match to search for practical changes that may lead to improvement. Informed by both a desk review and field research with RWRM stakeholders, I present a fictional narrative case study with two scenarios illustrating two frequent stories of RWRM matching in the housing journey of USRAP clients. The case shows that a wide variety of factors can drastically impair the rent capacity and, by extension, the open-market matching prospects for RWRM households. I then explore other key issues experienced on either side of the RWRM when matching and identify three major challenges: guaranteeing unit-tenant fit, managing risk perception amongst landlords, and streamlining the RWRM tenant placement process. To improve efficiency in tenant placement, resettlement agencies can harness better information systems and adopt clearer processes when liaising with landlords. Addressing the gaps in finances or knowledge that impede unit-tenant fit and landlords’ perceptions of renting to refugees must involve fostering partnerships with third party service providers. I identify six opportunities for stakeholders and partners to fortify the RWRM, and I consider the role of a social impact start-up called ReHome that I founded in 2023 to serve as a marketplace platform bringing new RWRM partnerships together. Finally, I consider what additional possibilities might open up within local rental markets if the government were to orient USRAP rental housing access toward non-market housing providers. USRAP has a unique ability to optimize the initial access step of the housing journey of the individuals it resettles because it is the global resettlement program that is most involved in the practicalities of rental market matches and because it is insulated geographically from refugee-producing countries.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PFAS and the Future of Rural Land</title>
<link href="https://hdl.handle.net/1721.1/155503" rel="alternate"/>
<author>
<name>Simon, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/155503</id>
<updated>2024-07-09T03:14:43Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">PFAS and the Future of Rural Land
Simon, Sarah
In this thesis, I will explain how the problem of PFAS chemical contamination on farmland in the United States emerges from and connects with other environmental, agricultural and economic challenges in rural areas. Using Maine as a case study, I will evaluate the policy response that took place in 2021-23, and consider what lessons can be learned from Maine’s approach that might apply at a national scale. The thesis concludes with a description of possible futures for contaminated land as a way of exploring the implications of different approaches to dealing with the contamination.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantification of Elastic Incompatibilities at Triple Junctions via Physics-Based Surrogate Models</title>
<link href="https://hdl.handle.net/1721.1/155502" rel="alternate"/>
<author>
<name>Rau, Aaditya</name>
</author>
<id>https://hdl.handle.net/1721.1/155502</id>
<updated>2024-07-09T03:54:38Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Quantification of Elastic Incompatibilities at Triple Junctions via Physics-Based Surrogate Models
Rau, Aaditya
Stresses at grain boundaries resulting from elastic incompatibilities have long been known to drive the premature failure and loss of desirable macroscopic properties in polycrystalline materials. As a result, there have been significant efforts in the field of grain boundary engineering to understand the sources of grain boundary incompatibilities in polycrystals and potential mitigation strategies through microstructure manipulation. Thus, understanding the relationship between grain incompatibility and failure is important for the practical use of polycrystalline materials. Surrogate models based on machine learning methods have gained broad popularity due to their ability to furnish a functional, albeit approximate, description of complex phenomena. The goal of this thesis is to predict quantitative metrics of incompatibility from various triple junction configurations using a surrogate model. High-fidelity finite element simulations of a cubic-crystal triple junction under hydrostatic extension were used to generate a synthetic dataset for training the surrogate model. A set of 𝐽 integrals computed around microcracks placed along the triple junction boundaries were used to quantify the elastic incompatibilities between the grains. A multi-layer perceptron network was trained using the grain rotation angles and 𝐽 integrals as the feature and label data, respectively. We demonstrate that the trained network establishes an accurate functional dependence between the triple junction angles and the 𝐽 integrals. We use the surrogate model to efficiently sweep the configuration space and create contour maps of the largest stress intensification at the triple junction as a function of the grain rotation angles. Furthermore, we show that the surrogate model can be utilized to identify the most and least compatible triple junction configurations via optimization. These configurations are then compared to those identified as favorable through the theory of coincident site lattices.
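A minimal surrogate-model sketch in this spirit, with a synthetic response standing in for the finite element results:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
angles = rng.uniform(0, np.pi / 2, size=(2000, 3))   # one rotation per grain
# Hypothetical smooth response standing in for the FEM-computed J integrals:
j_vals = (np.sin(angles[:, 0] - angles[:, 1]) ** 2
          + 0.5 * np.cos(angles[:, 2])
          + 0.01 * rng.standard_normal(2000))

X_tr, X_te, y_tr, y_te = train_test_split(angles, j_vals, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", round(mlp.score(X_te, y_te), 3))

# Cheap configuration-space sweep, as in the contour maps described above:
grid = np.stack(np.meshgrid(*[np.linspace(0, np.pi / 2, 25)] * 3),
                axis=-1).reshape(-1, 3)
worst = grid[mlp.predict(grid).argmax()]
print("least compatible configuration (rad):", np.round(worst, 2))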
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Dark Patterns in UI/UX Elements of Digital Platforms</title>
<link href="https://hdl.handle.net/1721.1/155501" rel="alternate"/>
<author>
<name>Jain, Anukriti</name>
</author>
<id>https://hdl.handle.net/1721.1/155501</id>
<updated>2024-07-09T03:03:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Analysis of Dark Patterns in UI/UX Elements of Digital Platforms
Jain, Anukriti
As of January 2024, 5.35 billion people (66.2 percent of the world’s population) are using the internet. As the number of internet users grows and the attention span of an internet user has dropped to 8 seconds, one of the challenges that digital businesses face is acquiring, engaging, and retaining users for their products and services. Some companies employ user interface elements on their websites or apps to trick users into signing up or buying a product. Dark patterns are the tricks used by apps and websites that push users into doing things they didn’t intend to, like signing up for a service or making a purchase.&#13;
&#13;
This thesis covers different types of dark patterns, including roach motel, malicious nudging, urgency/scarcity, bait and switch, and confirm-shaming. Dark patterns are also organized into “pressure” and “trickery” categories. Companies leverage dark patterns to meet their business goals, but it is critical to understand the long-term impact of using them. This thesis explores the possibility of helping users find these patterns and become vigilant about them. These deceptive patterns are common in web flows and are not easily detectable by many people visiting websites, so there is a need for an intervention that raises awareness of dark patterns. This thesis aims to make users aware of dark patterns by building a Chrome extension that focuses users' attention on the information provided. We first focus on developing a Chrome extension for detecting scarcity/urgency dark patterns.
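The detection logic for scarcity/urgency patterns can be sketched as simple pattern matching over page text; the phrase list below is an assumption for illustration, and a real extension would run equivalent logic in a content script:

import re

# Phrase list is an assumption for illustration, not taken from the thesis.
URGENCY_PATTERNS = [
    r"only \d+ left( in stock)?",
    r"\d+ (?:other )?people are (?:viewing|looking at) this",
    r"(?:offer|sale|deal) ends (?:in|soon|today)",
    r"hurry|limited time|selling fast|almost gone",
]

def flag_urgency(page_text):
    """Return matched snippets so a UI overlay could highlight them."""
    hits = []
    for pattern in URGENCY_PATTERNS:
        hits += [m.group(0) for m in re.finditer(pattern, page_text, re.I)]
    return hits

print(flag_urgency("Hurry! Only 3 left in stock. Deal ends today."))
# -> ['Only 3 left in stock', 'Deal ends today', 'Hurry']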
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Climate Change through a Community Definition of Resilience: Qualitative Analysis of Interviews and Implications for Practice</title>
<link href="https://hdl.handle.net/1721.1/155493" rel="alternate"/>
<author>
<name>Nakagawa, Anisha Patil</name>
</author>
<id>https://hdl.handle.net/1721.1/155493</id>
<updated>2024-07-09T03:45:29Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Understanding Climate Change through a Community Definition of Resilience: Qualitative Analysis of Interviews and Implications for Practice
Nakagawa, Anisha Patil
This thesis explores how residents in low-income, rapidly gentrifying neighborhoods conceptualize resilience to climate change and what responses are desired. As part of a Participatory Action Research study in Eastern Massachusetts, I analyzed de-identified interviews with residents and engaged in collaborative data analysis sessions with Resident Researchers. Residents in these communities experience climate change through chronic stressors, mainly through heat, high utility bills, and flooding. They connect climate resilience to other stressors in their lives like displacement, structural racism, and trauma, and they see strong community ties as a key piece of resilience. Based on this research, responses to climate change need to consider the root causes of unjust systems, respond to the co-stressors in people’s lives, and have community ownership and control in order to be most effective.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ingestible Electronics for High Quality Gastric Neural Recordings</title>
<link href="https://hdl.handle.net/1721.1/155492" rel="alternate"/>
<author>
<name>Gierlach, Adam Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/155492</id>
<updated>2024-07-09T03:06:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Ingestible Electronics for High Quality Gastric Neural Recordings
Gierlach, Adam Matthew
Recent advances in understanding the gut-brain axis, functional gastrointestinal disorders, and gastric stimulation therapies have highlighted the importance of the electrical signals that regulate the gastrointestinal (GI) tract. Current systems for measuring neural signals from the GI tract involve either acute, invasive procedures that change the underlying electrical behaviors or cutaneous recordings that measure highly attenuated signals. This thesis describes the development of a non-invasive device for long-term gastric recordings in freely moving patients, known as Multimodal Electrophysiology via Ingestible Gastric Untethered Tracking (MiGUT). The custom device and electrodes are designed to conform to the stomach wall and wirelessly transmit high-quality signals, all while fitting in an ingestible form capable of being easily delivered into the GI tract. MiGUT is shown to record the gastric slow wave in vivo in pigs, along with signals that align with the heart and respiratory rates, and to measure the expected response to prokinetic therapeutics. Multi-day measurements were obtained using MiGUT in a freely moving pig, recording changes in the slow wave during different behaviors, with no artifacts observed during ingestion or movement. This type of data could enable a new level of understanding of one’s GI tract for health tracking and personalized diagnostics.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling the Affect of “Aha!" Moments to Detect the Moment of Learning</title>
<link href="https://hdl.handle.net/1721.1/155491" rel="alternate"/>
<author>
<name>Adler, Eden</name>
</author>
<id>https://hdl.handle.net/1721.1/155491</id>
<updated>2024-07-09T03:30:05Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling the Affect of “Aha!" Moments to Detect the Moment of Learning
Adler, Eden
What if a model could pinpoint the exact moment of learning? Currently, the only way we can understand when someone has learned is by testing them afterwards, which has its limitations. In attempts to detect the moment of learning, researchers from various fields have leveraged data from methods such as Knowledge Tracing (KT) and Electroencephalograms (EEGs) to predict students’ knowledge acquisition. These methods have contributed to improving our understanding of knowledge, but not only do they fall short of detecting the exact moment of learning, they also interfere with natural learning interactions by requiring students to wear sensors or type as they learn. Often, modeling learning does not include affect and emotion data, which are key influencers of learning outcomes. One affective expression that is often observed by educators, and has evaded quantification attempts by researchers, is the moment everything suddenly clicks for the student: the “Aha!” moment. Using classroom video data of students experiencing “Aha!” moments, we created dynamic, functional handcrafted features representing face and body position and used them to model students’ facial expressions. We then leveraged feature selection methods and statistical analysis to contribute a novel, explainable definition of the observable, affective markers of “Aha!” moments, unlocking the opportunity to use the “Aha!” moment as a signal for detecting the moment of learning. These results invite future interdisciplinary research efforts as well as applications in fields such as artificial intelligence, human-robot interaction, education, psychology, the cognitive sciences, and more.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analytics for Healthcare Operations: Machine Learning to Improve Emergency Department Patient Flow</title>
<link href="https://hdl.handle.net/1721.1/155489" rel="alternate"/>
<author>
<name>Kyle, Thomas D.</name>
</author>
<id>https://hdl.handle.net/1721.1/155489</id>
<updated>2024-07-09T03:48:57Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Analytics for Healthcare Operations: Machine Learning to Improve Emergency Department Patient Flow
Kyle, Thomas D.
Over the last several years, the Emergency Department (ED) at Massachusetts General Hospital (MGH) has been experiencing a significant increase in demand for hospital services. Overcrowding in the ED and high utilization of inpatient floors are symptoms of this increase in demand. Chapter 2 shows that data available at the time of an inpatient bed request can be used to prospectively identify ED patients who are sufficiently sick to require hospitalization, but are likely to be discharged within 2 nights of the admission decision. The resulting XGBoost classification model is being implemented as a decision support tool for clinicians deciding whether to send this cohort of short-stay (SS) patients to a short-stay unit (SSU). The SSU would allow for more effective and timely care of this class of patients, thus helping to alleviate both ED overcrowding and inpatient floor utilization. The model exhibits an out-of-sample AUC of 0.81, and its scores are inversely correlated with the observed length of stay (LOS), as desired. Chapter 3 then investigates a generic service system that captures typical healthcare settings, in which a hospital must manage bed assignment in the face of bed requests from patients with different characteristics. The service system (e.g., a hospital) must decide whether to accept or reject service requests instantaneously. The work describes an approximate dynamic programming approach to solve for admission control policies that consider LOS forecasts in admission decisions. The resulting LOS-considerate policy with perfect LOS forecasts allows the generic hospital to increase its daily revenue (or other value-based metric) by 5.5% compared to a policy that does not consider LOS forecasts. This value added increases as the LOS forecasts become more accurate, illustrating the benefits of using LOS forecasts in hospital resource allocation decisions and of investing in accurate LOS forecasting.
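A minimal sketch of this kind of classifier, with synthetic data and hypothetical features standing in for the information available at bed-request time:

import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(18, 95, n),      # age at bed request (assumed feature)
    rng.integers(0, 5, n),        # triage acuity score (assumed scale)
    rng.integers(0, 12, n),       # number of labs ordered (assumed)
    rng.random(n),                # vitals-derived risk proxy (assumed)
])
# Synthetic label: "discharged within 2 nights of the admission decision"
p = 1.0 / (1.0 + np.exp(0.03 * (X[:, 0] - 60) + X[:, 1] - 2 - 2 * X[:, 3]))
y = (p > rng.random(n)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      eval_metric="logloss").fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print("out-of-sample AUC:", round(auc, 3))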
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System-Theoretic Process Analysis of a Novel Airborne Laser Communication System</title>
<link href="https://hdl.handle.net/1721.1/155488" rel="alternate"/>
<author>
<name>Bishop, Brittany E.</name>
</author>
<id>https://hdl.handle.net/1721.1/155488</id>
<updated>2024-07-09T03:12:43Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">System-Theoretic Process Analysis of a Novel Airborne Laser Communication System
Bishop, Brittany E.
As the military strives to create a more robust battle network, laser communication offers many advantages such as supporting more secure and efficient data sharing. For this reason, interest has grown in recent years in implementing lasercom as a means for intra-aircraft communication. However, many challenges unique and inherent to lasercom such as stringent line-of-sight and pointing requirements and susceptibility to atmospheric degradation lead to difficulties in implementation. Consequently, establishing and maintaining lasercom links in the dynamic environment of flight will require seamless coordination between aircraft. The complexity and novelty of such a system warrant a hazard analysis technique that can fully address the associated challenges of collaboration while the system is in an early concept phase of design. System-Theoretic Process Analysis (STPA) is a proactive hazard analysis technique rooted in Systems Theory. While more traditional hazard analysis methods evaluate the safety of system components individually, STPA provides guidance to analyze systems holistically, thus supporting the identification of emergent behaviors that arise due to component interactions.  Recently, STPA has been extended to address hazards specifically associated with collaboration of multiple controllers providing shared control over a physical process. This extension known as STPA-Teaming provides a methodology to analyze unsafe combinations of control actions that may lead to system losses. The method allows for the systematic identification of causal factors related to coordination that are likely to be missed by more traditional hazard analysis techniques. Because this approach relies on abstraction and includes human operators along with software and hardware components, it is well-suited for novel, complex systems.  This thesis applies STPA and its extension, STPA-Teaming, to an early concept airborne lasercom system to identify scenarios in which loss of communication may occur. As a result, it identifies scenarios related not only to individual component failures and unsafe internal control, but also related to flaws in coordination of multiple controllers. The output of the analysis is system recommendations that can support the remainder of the systems engineering process including generation of system requirements, definition of system concept of operations (ConOps) and system architecture, and system validation and verification (V&amp;V). In this way, the results of the analysis provide a baseline level of traceability for future design decisions to manage the emergent behavior of the system and ultimately prevent mission losses.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Evaluation of Contrail Models</title>
<link href="https://hdl.handle.net/1721.1/155484" rel="alternate"/>
<author>
<name>Xu, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/155484</id>
<updated>2024-07-09T03:28:30Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Development and Evaluation of Contrail Models
Xu, Michael
Condensation trails (contrails) are aircraft-induced ice clouds that are estimated to account for up to 50% of aviation’s climate impacts. Uncertainties in the impact of individual contrails have motivated the development of contrail models, such as CoCiP, a 0-D rapid assessment model, and APCEMM, a 2-D model with detailed ice microphysics. However, there are gaps within the current contrail modeling literature. There is no model both sufficiently fast for rapid assessment of contrail impacts and detailed in its ice microphysics modeling. There are few studies calibrating and validating the performance of contrail models on individual flights. The absolute and relative magnitudes of errors due to weather data uncertainty and errors due to modeling assumptions have not been extensively studied, despite many studies relying on the CoCiP model and the ERA5 weather data for their analyses. This thesis addresses these gaps. The APCEMM model is optimized to achieve a decrease in runtime of 95% and is improved with depth estimation, vertical advection, and atmospheric turbulence modules. A set of 152 flight-attributed LIDAR cross sections is assembled to compare APCEMM and CoCiP results against individual contrail observations on metrics such as contrail width, depth, and optical depth. A method dubbed “ambient parameter inference”, where contrail models infer the meteorological conditions necessary to reproduce a contrail observation, is developed to produce estimated distributions of ambient parameters. These distributions are used to analyze model sensitivities, biases in the weather data, and errors due to weather data uncertainty and modeling assumptions. I find that the distributions of the wind shear and vertical humidity profile as inferred by APCEMM have means and medians within the range of radiosonde measurements of these quantities, suggesting that the model adequately accounts for the sensitivities of contrail properties to these parameters. Compared to the APCEMM-inferred parameters, the ERA5 weather data predicts a 3.8 times higher average supersaturated layer depth and a 56% lower wind shear, suggesting systematic biases. CoCiP infers on average a 39% lower supersaturated layer depth and a 3.0 times higher ice supersaturation level compared to APCEMM. Due to the APCEMM-inferred parameters’ closer agreement with radiosonde measurements, this suggests that there may be modeling errors due to CoCiP’s inability to resolve the contrail’s vertical profile and its lower sensitivity to relative humidity. Errors in the ambient humidity data are found to possibly account for an over 100% average absolute error in optical depth when using APCEMM, greater than the 72.5% attributable to CoCiP modeling limitations. APCEMM is found to predict contrails with a 29.3% longer average lifetime and a 4.34-5.92 times higher average energy forcing compared to CoCiP when using the ERA5 weather data. This suggests that inter-model disagreement is on the same order of magnitude as the already known errors resulting from meteorological data gaps.
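At its core, ambient parameter inference is a root-finding problem: solve model(parameter) = observation for the ambient parameter. A toy sketch with a made-up monotone model (not APCEMM or CoCiP):

from scipy.optimize import brentq

def toy_optical_depth(rh_ice):
    # Hypothetical monotone stand-in: optical depth grows with ice
    # supersaturation (RH_ice above 1.0). Not APCEMM or CoCiP physics.
    return max(0.0, 0.9 * (rh_ice - 1.0)) ** 0.8

observed_tau = 0.25                  # e.g., from a LIDAR cross section
rh = brentq(lambda r: toy_optical_depth(r) - observed_tau, 1.0, 1.6)
print("inferred ambient RH_ice:", round(rh, 3))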
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Harnessing Intelligent Audio-Gesture Interfaces For Wearables As A Sleep Aid</title>
<link href="https://hdl.handle.net/1721.1/155483" rel="alternate"/>
<author>
<name>Jacobs Luengo, Daniel Alberto</name>
</author>
<id>https://hdl.handle.net/1721.1/155483</id>
<updated>2024-07-09T04:02:37Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Harnessing Intelligent Audio-Gesture Interfaces For Wearables As A Sleep Aid
Jacobs Luengo, Daniel Alberto
Insomnia, difficulty initiating and maintaining sleep, affects a significant portion of the global population. The mainstream adoption of wearable computing presents a unique opportunity to study and aid sleep at an individual level. Here we introduce Zzzonic, a smart sleep-aid application designed for smartwatches that leverages cognitive psychology and human-computer interaction (HCI) to facilitate sleep onset by engaging users in audio tasks as a form of intrusive thought control. A significant aspect of Zzzonic's functionality is its adaptive control system, which estimates sleep onset latency in real time by monitoring indicators such as motion and user response. The system then progressively modifies the characteristics of the audio tasks to minimize sleep onset latency. This thesis evaluates Zzzonic through a series of user trials conducted throughout the development of the app, assessing the capacity to predict and control sleep onset. The results indicate that accurately predicting sleep onset latency in real time as a control signal is possible, but there was no evidence that the system could minimize sleep onset latency. The inclusion of more indicator signals and machine learning techniques is likely to significantly improve real-time sleep onset latency prediction. Future work on computer-modulated intrusive thought control would benefit from the evaluation of task design and intrusive thought indicators, and from identifying an adequate control framework.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Community-Based Approach for Hub Placements</title>
<link href="https://hdl.handle.net/1721.1/155480" rel="alternate"/>
<author>
<name>Chavalithumrong, Alissa</name>
</author>
<id>https://hdl.handle.net/1721.1/155480</id>
<updated>2024-07-09T03:38:09Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Community-Based Approach for Hub Placements
Chavalithumrong, Alissa
Advanced Air Mobility (AAM) is a rapidly emerging sector in the aerospace industry that seeks to revolutionize transportation by integrating highly automated aircraft into the airspace. As AAM technology matures, establishing a network framework and strategic hub locations becomes crucial for transitioning from theoretical models to practical applications in transportation systems. This thesis investigates community-based strategies for hub placement within the AAM infrastructure. More specifically, it utilizes network segmentation to decompose a network into communities, simplifying the hub selection process into more manageable sub-problems. Our first contribution is the development of a specialized community detection methodology called Directed Flow Communities (DFC), which is designed to accommodate the attributes of transportation networks. Next, we conduct a case study using the Freight Analysis Framework (FAF) dataset as a proxy for AAM demand. The empirical investigation focuses on three key sectors: pharmaceuticals, electronics, and comprehensive freight flows, each presenting distinct challenges and insights into the network’s structure. The findings demonstrate the effectiveness of the community-detection-based methods in identifying cost-efficient hub locations.
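To illustrate the decompose-then-select pattern, the sketch below uses a generic modularity-based community detector from networkx as a stand-in for DFC (which is the thesis's own method) and an invented toy flow graph; within each community the highest-flow node is taken as the hub.

    import networkx as nx
    from networkx.algorithms import community

    # Toy flow network; edge weights stand in for freight volumes.
    G = nx.Graph()
    G.add_weighted_edges_from([("A", "B", 9.0), ("B", "C", 7.0),
                               ("C", "A", 5.0), ("D", "E", 8.0),
                               ("E", "F", 6.0), ("C", "D", 1.0)])

    # Stand-in for DFC: a generic modularity-based community detector.
    comms = community.greedy_modularity_communities(G, weight="weight")

    # Within each community, pick the node carrying the most flow as its hub.
    hubs = [max(c, key=lambda n: G.degree(n, weight="weight")) for c in comms]
    print("communities:", [sorted(c) for c in comms], "hubs:", hubs)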
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reinforcement Learning for Cybersecurity Risk Assessment of Advanced Air Mobility Systems</title>
<link href="https://hdl.handle.net/1721.1/155472" rel="alternate"/>
<author>
<name>Pieper, Brenton A.</name>
</author>
<id>https://hdl.handle.net/1721.1/155472</id>
<updated>2024-07-09T03:34:32Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Reinforcement Learning for Cybersecurity Risk Assessment of Advanced Air Mobility Systems
Pieper, Brenton A.
Modern AI/ML tools have significant potential to accelerate the development of Advanced Air Mobility (AAM) systems that use unmanned aerial systems for providing mobility services. The efficacy of these systems relies on highly granular, reliable, and trustworthy sensor data. This thesis is motivated by the need to assess safety risks due to cyber vulnerabilities in the surveillance components of AAM systems such as Automatic Dependent Surveillance-Broadcast (ADS-B) and the Airborne Collision Avoidance System (ACAS). We focus on spoofing attacks targeted at specific AAM agents and develop a computational approach to evaluate the impact of such attacks on the performance of cooperative agents modeled in a Multi-Agent Reinforcement Learning (MARL) framework. Our threat model is particularly suited for quantifying the safety risks of nominally trained MARL algorithms under attacks by an adversary capable of compromising observational data of a single target agent. In contrast to prior work in Adversarial RL, our approach to creating adversarial perturbations does not require access to learning and control mechanisms internal to the compromised agent. We show how realistic spoofing attacks can be successfully constructed using a simulated MARL-based AAM system, called AAM-Gym. We then conduct a safety risk analysis of such attacks using commonly accepted aviation safety metrics. Specifically, we find that safety compliance decreases across multiple aircraft densities under a spoofing attack to a single agent, owing to higher risk of Near Mid-Air Collision (NMAC). Finally, to understand possible algorithmic defenses, we take inspiration from Safe RL and show how AAM agents can be made more robust, and hence more safety compliant, to observational spoofing by using a minimax training criterion. Our work highlights the need to rigorously study the safety risks of AAM systems under realistic cyber threat models. Our findings can benefit efforts to develop practical defense techniques, such as signal validation and filtering, to detect the presence of adversarial perturbations, and control algorithms to adapt and respond to safety compromises in a timely manner.
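A minimal sketch of the threat model described above, in which only the targeted agent's observational data is perturbed and no access to any agent's internal learning or control machinery is required; the observation layout and offset values are invented for illustration.

    import numpy as np

    def spoof_observation(observations, target_idx, offset):
        # Perturb only the targeted agent's observation; every agent's
        # internal learning and control mechanisms remain untouched.
        spoofed = [obs.copy() for obs in observations]
        # Hypothetical layout: the first two entries are the intruder's
        # relative position, as an ADS-B-like input.
        spoofed[target_idx][0:2] += offset
        return spoofed

    observations = [np.array([1.0, 2.0, 0.3]), np.array([0.5, -1.0, 0.1])]
    attacked = spoof_observation(observations, target_idx=0,
                                 offset=np.array([150.0, -80.0]))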
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automating Heterogeneous Parallelism in Numerical Differential Equations</title>
<link href="https://hdl.handle.net/1721.1/155471" rel="alternate"/>
<author>
<name>Utkarsh</name>
</author>
<id>https://hdl.handle.net/1721.1/155471</id>
<updated>2024-07-09T03:37:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Automating Heterogeneous Parallelism in Numerical Differential Equations
Utkarsh
Scientific computing is an amalgamation of numerical methods and computer science. Developments in numerical analysis have allowed stable and accurate numerical schemes, whereas computer algorithms have been successfully adapted to today’s standard multicore systems, enabling parallelism. Combining efficient numerical algorithms with efficient parallelism presents a challenge mainly due to the independent development of these fields and is, therefore, typically solved on a domain-specific basis by domain experts. The development of general-purpose tools that integrate parallelism into algorithms, accessible through high-level languages, signifies the future direction for addressing computational demands across various domains. This thesis work represents a culmination of efforts in general-purpose parallel numerical algorithms for solving differential equations. We make them accessible by choosing the Julia programming language to implement the high-level framework. Solving differential equations appears to be an intrinsically serial process due to progressive time-stepping that proves challenging to parallelize. Most approaches fall into two broad categories: the first is parallelism within the solver operations, making each individual solve faster; the second is parallelism across solves, i.e., solving multiple batches at a time. We automate the parallelization process in both these domains while keeping the algorithms general-purpose. Parallelization with different hardware accelerators, such as CPUs and GPUs, is also investigated. Parallelism for sufficiently large stiff ODEs is traditionally linked to the parallelization of the matrix factorization stage. However, these methods still need to overcome the threading overhead for ODEs having fewer than approximately 200 states. We propose implementing adaptive-order, adaptive time-stepping stiff ODE solvers such as extrapolation methods, which can parallelize a single instance of an ODE solve even for small ODEs. The other need for parallelization of ODE solvers arises from solving ODEs for batches of data, a typical workflow in inverse problems, global sensitivity analysis, and uncertainty quantification. Traditionally, GPU-accelerated ODE solvers were specially developed for high-dimensional PDE systems, and these can be adapted for batched ODE solvers. The approach for parallelization is to convert an array-based ODE solver to work with GPU-based arrays. These approaches have shortcomings, such as implicit synchronization of time steps for all the ODEs and GPU overheads. We propose that these approaches can be improved significantly, making GPU acceleration for ODE solvers device-agnostic, general-purpose, and accessible from a high-level language.
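The time-step synchronization shortcoming mentioned above can be seen in a few lines (a Python/SciPy sketch standing in for the Julia framework; the stacked batch and rate constants are invented): stacking independent ODEs into one adaptive solve forces a single shared step size, so the stiffest instance throttles the whole batch.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Three independent copies of y' = -k*y stacked into one state vector.
    # An adaptive solver applied to the stacked system must choose a single
    # step size for every copy, so the stiffest instance (k = 1000) slows
    # the entire batch: the implicit time-step synchronization noted above.
    k = np.array([1.0, 10.0, 1000.0])

    def rhs(t, y):
        return -k * y

    sol = solve_ivp(rhs, (0.0, 1.0), np.ones(3), method="LSODA", rtol=1e-8)
    print("steps taken for the whole batch:", sol.t.size)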
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Human-Computer Interaction-Driven Inquiry on the Extent to Which Web-Based Trust Signals Adequately Represent the Risk of Interactions on the Web</title>
<link href="https://hdl.handle.net/1721.1/155470" rel="alternate"/>
<author>
<name>Ocampo, Javier Adrian L.</name>
</author>
<id>https://hdl.handle.net/1721.1/155470</id>
<updated>2024-07-09T03:07:46Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Human-Computer Interaction-Driven Inquiry on the Extent to Which Web-Based Trust Signals Adequately Represent the Risk of Interactions on the Web
Ocampo, Javier Adrian L.
As the global population increasingly relies on internet-based products, services, and platforms, users are becoming more vulnerable to unintended consequences. One such consequence is the increased susceptibility to malicious actors and misinformation online. This vulnerability escalates as online interactions become more sophisticated, with users increasingly depending on the internet for complex needs like social activity, banking, and education. These interactions often involve exchanges of personal data, information, and monetary assets, which have become targets for malicious actors. This thesis examines a key point of vulnerability: the user interfaces and interaction components, referred to as "trust signals," that are used to assess the trustworthiness of other users and information on these platforms. The research seeks to highlight the importance of trust signals in creating secure and reliable online environments, as well as to explore how poorly designed trust signals can undermine trust and contribute to instability. To uncover latent needs and insights regarding trust signals, a human-centered design process was employed as the methodology. This approach facilitated understanding of user behaviors and preferences through iterative user research and design exploration. The thesis reveals two key findings. First, the human-centered design process showed that users rely on social proofs within trust signals, often basing their trust on their understanding of the recommender's perspective. Second, users are susceptible to relying on inadequate social proof proxies, such as like counts, follower counts, or Discord server member counts, to evaluate trustworthiness in contexts for which these signals were not intended.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems</title>
<link href="https://hdl.handle.net/1721.1/155468" rel="alternate"/>
<author>
<name>Han, Jessy Xinyi</name>
</author>
<id>https://hdl.handle.net/1721.1/155468</id>
<updated>2024-07-09T03:42:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems
Han, Jessy Xinyi
We are interested in developing a data-driven method to evaluate race-induced biases in law enforcement systems. While recent works have addressed this question in the context of police-civilian interactions using police stop data, they have two key limitations. First, bias can only be properly quantified if true criminality is accounted for in addition to race, but it is absent in prior works. Second, law enforcement systems are multi-stage and hence it is important to isolate the true source of bias within the “causal chain of interactions” rather than simply focusing on the end outcome; this can help guide reforms. In this work, we address these challenges by presenting a multi-stage causal framework incorporating criminality. We provide a theoretical characterization and an associated data-driven method to evaluate (a) the presence of any form of racial bias, and (b) if so, the primary source of such a bias in terms of race and criminality. Our framework identifies three canonical scenarios with distinct characteristics: in settings like (1) airport security, the primary source of observed bias against a race is likely to be bias in law enforcement against innocents of that race; (2) AI-empowered policing, the primary source of observed bias against a race is likely to be bias in law enforcement against criminals of that race; and (3) police-civilian interaction, the primary source of observed bias against a race could be bias in law enforcement against that race or bias from the general public in reporting (e.g. via 911 calls) against the other race. Through an extensive empirical study using police-civilian interaction (stop) data and 911 call data, we find an instance of such a counter-intuitive phenomenon: in New Orleans, the observed bias is against the majority race and the likely reason for it is the over-reporting (via 911 calls) of incidents involving the minority race by the general public.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-fidelity Modeling and Reinforcement Learning for Energy Optimal Planning</title>
<link href="https://hdl.handle.net/1721.1/155466" rel="alternate"/>
<author>
<name>de Castro, Luke</name>
</author>
<id>https://hdl.handle.net/1721.1/155466</id>
<updated>2024-07-09T04:00:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Multi-fidelity Modeling and Reinforcement Learning for Energy Optimal Planning
de Castro, Luke
Modeling the energy consumption of a quadrotor involves complex electrical and physical dynamics, making it difficult to optimize over. We present a sequence-to-sequence multi-fidelity Gaussian process (MFGP) that learns a data-driven model to predict the energy required to fly a given vehicle trajectory. The goal is to create an accurate energy prediction that minimizes the number of expensive high-fidelity simulations required for training. The MFGP algorithm can incorporate many low-accuracy samples from a simple motor model with a few computationally demanding battery simulations to create a single accurate energy prediction. We perform sample efficiency experiments, finding a single-fidelity model often needs 10 times more high-fidelity data to match the accuracy achieved by the MFGP. The energy prediction model is then applied to a reinforcement learning (RL) agent, providing a reward signal to a minimum-energy trajectory planner. The RL policy generates more energy-efficient trajectories than those found by a nonlinear optimization baseline method, and we compare it to a minimum-time RL model to show that the energy-efficient policy is non-trivial.
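A minimal two-fidelity sketch of the general idea (not the sequence-to-sequence MFGP itself): one GP is trained on many cheap samples, and a second GP is trained on the low/high discrepancy at a few expensive points, in the style of Kennedy and O'Hagan; the toy functions and sample counts are invented.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    def f_lo(x):   # cheap low-fidelity proxy (e.g., simple motor model)
        return np.sin(8.0 * x).ravel()

    def f_hi(x):   # costly high-fidelity proxy (e.g., battery simulation)
        return (np.sin(8.0 * x) + 0.3 * x).ravel()

    X_lo = rng.uniform(0.0, 1.0, (40, 1))   # many cheap samples
    X_hi = rng.uniform(0.0, 1.0, (6, 1))    # few expensive samples

    gp_lo = GaussianProcessRegressor(RBF(0.2)).fit(X_lo, f_lo(X_lo))
    # Second GP models the low/high discrepancy at the expensive points.
    gp_err = GaussianProcessRegressor(RBF(0.2)).fit(
        X_hi, f_hi(X_hi) - gp_lo.predict(X_hi))

    X = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
    energy_estimate = gp_lo.predict(X) + gp_err.predict(X)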
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Techno-Economic Analysis of Fusion Energy in the European Electricity Market</title>
<link href="https://hdl.handle.net/1721.1/155465" rel="alternate"/>
<author>
<name>Duitemeijer, Mart</name>
</author>
<id>https://hdl.handle.net/1721.1/155465</id>
<updated>2024-07-09T03:59:21Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Techno-Economic Analysis of Fusion Energy in the European Electricity Market
Duitemeijer, Mart
This study explores the potential of fusion energy in the decarbonization of the European Union’s electricity market. The study simulates various scenarios by employing the GenX least-cost optimization model, factoring in different investment costs and emission caps. The research addresses how fusion energy could influence total system costs, electricity prices, and competitiveness against other technologies. Results indicate that, if introduced at low investment costs, fusion can transform the electricity system, acting as a baseload power source and altering investment dynamics. Conversely, in scenarios where fusion investment costs are higher, the results predict a diversified electricity mix dominated by renewables like wind and solar, complemented by gas with Carbon Capture and Storage (CCS) and battery storage to manage intermittency and maintain grid stability.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Approach to Fault Management Design for the Proposed Mars Sample Return EDL and Ascent Phase Architectures</title>
<link href="https://hdl.handle.net/1721.1/155424" rel="alternate"/>
<author>
<name>Mao, Cici</name>
</author>
<id>https://hdl.handle.net/1721.1/155424</id>
<updated>2024-06-28T03:50:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">An Approach to Fault Management Design for the Proposed Mars Sample Return EDL and Ascent Phase Architectures
Mao, Cici
The Mars Sample Return (MSR) campaign aims to bring Martian regolith samples back to Earth. JPL is currently developing the Sample Retrieval Lander (SRL) to receive the samples collected by the Perseverance rover and launch them into Mars orbit using a Mars Ascent Vehicle (MAV) for future Earth return. The telecommunications delay from Earth to Mars requires autonomy on board the spacecraft for mission phases like Entry, Descent &amp; Landing (EDL) and MAV Launch, given limited opportunity for operator intervention. Fault protection (FP) encapsulates these autonomous system behaviors, which aim to protect the spacecraft by limiting or detecting and responding to anomalies. In order to provide sufficient coverage of the possible faults a system may encounter, multiple FP analyses are needed to identify and analyze the fault set of a system to guide future design iterations. This thesis focuses on three tools: Fault Containment Region (FCR), Failure Mode Effects &amp; Criticality Assessment (FMECA), and Fault Tree Analysis (FTA). FCRs are used to identify the boundaries at which faults can occur and propagate in a system, making them useful tools for defining functional boundaries in a system and identifying areas that are single-string, or have no redundancy. FMECAs and FTAs use a bottom-up and top-down approach, respectively, to identify possible faults and the associated consequences and impacts of each anomaly; together, these tools provide a comprehensive fault set to be used in FP architecture design. Using these tools demonstrates how FP design factors into engineering trades – monitoring or additional redundancy adds cost and complexity – and thus the results of these analyses need to be used iteratively with the system design to determine the best approach. As such, it is shown that a majority of EDL and MAV Launch elements are single-string, and while there are opportunities for adding redundancy in EDL sensors, there are few options for MAV Launch given its engineering constraints. While both phases have little redundancy, the option space for EDL is better known given JPL’s multiple successful past landings. Future work should conceptualize possible areas of added redundancy to the MAV to lower overall mission risk.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Silicon Photomultipliers as Free Space Optical Communication Sensors</title>
<link href="https://hdl.handle.net/1721.1/155423" rel="alternate"/>
<author>
<name>Gallo, Leonardo de la</name>
</author>
<id>https://hdl.handle.net/1721.1/155423</id>
<updated>2024-06-28T03:05:11Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Silicon Photomultipliers as Free Space Optical Communication Sensors
Gallo, Leonardo de la
Free-space optical communications (FSOC) is a growing field that presents an attractive alternative to the current technology standard of radio frequency (RF) communications. Typical optical carriers have smaller SWaP in comparison to RF systems due to the difference in required aperture sizes. The narrower beam divergence of optical wavelengths also results in a higher power efficiency for long-range communication links. This improvement in performance can be leveraged by platforms constrained by size, such as satellites. Over the past ten years, the number of satellites launched has increased by an order of magnitude, with smallsats currently making up 96% of the launched vehicles. The use of FSOC terminals for smallsats enables higher data rates but requires precise pointing. An example nanosatellite FSOC mission, NASA’s CubeSat Laser Infrared CrosslinK (CLICK) B/C, addresses this by using a beacon-based pointing, acquisition, and tracking (PAT) system to correct for angular misalignment while an avalanche photodiode (APD) receiver detects the communication signal. The high gain of the APD allows the communication signal to be detected at link distances ranging from 25 km to 580 km for CLICK-B/C. In this work, we consider whether higher sensitivities may be achieved by using a silicon photomultiplier (SiPM) as a receive optical sensor. SiPMs are arrays of APDs operated in Geiger mode, characterized by nanosecond output pulses and gains on the order of 10⁶ electrons per photon. This thesis proposes using a SiPM in a 2x2 pixel configuration as a dual pointing and communication sensor for FSOC terminals in LEO. In this configuration the misalignment of the optical signal between the transmit and receive terminals can be directly measured by the SiPM, eliminating the need for a dedicated beacon laser and quadcell detector used for PAT. This reduces the overall SWaP of the communication terminal by a factor of 2. The pointing performance of the proposed SiPM configuration is characterized by calculating the noise equivalent angle (NEA) of the detector through simulation and experiment, and the communication performance is evaluated by testing the maximum detectable pulsing frequency of a laser. The simulation results support an NEA of 1 µrad and a maximum detectable pulsing rate of 2 GHz.
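The 2x2 configuration can estimate misalignment the way a conventional quadcell does, from the imbalance of photon counts across the four pixels; a sketch follows (the scale factor and count values are invented, and this is not the thesis's calibration).

    import numpy as np

    def pointing_error(counts, scale_rad=1.0e-3):
        # Quadcell-style centroid from 2x2 SiPM photon counts, ordered
        # [top_left, top_right, bottom_left, bottom_right]; scale_rad is
        # an assumed calibration from normalized imbalance to radians.
        tl, tr, bl, br = counts
        total = tl + tr + bl + br
        x = ((tr + br) - (tl + bl)) / total
        y = ((tl + tr) - (bl + br)) / total
        return scale_rad * np.array([x, y])

    # A spot displaced toward the right-hand pixels gives a positive x error.
    print(pointing_error(np.array([900.0, 1100.0, 900.0, 1100.0])))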
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of Cavity Geometry to Improve Optical Quality of Windows in Hypersonic Flow</title>
<link href="https://hdl.handle.net/1721.1/155422" rel="alternate"/>
<author>
<name>Schofield, Matthew</name>
</author>
<id>https://hdl.handle.net/1721.1/155422</id>
<updated>2024-06-28T03:53:55Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Study of Cavity Geometry to Improve Optical Quality of Windows in Hypersonic Flow
Schofield, Matthew
The optical quality of the window-air system of a flight vehicle in hypersonic flow is simulated. The optical distortion of the window-air system is the metric of merit. Within the Earth’s atmosphere, vehicles at hypersonic speeds may generate viscous and high-temperature thermal boundary layers. These boundary layers induce a nonuniform distribution of temperature, density, and fluid velocity over the window-sensor system, leading to a degradation of the optical quality of the system. The heat flux into the system is simulated for various geometries (length-to-depth ratios). Computer-simulated flow fields and the time-development of different measures of optical quality are produced using US3D. Conjugate heat transfer is used for simulation of solid temperature development, with aluminum 6061 as the vehicle solid (frame) material and sapphire (Al₂O₃) as the window material. Optimal window-air system configurations are discussed for a Mach 7 vehicle at 20 km.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Traveling Salesman Problem in Multi-Agent Systems with Practical Constraints</title>
<link href="https://hdl.handle.net/1721.1/155420" rel="alternate"/>
<author>
<name>Yang, Ruixiao</name>
</author>
<id>https://hdl.handle.net/1721.1/155420</id>
<updated>2024-06-28T03:03:12Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Optimizing Traveling Salesman Problem in Multi-Agent Systems with Practical Constraints
Yang, Ruixiao
The Traveling Salesman Problem (TSP) is a fundamental challenge in multi-agent systems, particularly in task allocation scenarios. Traditional models considering the unconstrained multi-agent TSP, which require multiple salesmen to visit all customers collectively, often fail to produce feasible solutions for real-world applications due to practical constraints. To address this gap, we explore two prevalent constraints: energy limitations and aerial robot collaboration. We introduce two novel formulations: the Multi-Agent Energy-Constrained TSP (MA-ECTSP) and the Multi-Agent Flying Sidekick TSP (MA-FSTSP). The MA-ECTSP considers constraints such as limited battery levels and inter-agent conflicts at replenishment sites, while the MA-FSTSP models scenarios where multiple trucks, each equipped with several drones, collaborate to visit all customers, with trucks restricted to roads and drones having greater freedom in their flight paths. We propose a three-phase framework that first deconstructs these complex problems into more manageable single-agent versions, then optimizes them separately without constraints as heuristics, and finally integrates the heuristics and optimizes under the practical constraints. For the MA-ECTSP, we decompose the instance into smaller sub-problems by splitting the minimum spanning tree (MST), solve each using a combination of TSP solvers and heuristic searches, and then aggregate the tours into a feasible solution using a Mixed-Integer Linear Program (MILP) with significantly fewer variables and constraints. For the MA-FSTSP, we initially decompose the problem into subproblems of one truck with multiple drones, compute routes for trucks without drones, and use these in the final phase as heuristics to optimize both drone and truck routes concurrently. Our approach demonstrates significant effectiveness and scalability compared to existing baselines, as validated on real-world road networks.
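A toy sketch of the MST-splitting decomposition (invented coordinates; nearest-neighbour tours stand in for the TSP solvers, and the final MILP aggregation step is omitted):

    import networkx as nx

    # Toy customer sites; two natural clusters.
    pts = {0: (0, 0), 1: (1, 0), 2: (0, 1), 3: (8, 8), 4: (9, 8), 5: (8, 9)}
    G = nx.complete_graph(len(pts))
    for u, v in G.edges:
        G[u][v]["weight"] = sum((a - b) ** 2
                                for a, b in zip(pts[u], pts[v])) ** 0.5

    # Split the MST at its heaviest edge to obtain two sub-problems.
    mst = nx.minimum_spanning_tree(G)
    u, v, _ = max(mst.edges(data=True), key=lambda e: e[2]["weight"])
    mst.remove_edge(u, v)
    clusters = list(nx.connected_components(mst))

    # Cheap per-cluster tours (nearest-neighbour in place of a TSP solver).
    def tour(nodes):
        nodes = list(nodes)
        route = [nodes.pop()]
        while nodes:
            nxt = min(nodes, key=lambda n: G[route[-1]][n]["weight"])
            nodes.remove(nxt)
            route.append(nxt)
        return route

    print([tour(c) for c in clusters])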
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Climate Impact Analysis of Direct Air Capture Deployment</title>
<link href="https://hdl.handle.net/1721.1/155419" rel="alternate"/>
<author>
<name>Housen, Tara</name>
</author>
<id>https://hdl.handle.net/1721.1/155419</id>
<updated>2024-06-28T03:05:20Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Climate Impact Analysis of Direct Air Capture Deployment
Housen, Tara
Direct air capture (DAC) is a negative emissions technology (NET) that can contribute to mitigating climate change impacts by extracting CO₂ from the atmosphere. Given its low technical maturity, uncertainties persist regarding DAC's cost, scalability, and life-cycle emissions. In this analysis I assess liquid and solid DAC technologies powered by various energy sources to provide insights into its current and future economic viability and its net GHG emissions impact. My findings show electric DAC configurations relying on renewable electricity have high carbon removal efficiency and relatively low cost. Additionally, I quantify the climate impact of large-scale global DAC deployment with investments of 0.5%, 1%, 1.5%, and 2% of the global GDP. I integrate electric DAC plants powered by renewable energy with co-located CO₂ storage sites. This analysis reveals that for scenarios with high anthropogenic emissions, DAC investments of up to 2% of global GDP cannot stabilize CO₂ concentrations. The results indicate that the 1.5°C goal can be achieved with an investment of 1.5% of the global GDP if cumulative emissions remain within 1178 GtCO₂; or with an investment of 0.5% of the global GDP if cumulative emissions remain within 981 GtCO₂. Alternatively, the 2°C goal can be achieved with an investment of 1.5% of the global GDP if cumulative emissions remain within 2750 GtCO₂; or with an investment of 0.5% of the global GDP if cumulative emissions remain within 1178 GtCO₂.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of Experiments for Contrail Avoidance</title>
<link href="https://hdl.handle.net/1721.1/155413" rel="alternate"/>
<author>
<name>Kigotho, Olivier Ng'weno</name>
</author>
<id>https://hdl.handle.net/1721.1/155413</id>
<updated>2024-06-28T03:38:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Design of Experiments for Contrail Avoidance
Kigotho, Olivier Ng'weno
Condensation trails (contrails) are line-shaped clouds that form behind aircraft and contribute more to climate change each year than any other form of aircraft emissions. While most contrails have little effect on the climate because they dissipate quickly, contrails persist when they form in parts of the atmosphere that are ice-supersaturated (ISS). These ISS regions are often shallow, and can be avoided by small deviations in altitude. However, it is expensive to test whether these deviations are effective, as conducting an experiment requires deviating commercially scheduled flights from their typical cruise altitude. Meanwhile, metrics have not been developed that can compare the costs and benefits of performing contrail avoidance deviations. This thesis shows that measuring the total contrail length avoided relative to the total length of deviations is a way to compare the costs and benefits of contrail avoidance. The results of a Monte Carlo simulation show that a paired difference test will likely reduce the necessary number of samples for statistical significance relative to a randomized control trial. On the other hand, a randomized complete block design with blocking for engine efficiency will not significantly affect the statistical power of the experiment. However, the instrument used to measure contrails will have the greatest effect on the number of samples needed, because the number of samples necessary for statistical significance scales inversely with the probability that an instrument will observe a contrail. Finally, these simulations suggest that the benefit of contrail avoidance is sensitive to costs of performing deviations besides fuel burn. Therefore, a contrail avoidance policy should prioritize avoiding longer contrails over shorter ones to reduce the number of deviations necessary for a given benefit. It is expected that contrail avoidance experiments will be necessary at multiple stages of scaling up the contrail avoidance system. As a result, using these experiment designs will be useful to compare different strategies of contrail avoidance and different prediction systems. Knowing how to measure the effect of contrail avoidance will take us one step closer to mitigating the climate impacts of the aviation industry.
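The paired-versus-randomized comparison can be reproduced in miniature (a sketch with invented effect sizes and noise levels, not the thesis's simulation): when treated and control flights share day-to-day atmospheric conditions, pairing removes that shared variance and raises power at the same sample size.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    effect, day_noise, flight_noise, n = 0.5, 1.0, 1.0, 40
    hits_paired = hits_rct = 0
    for _ in range(2000):
        day = rng.normal(0.0, day_noise, n)       # shared daily conditions
        ctrl = day + rng.normal(0.0, flight_noise, n)
        treat = day + effect + rng.normal(0.0, flight_noise, n)
        # Pairing removes the shared day-to-day variance; the unpaired
        # test must treat it as noise.
        hits_paired += 0.05 > stats.ttest_rel(treat, ctrl).pvalue
        hits_rct += 0.05 > stats.ttest_ind(treat, ctrl).pvalue
    print("power, paired:", hits_paired / 2000, " RCT:", hits_rct / 2000)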
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On stress, strength, and failure in asteroids during planetary entry</title>
<link href="https://hdl.handle.net/1721.1/155412" rel="alternate"/>
<author>
<name>Rulko, Theo Artur</name>
</author>
<id>https://hdl.handle.net/1721.1/155412</id>
<updated>2024-06-28T03:28:47Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">On stress, strength, and failure in asteroids during planetary entry
Rulko, Theo Artur
Efforts to characterize the danger posed by asteroids have motivated the modeling of their entry and breakup in Earth’s atmosphere. These models, crucial to planetary defense efforts, necessitate an understanding of the physics underlying fragmentation — including knowledge of key governing physical properties such as strength. Recovered meteorites provide some of the best evidence for these properties. However, their measured strengths are often orders of magnitude higher than those inferred from meteor observations. In this thesis, we seek to provide a full-field description of the stresses that develop in monolithic meteors as they enter the atmosphere and deform, to shed light on the fragmentation process. To quantify those stresses, we develop a simple model of meteor entry that treats the bolide as a deformable body subject to suitable aerodynamic, inertial, and centrifugal loads. We apply these external loads via the Meteor Equations in conjunction with modified Newtonian aerodynamic theory at high Mach numbers. First, we compute an analytical series solution to the stress field in an idealized case and show that, unlike what is classically assumed, the tensile stresses in asteroids may be as much as 20 times lower than the ram pressure. Then, we conduct finite-element simulations of meteor falls for non-ideal asteroids, and show that our conclusions hold for all but the most irregularly shaped bodies, where geometric stress concentrations may cause early fragmentation. Finally, we simulate the breakup process in select cases by recourse to the discontinuous Galerkin / Cohesive Zone method, confirming that cracks nucleate in accordance with our analytical predictions. We conclude that this stress-reduction factor is an important parameter in the modeling of asteroid entry and fragmentation and that, in combination with Weibull-type size-strength scaling laws, it may help shed some light on the observed discrepancy between meteor and meteorite strengths.
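For scale, a back-of-envelope calculation (illustrative numbers only, not from the thesis): the ram pressure is of order rho*v^2, and the factor-of-20 reduction noted above would put peak tensile stresses well below it.

    rho_air = 0.02          # kg/m^3, upper-atmosphere density (assumed)
    v = 18.0e3              # m/s, typical entry speed (assumed)
    ram_pressure = rho_air * v * v           # about 6.5 MPa
    tensile_estimate = ram_pressure / 20.0   # the factor-of-20 reduction
    print(f"ram: {ram_pressure / 1.0e6:.2f} MPa,",
          f"tensile: {tensile_estimate / 1.0e6:.2f} MPa")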
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Processes for the Fabrication of SU-8 Structures and Sputtered Materials on Porous Glass for Electrospray Thruster Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/155410" rel="alternate"/>
<author>
<name>Nachtigal, Catherine J.</name>
</author>
<id>https://hdl.handle.net/1721.1/155410</id>
<updated>2024-06-28T03:35:48Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Processes for the Fabrication of SU-8 Structures and Sputtered Materials on Porous Glass for Electrospray Thruster Manufacturing
Nachtigal, Catherine J.
Electrospray thrusters are electric propulsion devices that generate thrust by applying an electric potential between an emitter, a concentrated point to which propellant is fed, and downstream extractor electrodes; this potential produces a high electric field at the emitter that accelerates the propellant. Current electrospray thruster designs use sharp micron-scale cone-shaped emitters made from porous materials to generate ion emission through passive propellant feeding, but the current design has flaws that affect its lifetime, reliability, and performance. High specific impulse thruster firing occurs when operating in the purely ionic regime (PIR), in which an ionic liquid propellant (a room temperature molten salt or liquid metal) emits individual ions rather than larger droplets. These emitters must be built on the micron scale to achieve PIR emission, resulting in their operation as large monolithic arrays with a single extractor to produce a usable amount of thrust, such that the failure of one emitter out of thousands could lead to full extractor and device failure. Further, the broad parameter space (geometry, flow path, insulation, etc.) is currently not selected according to the optimal requirements for operation in the PIR. Recent simulations show that PIR emission can be achieved in a relatively narrow domain that depends on the applied electric field, meniscus size, and hydraulic impedance for flat panel capillary emitters. These capillary emitters can be designed with individualized extractors that are connected through a series of fuses, isolating any short circuit to a single emitter. Photolithography is a useful micromanufacturing tool that has not yet been utilized to build solid structures on top of porous structures. This is because a porous substrate would uptake any liquid photoresist applied during fabrication, causing the substrate to lose its porosity. To prevent this, and allow for the formation of solid structures on top of a porous substrate for electrospray thruster applications, this thesis develops a manufacturing plan in which the pores within the substrate are loaded with a volatile organic compound (VOC), allowing a structure to be fabricated on the substrate surface via photolithography without the material entering the substrate’s pores. To regain the substrate’s porous structure, the VOC is removed post-manufacturing via sublimation and an acetone wash. Using the manufacturing techniques described in this thesis, a novel electrospray thruster design consisting of capillaries and fuses to optimize PIR performance and prevent short-circuit propagation is proposed to greatly increase the performance and reliability of electrospray thruster devices.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Sustainable Aviation Fuel Production Potential Using Crop Allocation Optimization</title>
<link href="https://hdl.handle.net/1721.1/155409" rel="alternate"/>
<author>
<name>Shu, Yuxin</name>
</author>
<id>https://hdl.handle.net/1721.1/155409</id>
<updated>2024-06-28T03:01:49Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Assessment of Sustainable Aviation Fuel Production Potential Using Crop Allocation Optimization
Shu, Yuxin
Sustainable Aviation Fuel (SAF) has been recognized as a viable near-to-medium-term solution for decreasing carbon emissions in the aviation industry. Global SAF production, however, is limited and falls well short of the International Air Transport Association’s (IATA) goal to achieve net-zero carbon emissions by 2050. This thesis quantifies the global SAF production potential through different crop allocation strategies. Biomass potential is quantified by land suitability and agricultural availability. An optimization model is developed using binary integer linear programming with three crop allocation strategies for 2050 and 2100: fuel maximization, emissions minimization, and land use minimization. The results are shown through six case studies: the United Kingdom, Japan, Australia, Kenya, Brazil, and the United States. Under the Intergovernmental Panel on Climate Change (IPCC) climate scenarios, the globally suitable land can meet and exceed the requirement for biomass cultivation for the aviation sector from the International Energy Agency (IEA). The demand for jet fuel in the U.S. can be fulfilled with 100% SAF, resulting in 21.3% emission savings if optimized for minimum emissions and assuming the use of energy crops. Incorporating lignocellulosic biomass could result in an additional 63.8% reduction in emissions. The study also shows that Japan and the United Kingdom have insufficient agricultural potential to meet their respective domestic SAF demands. In contrast, Australia, Kenya, Brazil, and the United States have agricultural potential that meets or exceeds their respective SAF needs.
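A toy instance of the binary-ILP allocation pattern (invented yields and land cap, fuel-maximization strategy only), sketched with SciPy's milp:

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Toy parcels: fuel yield per parcel if cultivated, and land use.
    fuel = np.array([120.0, 90.0, 60.0, 150.0])   # GJ per parcel (invented)
    land = np.array([1.0, 1.0, 1.0, 1.0])         # land units per parcel
    max_land = 2.0                                # availability cap

    # Binary x_i: cultivate parcel i or not. milp minimizes c @ x, so
    # negate the yields to maximize fuel subject to the land cap.
    res = milp(c=-fuel,
               constraints=LinearConstraint(land.reshape(1, -1),
                                            0.0, max_land),
               integrality=np.ones_like(fuel),
               bounds=Bounds(0.0, 1.0))
    print("chosen parcels:", np.round(res.x))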
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Technical and Policy Needs Analysis for Space Traffic Management of Low Lunar Orbit</title>
<link href="https://hdl.handle.net/1721.1/155401" rel="alternate"/>
<author>
<name>Kirkpatrick, Courtney R.</name>
</author>
<id>https://hdl.handle.net/1721.1/155401</id>
<updated>2024-06-28T03:21:28Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">A Technical and Policy Needs Analysis for Space Traffic Management of Low Lunar Orbit
Kirkpatrick, Courtney R.
The number of artificial objects in space has grown exponentially in the last decade, encouraging a greater focus on space safety and sustainability. Much of this focus is on the detection, tracking, cataloguing, and coordination of objects in space, also known as Space Traffic Management, which serves to prevent collisions in orbit. The cost of a collision in space is often very high--loss of mission, loss of societal support, or even loss of life. Beyond geosynchronous orbit, the Artemis mission brings a renewed excitement for lunar operations, and many countries plan to send missions to the moon in the coming decades. As this topic is quite future-looking, there are many gaps in research related to lunar Space Traffic Management. This thesis serves to begin filling these gaps by answering whether Space Traffic Management will be necessary for low-altitude selenocentric orbits. This thesis analyzes the likelihood of collisions in Low Lunar Orbit using NASA's General Mission Analysis Tool and a GRAIL-based gravity model of degree and order 70 x 70 to propagate selenocentric orbits. These propagations are run using high performance computing through the MIT SuperCloud. Methods of preventing collisions are discussed, supported by the propagation analysis, and recommendations are provided on which satellite should maneuver when both have the capability. The analysis found that impulsive burns are viable for avoiding collisions. This thesis also serves to promote proactive development of a Space Traffic Management system for Low Lunar Orbit by discussing five main policy questions focused on the sustainability of Low Lunar Orbit. For each of these questions, the current solution used around Earth is given, followed by a discussion of the possible solutions that could be implemented in Low Lunar Orbit.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Iron Production by Molten Sulfide Electrolysis</title>
<link href="https://hdl.handle.net/1721.1/155400" rel="alternate"/>
<author>
<name>Suryarao, Kimaya P.</name>
</author>
<id>https://hdl.handle.net/1721.1/155400</id>
<updated>2024-06-28T03:21:10Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Iron Production by Molten Sulfide Electrolysis
Suryarao, Kimaya P.
With greater urgency to combat the detrimental effects of global warming, industries globally have pledged to reach net-zero carbon emissions or become carbon neutral by 2050, the iron and steel industry included. With the exponential increase in steel production and demand, the carbon footprint of the industry has also been rising at a high rate, accounting for approximately 10-11% of global carbon emissions. Present state-of-the-art steel production technologies have not been environmentally benign due to their inextricable dependence on carbon, making complete elimination of GHG emissions challenging. As renewable energy becomes a reality for industrial usage, efforts to decarbonize steel manufacturing motivate a key need to search for technologies solely using electricity for iron ore reduction. Herein, the electrolytic production of molten iron using a novel sulfide route, molten sulfide electrolysis (MSE), is investigated. Experimental evidence for electrolysis and the key attributes and underlying thermodynamics of MSE for iron production are investigated and discussed, along with sulfidation, the feedstock preparation step for the MSE process.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Limitations of Commercial Aviation Safety Assessment Standards Uncovered in the Wake of the Boeing 737 MAX Accidents</title>
<link href="https://hdl.handle.net/1721.1/155396" rel="alternate"/>
<author>
<name>Lopes Rose, Rodrigo</name>
</author>
<id>https://hdl.handle.net/1721.1/155396</id>
<updated>2024-06-28T03:31:00Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Limitations of Commercial Aviation Safety Assessment Standards Uncovered in the Wake of the Boeing 737 MAX Accidents
Lopes Rose, Rodrigo
Commercial aviation accidents, though exceedingly rare, come at a large human, economic, and social cost. Therefore, different stakeholders in industry and government have collaborated to develop standard processes for developing aircraft and assessing their safety, the most popular being the Society of Automotive Engineers’ (SAE) Aerospace Recommended Practices (ARPs) 4754 and 4761. However, most of the engineering techniques used for aircraft development and safety assessment were developed in the mid-20th century and formalized into these standards in the 1990s. Modern aircraft often involve complex interactions between hardware, software, and humans, and the engineering techniques used to analyze these systems have not kept up with the pace of technological development. This thesis studies two recent accidents involving the Boeing 737 MAX (Lion Air flight JT610 and Ethiopian Airlines flight ET302) to identify the limitations that still exist in aviation safety assessment guidance that have contributed to these accidents. A new accident analysis methodology called Causal Analysis based on Systems Theory (CAST) was applied to the 737 MAX accidents to understand why the complex interactions leading to the accidents were not identified during the safety assessment process. The analysis uncovered four main limitations in safety assessment guidance that contributed to the accidents: (a) limited integration of human factors and safety, (b) limited guidance for identifying assumptions, (c) limited ability to capture non-failure based causal scenarios, and (d) limited ability to understand complex nonlinear causal relationships.  A new hazard analysis tool called System-Theoretic Process Analysis (STPA) was then applied to the same systems involved in the 737 MAX accidents to evaluate whether STPA can be used to address the identified limitations. STPA’s scenario-based framework that incorporates humans and software into the hazard analysis was found to support validation of human response assumptions, identification of new assumptions, assessing safety of intended behavior, and understanding circular causality or otherwise non-linear causal factors.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling and Assessment of Efficiency in Arbitrary Air-Breathing Power Systems</title>
<link href="https://hdl.handle.net/1721.1/155392" rel="alternate"/>
<author>
<name>Giroux, Wyatt</name>
</author>
<id>https://hdl.handle.net/1721.1/155392</id>
<updated>2024-06-28T03:05:26Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Modeling and Assessment of Efficiency in Arbitrary Air-Breathing Power Systems
Giroux, Wyatt
The push for net-zero carbon emissions in the aviation sector by 2050 has resulted in an increasing amount of work being done to analyze the benefits of emergent technologies. Aircraft propulsion systems are a common subject of such research, and studies of some proposed architectures, such as hybrid-electric powertrains, have suggested potential fuel-burn and nitrogen oxide emissions reductions of up to 10% and 4.9%, respectively. When attempting to refine and compare these systems, efficiency is a commonly used metric. Efficiency models provide an understanding of where and how energy is being dissipated in a given system, making them invaluable design and evaluation tools. Until recently, the traditional thermal/propulsive efficiency breakdown has been used to model gas-turbine engines. However, this model has two major deficiencies. First, the lack of a per-component efficiency model restricts understanding of system energy dissipation to either thermodynamic or propulsive losses. Second, the traditional model is unable to capture systems utilizing additional energy sources (batteries, fuel cells, etc.) and their respective conversion pathways. While individual studies have created efficiency models for unconventional systems, these models are either specific to a given architecture or are only applicable to a specific class of engines. This makes comparison between specific terms in existing efficiency models impossible.&#13;
&#13;
This thesis presents the Modular Efficiency Model (MEM), which is capable of constructing low-level efficiency models that accurately represent energy flow pathways and are algebraically consistent across arbitrary collections of propulsion system components. This is done by tracking the kinetic energy flow available for propulsion (expanded flow power) across each component in a system. MEM provides a more detailed breakdown of useful energy dissipation, relative influence of streams and components, and individual powertrain efficiencies that can be meaningfully compared to other systems. MEM is demonstrated in this work by comparing performance of unmixed, mixed-flow, and hybrid-electric engine architectures. We identify high fan pressure ratio systems with low fan diameter as candidates for effective mixer use. For hybrid-electric systems, we find a 3.2% reduction in whole-mission fuel burn is possible at the cost of carrying only 50% of the original aircraft payload. Numerous detailed future studies utilizing MEM are recommended, using this thesis as a baseline example for the use of MEM in analyzing and comparing novel architectures.
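A toy sketch of the per-component bookkeeping idea (all numbers assumed; this is not MEM's formulation): if each component reports the flow power entering and leaving it, per-component efficiency and dissipation fall out directly, and the chain composes algebraically.

    components = [("fan", 100.0, 92.0),     # (name, P_in, P_out) in MW
                  ("duct", 92.0, 90.5),
                  ("nozzle", 90.5, 88.0)]
    for name, p_in, p_out in components:
        print(f"{name}: eta {p_out / p_in:.3f},"
              f" dissipation {p_in - p_out:.1f} MW")
    print(f"chain efficiency {components[-1][2] / components[0][1]:.3f}")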
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of a new metric system to regulate NOₓ aircraft emissions at cruise</title>
<link href="https://hdl.handle.net/1721.1/155378" rel="alternate"/>
<author>
<name>Guenard, Adrien</name>
</author>
<id>https://hdl.handle.net/1721.1/155378</id>
<updated>2024-06-28T03:37:36Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Assessment of a new metric system to regulate NOₓ aircraft emissions at cruise
Guenard, Adrien
NOₓ emissions represent the largest source of air quality impacts attributed to aircraft. The largest part of these impacts is due to emissions during cruise, where 90% of fuel burn occurs. NOₓ emissions cause an increase in surface PM2.5 and O₃ concentrations that adversely affects human health. This public health consideration has motivated the International Civil Aviation Organization (ICAO) Committee on Aviation Environmental Protection (CAEP) to set standards aimed at constraining NOₓ emissions. While the current Landing and Takeoff (LTO) regulation is designed to control emissions levels in the vicinity of the airport, emissions above 3,000 ft are not yet regulated. The LTO regulation is limited in its ability to constrain cruise NOₓ emissions. This observation motivated the investigation of new NOₓ metric systems focusing on cruise emissions. In this thesis, several NOₓ metric candidates were defined. The ability of the metrics to represent cruise emissions was assessed quantitatively by computing the Pearson correlation coefficient between the candidate metrics and estimates of emissions from fleets of aircraft flying on real-world routes, computed using an aircraft emission inventory code. Based on correlation criteria, this thesis demonstrates that a new NOₓ regulation defined as a weighted sum of emissions indices at several intermediate static thrust points is able to better constrain cruise emissions than the current LTO regulation. Additionally, this new regulation will not necessitate a significant change in the emissions certification process. The focus of this thesis was to establish the metric value (the quantity to be measured) within the regulation. The limit levels that are to be set on the metric value remain to be determined in order to comprehensively define the cruise NOₓ regulation.
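The assessment step reduces to a correlation computation; a sketch with invented emission indices, metric weights, and cruise estimates (the real study uses certification data and an inventory code):

    import numpy as np

    rng = np.random.default_rng(2)
    # Hypothetical certification data: NOx emission indices (g per kg of
    # fuel) at three intermediate static thrust points, per engine type.
    ei = rng.uniform(8.0, 30.0, (50, 3))
    weights = np.array([0.2, 0.5, 0.3])   # candidate metric weights (assumed)
    metric = ei @ weights

    # Stand-in for inventory-code estimates of fleet cruise NOx emissions.
    cruise = 0.9 * metric + rng.normal(0.0, 1.0, 50)

    # Pearson correlation used to rank candidate metric systems.
    r = np.corrcoef(metric, cruise)[0, 1]
    print(f"Pearson r = {r:.3f}")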
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Plasma-based CO₂ Conversion for Mars ISRU</title>
<link href="https://hdl.handle.net/1721.1/155377" rel="alternate"/>
<author>
<name>McKinney, Lanie G.</name>
</author>
<id>https://hdl.handle.net/1721.1/155377</id>
<updated>2024-06-28T03:10:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Plasma-based CO₂ Conversion for Mars ISRU
McKinney, Lanie G.
Plasma-based CO₂ conversion is a promising power-to-gas chemical synthesis process for Mars In-Situ Resource Utilization (ISRU). The abundant CO₂ in the Martian atmosphere can be converted into breathable oxygen and fuel for astronauts, enabling safer and more independent Mars missions while reducing launch costs. Nonthermal plasma technologies leverage electron excitation chemistry to achieve kinetic activation and split the stable bonds of CO₂ at modest temperatures and pressures compared to typical thermal conversion processes. Other benefits of plasma-based conversion technologies include compatibility with many feedstock gases, opening up possibilities for synthesizing other important chemicals in situ. Many plasma sources have been explored for CO₂ conversion, and an understanding of the fundamental atomic processes in CO₂ plasmas has led to validated chemical kinetic mechanisms. However, there have been limited parametric studies that directly compare the chemical performance of reactors under varied operating conditions. Understanding the coupled pressure, temperature, and reduced electric-field dependence of the relevant chemical processes will inform the system-level reactor design, including the pumps, heaters, and electronics required. This thesis describes a parametric exploration of a nanosecond repetitively pulsed discharge (NRPD) plasma reactor under different operating conditions to compare reactor performance and elucidate the important kinetic effects. A 0-D chemical kinetic model is developed and described in detail, building upon previous work to ensure the mechanism is appropriate for the defined conditions. A tradespace is constructed in terms of important performance metrics such as conversion, efficiency, and specific energy input. To understand the primary kinetic pathways, a first-order sensitivity analysis is conducted on selected conditions. This work contributes a robust analysis of NRPD reactor performance to extend fundamental plasma studies for the engineering of a competitive technological candidate for Martian ISRU.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spacecraft Orbiting and Uncertainty - Planning Surveillance</title>
<link href="https://hdl.handle.net/1721.1/155368" rel="alternate"/>
<author>
<name>Nikolova, Joana N.</name>
</author>
<id>https://hdl.handle.net/1721.1/155368</id>
<updated>2024-06-28T03:22:07Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Spacecraft Orbiting and Uncertainty - Planning Surveillance
Nikolova, Joana N.
Scheduling of the Space Surveillance Network (SSN) is a crucial operation for the maintenance of safety and operations in Earth’s orbit. However, the capabilities of the SSN are limited and the number of objects being tracked is increasing every year. This work proposes harnessing imitation learning (IL) to develop explainable schedules without the development of subjective functions, learning instead from approved schedules. To that end, a graph structuring of the scheduling situation is proposed that allows learning from expert solutions. Importantly, this proposed framework also removes fragmentation and discretisation requirements within the time and space domains, requirements that are present in other solutions and lower the asymptotic efficiency that can be achieved. However, the models trained in this work did not achieve these goals and showed a very strong competition between choosing the correct pass to observe an object and choosing the correct time within the pass. The trained models did, however, largely maintain performance on data inputs outside of the training distribution. Overall, this thesis provides the necessary background to understand the principles of decision making for developing an SSN schedule, shows the setup of a graph structure as the basis of an IL algorithm for scheduling, and presents the results that have been obtained to this point.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strategic Planning and Deployment of Aerial Assets</title>
<link href="https://hdl.handle.net/1721.1/155367" rel="alternate"/>
<author>
<name>Saravanan, Akila</name>
</author>
<id>https://hdl.handle.net/1721.1/155367</id>
<updated>2024-06-28T03:14:25Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Strategic Planning and Deployment of Aerial Assets
Saravanan, Akila
The rapid deployment of fleets of small, uncrewed aircraft (drones) for tasks like package delivery or search-and-rescue in the immediate aftermath of a natural disaster is among the most vital and common applications of advanced air mobility. Recognizing that successful drone missions depend on pre-established, well-positioned bases and efficient task allocation, this work presents a generalizable model for base positioning and routing in diverse applications. The proposed model prioritizes choosing bases that both maximize operational coverage and enable rapid responses to high-demand areas. Additionally, the framework integrates a vehicle routing component to optimize drone flight paths for efficient task completion in the tactical portion of drone-based operations; this component is the primary focus of this work. In addition to the theoretical formulation, the models are validated through case studies examining post-flooding search-and-rescue in the Iwate prefecture of Japan and package deliveries in the Austin, TX metropolitan area.
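A minimal sketch of the coverage-maximizing base-selection idea (a greedy heuristic with invented demand and coverage sets, not the thesis's optimization model):

    # Greedy maximal-coverage sketch: repeatedly pick the candidate site
    # covering the most still-uncovered demand (all data invented).
    demand = {"d1": 5, "d2": 3, "d3": 8, "d4": 2}
    covers = {"siteA": {"d1", "d2"},
              "siteB": {"d2", "d3"},
              "siteC": {"d3", "d4"}}

    chosen, uncovered = [], set(demand)
    for _ in range(2):   # budget of two bases
        candidates = [s for s in covers if s not in chosen]
        best = max(candidates,
                   key=lambda s: sum(demand[d] for d in covers[s]
                                     if d in uncovered))
        chosen.append(best)
        uncovered -= covers[best]
    print("bases:", chosen, "uncovered:", sorted(uncovered))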
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safe and Efficient Motion Planning in Robotic Manipulation through Formal Methods</title>
<link href="https://hdl.handle.net/1721.1/155366" rel="alternate"/>
<author>
<name>Yu, Mingxin</name>
</author>
<id>https://hdl.handle.net/1721.1/155366</id>
<updated>2024-06-28T03:40:54Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Safe and Efficient Motion Planning in Robotic Manipulation through Formal Methods
Yu, Mingxin
Manipulating rigid-body objects in crowded environments poses significant challenges due to the need for rapid, real-time planning and the assurance of safe operational paths. The challenges stem from the varying shapes of the manipulated objects and the high-dimensional nature of manipulators.

This thesis addresses these issues by developing (1) a mixed-integer linear programming (MILP)-based approach to plan safe paths for rigid-body objects; and (2) a learned control barrier function (CBF) tailored to manipulators with multiple degrees of freedom (DoF), together with an associated framework, CBF-RRT, that enables efficient planning for robotic manipulators. Comprehensive experimental results show that the proposed methods outperform baseline methods, providing tools for improving the safety and efficiency of robotic manipulators in complex environments.
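
As a hedged illustration of the CBF ingredient (a toy single-integrator point robot, far simpler than the multi-DoF manipulator setting of the thesis), the safety filter reduces to a one-constraint QP with a closed-form solution:

    import numpy as np

    def cbf_filter(x, u_des, x_obs, r, alpha=1.0):
        """Minimal-norm safety filter for a single integrator x_dot = u.
        Barrier h(x) = |x - x_obs|^2 - r^2; enforce h_dot + alpha*h >= 0."""
        h = np.dot(x - x_obs, x - x_obs) - r**2
        a = 2.0 * (x - x_obs)                 # gradient of h, so h_dot = a . u
        slack = a @ u_des + alpha * h
        if slack >= 0.0:
            return u_des                      # nominal command already safe
        return u_des - (slack / (a @ a)) * a  # closed-form one-constraint QP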
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low Earth Orbit Stability Analysis Using Monte-Carlo Techniques</title>
<link href="https://hdl.handle.net/1721.1/155365" rel="alternate"/>
<author>
<name>Appel, Grant F.</name>
</author>
<id>https://hdl.handle.net/1721.1/155365</id>
<updated>2024-06-28T04:04:17Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Low Earth Orbit Stability Analysis Using Monte-Carlo Techniques
Appel, Grant F.
Space domain awareness and the issue of space congestion have become critical topics, particularly with the proliferation of private companies launching large LEO constellations (LLCs), or mega-constellations, including SpaceX’s Starlink and Amazon’s Kuiper. This rapid expansion has led to increased concerns about space debris, potential collisions, and the possibility of reaching a critical threshold known as Kessler syndrome. To study and address these challenges, advanced modeling and simulation techniques are essential. There are broadly two methods for modeling and simulation: particle-in-box (PIB) methods and Monte Carlo techniques. Historically, the simulation tools developed have been closed source, as governments and companies want to maintain information security or protect a profit. Recently, however, MIT’s ARCLab introduced MOCAT and MOCAT-MC, open-source toolboxes designed to propagate and model the LEO resident space object (RSO) population. This thesis focuses on MOCAT-MC, MIT’s Orbital Capacity Analysis Toolbox Monte Carlo. MOCAT-MC propagates individual space objects while accounting for various probabilistic factors common to LEO RSOs, including mission failure, collisions, and space weather, while remaining open to new capabilities. Utilizing MOCAT-MC, this thesis presents population and density analyses that reveal exponential growth in object populations, particularly at the higher altitudes of the LEO regime, where Kessler’s critical density is projected to be exceeded. Collision analyses are also performed, highlighting an alarming increase in potential collisions, which affect even active satellites capable of conducting collision avoidance maneuvers (CAMs). Additionally, a brief study of Anti-Satellite (ASAT) test implications reveals that a single ASAT explosion contributes only marginally to debris counts, given the collisions occurring from other sources. This thesis outlines a comprehensive approach to utilizing the MOCAT-MC toolbox and its data outputs, revealing some of its many capabilities for studying the LEO orbital population and its stability. Overall, this research underscores the urgency of space domain awareness and sustainability. By leveraging MOCAT-MC, it provides quantitative insights into LEO object density trends, collision probabilities, and ASAT implications. The findings highlight the escalating risks of space operations and emphasize the need for proactive measures to mitigate space congestion and ensure long-term space sustainability.
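
A minimal Monte Carlo sketch of the debris-growth mechanism described above, with invented rates (nothing here is MOCAT-MC code): pairwise collisions are sampled from a Poisson draw, and each collision injects fragments, so the population grows super-linearly once the feedback kicks in:

    import numpy as np

    rng = np.random.default_rng(1)
    n_objects = 10_000        # hypothetical population of one LEO shell
    p_pair = 1e-8             # invented per-pair collision probability per year
    frags = 1_000             # invented fragments per catastrophic collision

    history = []
    for year in range(50):
        pairs = n_objects * (n_objects - 1) / 2
        n_collisions = rng.poisson(pairs * p_pair)   # Monte Carlo draw
        n_objects += n_collisions * frags
        history.append(n_objects)
    # Kessler-like runaway shows up as super-linear growth in `history`.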
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Elucidating Dual Methylcellulose-and-Oil-Nanoemulsion Thermoresponsive Gelation</title>
<link href="https://hdl.handle.net/1721.1/155361" rel="alternate"/>
<author>
<name>Wojtaszek, Mateusz M.</name>
</author>
<id>https://hdl.handle.net/1721.1/155361</id>
<updated>2024-06-28T03:49:03Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Elucidating Dual Methylcellulose-and-Oil-Nanoemulsion Thermoresponsive Gelation
Wojtaszek, Mateusz M.
The rheological properties of a colloidal gel depend on the microstructure of the gel and the identity of its load-bearing elements. Here we demonstrate a hybrid hydrogel-colloidal gel system composed of a methylcellulose-stabilized oil nanoemulsion. This system has tunable rheology with two distinct dominant load-bearing components, and the oil volume fraction determines which component provides the elasticity of the gel network. At low oil volume fraction, methylcellulose forms a fibrillar gel upon an increase in temperature. As oil volume fraction increases, methylcellulose is sequestered onto the droplet surfaces, decreasing the concentration of methylcellulose available for polymer gel formation and weakening the gel structure. Upon further increase in oil volume fraction, we hypothesize that an oil droplet network becomes the primary load-bearing structure, resulting in marked differences in rheology. This represents a unique system in which two gelation regimes with distinct identities and behaviors are tuned using only the nanoemulsion volume fraction. This behavior is made possible by the fact that the component that stabilizes the nanoemulsion, methylcellulose, is also active in the gel itself. Given the components used, this system has potential applications in pharmaceuticals and food products.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Charting A Course Through Uncharted Terrain: Seeking Opportunities in the U.S. Real Estate Private Debt During Challenging Times</title>
<link href="https://hdl.handle.net/1721.1/155360" rel="alternate"/>
<author>
<name>Wang, Shao Lan</name>
</author>
<id>https://hdl.handle.net/1721.1/155360</id>
<updated>2024-06-28T03:34:50Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Charting A Course Through Uncharted Terrain: Seeking Opportunities in the U.S. Real Estate Private Debt During Challenging Times
Wang, Shao Lan
The current macroeconomic context is distinguished by an upsurge in private credit lending activity. This thesis examines the real estate debt market, primarily within the United States, aiming to identify the principal participants and to ascertain how their investment strategies align. It also scrutinizes how investors are managing and exploiting opportunities during this phase of uncertainty and challenging market conditions. The ultimate goal is to gain insight into the state of the real estate market during this period and to understand the thought processes of investors, offering a perspective on the fundamental reasoning behind real estate debt investment and on why, at this juncture, debt has become such a focal point of discussion in the real estate sector.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recycling of Rare Earth Magnets with Sulfur Based Chemistries and High Temperature Processing</title>
<link href="https://hdl.handle.net/1721.1/155352" rel="alternate"/>
<author>
<name>Adams, Zachary Kenneth</name>
</author>
<id>https://hdl.handle.net/1721.1/155352</id>
<updated>2024-06-28T03:29:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Recycling of Rare Earth Magnets with Sulfur Based Chemistries and High Temperature Processing
Adams, Zachary Kenneth
Rare-earth (RE) iron-boron permanent magnets are among the strongest permanent magnets available and power essential technologies, from wind turbines to hard disk drives. The production of the rare earth metal for these magnets currently involves significant greenhouse gas emissions and other environmental impacts. Additionally, the production of these metals is geographically concentrated, as over 95% of rare earth metals are produced in China, which leads to supply-chain concerns and price fluctuations. Recycling of the rare earth elements is imperative to decrease net emissions and to make RE-based magnets sustainable, but current magnet recycling is limited. In this work, sulfidation is investigated in the context of RE separation and recovery from RE-based magnets. Evidence of rare-earth separation and selectivity is presented, with insights into the underlying sulfidation mechanism involved in actual magnet processing.
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Identification CFD-Based Reduced-Order Modeling for Hypersonic Vehicles</title>
<link href="https://hdl.handle.net/1721.1/155348" rel="alternate"/>
<author>
<name>Middleton, Kendra Lynn</name>
</author>
<id>https://hdl.handle.net/1721.1/155348</id>
<updated>2024-06-28T03:28:56Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">System Identification CFD-Based Reduced-Order Modeling for Hypersonic Vehicles
Middleton, Kendra Lynn
System identification (SID) techniques were utilized to assemble reduced-order models for estimating the aerodynamic coefficients of a hypersonic vehicle subjected to flight conditions of interest. The reduced-order models combined the accuracy of high-fidelity hybrid Reynolds-Averaged Navier-Stokes (RANS)/Large Eddy Simulation (LES) computational fluid dynamics (CFD) models with the computing speed of low-fidelity inviscid CFD models, efficiently capturing the effects of complex physics in a timely manner. The vehicle geometry used for this study was the High-Speed Army Reference Vehicle (HARV), which was simulated in training-maneuver motions solved by HPCMP CREATE™-AV Kestrel, the high-fidelity CFD software. The resulting data were assessed for the information they supplied to the SID techniques, which were also performed in Kestrel as a post-processing operation. Many SID models with varying structures were built from the training-maneuver data. The models were validated using a variety of dynamic maneuvers and static configurations in an effort to understand the limits and capabilities of hypersonic SID modeling. The results suggested that insufficient low-rate information content in the training maneuver was the factor that most hampered SID model prediction accuracy. A single-trajectory analysis revealed that simulation results using aerodynamic databases from SID model predictions and from low-fidelity CFD model predictions did not drastically differ. Once constructed, the SID model demonstrated the capacity to predict much more complex databases in significantly less time. This emphasized the substantial benefits of utilizing SID reduced-order models in the design phase of hypersonic vehicles.
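
A minimal equation-error SID sketch on synthetic data (a hypothetical normal-force model structure; nothing here is from the Kestrel workflow): fit a linear-in-parameters aerodynamic model by least squares:

    import numpy as np

    # Synthetic "maneuver" data for a hypothetical normal-force model
    # C_N = c0 + c1*alpha + c2*q + c3*alpha^2, recovered by least squares.
    rng = np.random.default_rng(2)
    alpha = rng.uniform(-0.2, 0.2, 500)    # angle of attack, rad
    q = rng.uniform(-0.05, 0.05, 500)      # pitch rate, rad/s
    X = np.column_stack([np.ones_like(alpha), alpha, q, alpha**2])
    C_N = X @ np.array([0.1, 4.2, 1.5, 6.0]) + rng.normal(0, 0.01, 500)

    theta, *_ = np.linalg.lstsq(X, C_N, rcond=None)   # ~[0.1, 4.2, 1.5, 6.0]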
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Safe Nonlinear Control Under Control Constraints via Reachability, Optimal Control and Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/155344" rel="alternate"/>
<author>
<name>So, Oswin</name>
</author>
<id>https://hdl.handle.net/1721.1/155344</id>
<updated>2024-06-28T03:04:13Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Safe Nonlinear Control Under Control Constraints via Reachability, Optimal Control and Reinforcement Learning
So, Oswin
Autonomous robots in the real world have nonlinear dynamics with actuators that are subject to constraints. The combination of the two complicates the task of designing stabilizing controllers that can guarantee safety, which we denote the stabilize-avoid problem. Existing control-based techniques can provide safety and stability guarantees, but under the assumption of unbounded control inputs. Learning-based techniques, on the other hand, can handle control constraints but are often unable to correctly trade off between safety and stability.

In this thesis, we take a step toward synthesizing controllers with improved safety and stability for high-dimensional nonlinear systems with control constraints by combining techniques from reachability, optimal control, and reinforcement learning. We first propose a novel approach to solving constrained optimal control problems with deep reinforcement learning, borrowing techniques from traditional constrained optimization and enabling the solution of stabilize-avoid problems for high-dimensional nonlinear systems with control constraints. Next, we present an alternate method of solving the stabilize-avoid problem using control barrier functions, in which we improve the learning of control barrier functions for nonlinear systems with control constraints by drawing on connections between reachability and deep reinforcement learning.

We validate our proposed methods on a variety of benchmark tasks. Our experiments demonstrate the advantage of our methods over existing techniques in terms of improved safety rates and larger regions of attraction, especially in the case of high-dimensional systems.
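
As a generic, hedged sketch of the constrained-RL idea (a plain Lagrangian-penalty scaffold, deliberately simpler than the reachability-based methods of the thesis): maximize return while an adaptive multiplier drives the expected safety cost below a limit:

    # Plain Lagrangian scaffold for a constrained RL objective; settings invented.
    lam, lr_lam, cost_limit = 0.0, 0.05, 0.0

    def penalized_objective(ep_return, ep_safety_cost, lam):
        """Surrogate the policy maximizes in place of the raw return."""
        return ep_return - lam * ep_safety_cost

    def update_multiplier(lam, ep_safety_cost):
        """Dual ascent: grow lam while the safety constraint is violated."""
        return max(0.0, lam + lr_lam * (ep_safety_cost - cost_limit))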
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Systems Theoretic Process Analysis as a Practical Tool for Comprehensive Flight Test Hazard Identification</title>
<link href="https://hdl.handle.net/1721.1/155341" rel="alternate"/>
<author>
<name>Eisen, Noam D.</name>
</author>
<id>https://hdl.handle.net/1721.1/155341</id>
<updated>2024-06-28T03:04:15Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Systems Theoretic Process Analysis as a Practical Tool for Comprehensive Flight Test Hazard Identification
Eisen, Noam D.
Flight test is an endeavor inherently imbued with risk. In order to conduct flight testing safely, hazards of consequence must be identified and mitigated in advance of testing. While adequate practices are widely in place for mitigating hazards that have been identified, the practices generally used to reveal and identify hazards in the first place rely on brainstorming and other fragmentary methods that can leave critical gaps in safety preparedness. Mainstream flight test risk management techniques such as Test Hazard Analysis (THA) rely on expert brainstorming for the identification of hazards, and lean heavily on experience and lessons learned from subjectively ‘similar’ past test programs. Frequently, for a given new program, the THA report from a past program is simply duplicated in full, with edits then made to accommodate perceived differences. Such processes have left critical gaps in hazard identification coverage even where ‘similar’ technologies and test methods are concerned; moreover, as airborne technologies evolve, with increasingly complex system interactions, software, and human/machine interplay, the gaps in hazard coverage are becoming ever more pronounced, leaving legacy risk management techniques unable to support a level of safety that meets industry needs. With each hazard in a THA documented separately, and mitigations addressed individually for each hazard, no underlying framework is available to unify hazard identification or analysis across functionalities or disciplines. Safety reviews and preflight briefings based on THA become lengthy and disjointed, as well as potentially incomplete. Systems Theoretic Process Analysis (STPA) is a forward-looking safety analysis methodology grounded in systems theory. Based on the System-Theoretic Accident Model and Processes (STAMP), STPA is able to produce meaningful results even where other methodologies struggle, such as in systems involving software, human interactions, or other forms of complexity such as exist in aviation and flight test. This thesis proposes applying STPA to the problem of hazard identification and management in flight test, focusing specifically on piloted (‘manned’) aircraft. The state of the art in THA is examined, and STPA and THA are compared in their frameworks, constructs, and work products in the context of flight test. STPA is applied to an example flight test campaign to illustrate its use in test hazard identification. A final section describes more broadly how STPA could be incorporated into flight test organizations now, and in a future where STPA is more widely used by design and engineering departments as well.
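
As a small structural illustration of how STPA systematizes what brainstorming leaves ad hoc (the control actions below are invented examples, not from the thesis), unsafe control actions are enumerated exhaustively from STPA’s standard guidewords:

    # The four standard STPA guidewords for unsafe control actions (UCAs).
    GUIDEWORDS = (
        "not providing causes hazard",
        "providing causes hazard",
        "provided too early, too late, or out of order",
        "stopped too soon or applied too long",
    )

    def enumerate_ucas(control_actions):
        """Yield every (action, guideword) pair for analyst review."""
        for action in control_actions:
            for guideword in GUIDEWORDS:
                yield action, guideword

    for action, gw in enumerate_ucas(["extend speed brake", "abort test point"]):
        print(f"UCA candidate: {action} / {gw}")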
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Production of Bio-based Lactone Monomers for Intrinsically Recyclable Plastics</title>
<link href="https://hdl.handle.net/1721.1/155329" rel="alternate"/>
<author>
<name>Baston, Lucas A.</name>
</author>
<id>https://hdl.handle.net/1721.1/155329</id>
<updated>2024-06-28T03:50:04Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Production of Bio-based Lactone Monomers for Intrinsically Recyclable Plastics
Baston, Lucas A.
The development of intrinsically recyclable plastics is crucial to halting the accumulation of waste plastics in the environment. While great strides have been made in the design of novel polymers that exhibit desirable qualities and degrade back to their respective monomers under mild conditions, the development of scalable syntheses of the monomers for these plastics lags behind. This work aims to develop methods for synthesizing these monomers using heterogeneous catalysts to allow for scale-up. First, we used a high-throughput computational method to screen the binding energies of key reaction species across more than 200 zeolite frameworks to identify potential catalysts that would selectively catalyze our probe reaction, the lactonization of methyl lactate to lactide. From these computations, we identified a titanium-containing zeolite with the MEL topology as a promising catalyst for this reaction in the gas phase. Continuous-flow kinetic studies revealed that Ti-MEL showed 40% higher selectivity to the lactide product, at over twice the conversion, compared with Ti-BEA and Ti-MFI. Second, we show a potential pathway for the production of α-cyclohexyl-δ-valerolactone (CVL) starting from formaldehyde and δ-valerolactone (DVL). We developed a continuous gas-phase reactor using alkaline earth oxides supported on silica as catalysts for an aldol condensation reaction. CaO and BaO showed 90% and 83% selectivity, respectively, to α-methylene-δ-valerolactone (MVL) at 60% DVL conversion. MVL was then functionalized with 1,3-butadiene in a Diels-Alder addition to form the unsaturated form of our desired CVL monomer (CeVL). This reaction was catalyzed over Lewis-acidic Sn-BEA catalysts, with selectivities reaching 90% at a mild temperature of 55 °C. Finally, CeVL was hydrogenated to CVL over commercially available palladium-on-carbon catalysts under flowing hydrogen.
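
A toy version of the screening step (all binding-energy values and the selection window below are invented, not the thesis’s computed descriptors): rank frameworks by a descriptor and keep those inside a target window:

    # Invented binding-energy descriptors (eV) and selection window.
    binding_energies = {"MEL": -0.92, "BEA": -0.61, "MFI": -0.55, "FAU": -1.40}
    window = (-1.0, -0.8)   # hypothetical "binds, but not too strongly" range

    hits = sorted(name for name, e in binding_energies.items()
                  if window[0] <= e <= window[1])
    print(hits)   # ['MEL'] with these made-up numbers, echoing the Ti-MEL lead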
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Surface Curvature and Roughness Effects on Görtler Vortex Development in Hypersonic Flow</title>
<link href="https://hdl.handle.net/1721.1/155316" rel="alternate"/>
<author>
<name>Smith, Shannon C.</name>
</author>
<id>https://hdl.handle.net/1721.1/155316</id>
<updated>2024-06-28T03:52:42Z</updated>
<published>2024-05-01T00:00:00Z</published>
<summary type="text">Surface Curvature and Roughness Effects on Görtler Vortex Development in Hypersonic Flow
Smith, Shannon C.
This work presents a computational and experimental investigation of surface roughness and concave curvature as control parameters on the development of Görtler vortices in hypersonic flow. Three-dimensional large eddy simulation (LES) was performed for two curvature cases using US3D, an unstructured-grid finite volume computational fluid dynamics (CFD) solver. Experiments were performed on two curvature cases and three roughness element shapes at the University of Texas at San Antonio (UTSA) Mach 7 Ludwieg tube wind tunnel facility. The goal of these studies was to examine how variations in surface roughness and curvature affect the downstream development and transition characteristics of the hypersonic boundary layer formed over concave models. It also serves to extend previous work on the effect of shaped surface roughness on the Görtler instability to the hypersonic regime. The included results demonstrate key features of the relationship between roughness effects, vortex development, and boundary layer transition in hypersonic flows dominated by the Görtler instability that can inform both engineering design and future research.
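
For orientation, the similarity parameter governing this instability is the Görtler number; a quick computation with illustrative values (not the UTSA experimental or LES conditions):

    import math

    # Illustrative values only.
    U_e   = 1000.0    # boundary-layer edge velocity, m/s
    theta = 2.0e-4    # momentum thickness, m
    nu    = 5.0e-5    # kinematic viscosity, m^2/s
    R     = 1.0       # concave-wall radius of curvature, m

    G = (U_e * theta / nu) * math.sqrt(theta / R)   # Gortler number
    print(f"G = {G:.1f}")   # larger G, stronger centrifugal instability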
</summary>
<dc:date>2024-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of factors affecting the cooling load for air conditioning</title>
<link href="https://hdl.handle.net/1721.1/155259" rel="alternate"/>
<author>
<name>Bates, Maurice Edward.</name>
</author>
<id>https://hdl.handle.net/1721.1/155259</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1935-01-01T00:00:00Z</published>
<summary type="text">A study of factors affecting the cooling load for air conditioning
Bates, Maurice Edward.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1935; Includes bibliographical references (leaves xi-xiii).
</summary>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High pressure rectification</title>
<link href="https://hdl.handle.net/1721.1/155258" rel="alternate"/>
<author>
<name>Sundstrom, Warren E.
            (Warren Eric)</name>
</author>
<id>https://hdl.handle.net/1721.1/155258</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">High pressure rectification
Sundstrom, Warren E.
            (Warren Eric)
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 32).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A portable analog-to-digital converter for the recording of sound surveys</title>
<link href="https://hdl.handle.net/1721.1/155249" rel="alternate"/>
<author>
<name>Bell, Chester Gordon.</name>
</author>
<id>https://hdl.handle.net/1721.1/155249</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1957-01-01T00:00:00Z</published>
<summary type="text">A portable analog-to-digital converter for the recording of sound surveys
Bell, Chester Gordon.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1957; Bibliography: leaves 59-60.
</summary>
<dc:date>1957-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A design for steel mitering lock gate</title>
<link href="https://hdl.handle.net/1721.1/155247" rel="alternate"/>
<author>
<name>Van, Yung Tsun.</name>
</author>
<id>https://hdl.handle.net/1721.1/155247</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1914-01-01T00:00:00Z</published>
<summary type="text">A design for steel mitering lock gate
Van, Yung Tsun.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1914; Includes bibliographical references.
</summary>
<dc:date>1914-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heat transfer coefficients in a falling film condenser</title>
<link href="https://hdl.handle.net/1721.1/155242" rel="alternate"/>
<author>
<name>Bays, George Samuel.</name>
</author>
<author>
<name>Blenderman, Louis Morrall.</name>
</author>
<id>https://hdl.handle.net/1721.1/155242</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1935-01-01T00:00:00Z</published>
<summary type="text">Heat transfer coefficients in a falling film condenser
Bays, George Samuel.; Blenderman, Louis Morrall.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1935; Appendix contains numerous pamphlets.; Includes bibliographical references (leaf 97).
</summary>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A criminal courts, prison and hospital building</title>
<link href="https://hdl.handle.net/1721.1/155240" rel="alternate"/>
<author>
<name>Bartos, Armand Philip.</name>
</author>
<id>https://hdl.handle.net/1721.1/155240</id>
<updated>2024-06-12T06:07:43Z</updated>
<published>1935-01-01T00:00:00Z</published>
<summary type="text">A criminal courts, prison and hospital building
Bartos, Armand Philip.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1935
</summary>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wave length measurements in the spectrum of the neodymium arc</title>
<link href="https://hdl.handle.net/1721.1/155239" rel="alternate"/>
<author>
<name>Bartlett, William Walker,
            1912-</name>
</author>
<id>https://hdl.handle.net/1721.1/155239</id>
<updated>2025-10-30T17:03:41Z</updated>
<published>1935-01-01T00:00:00Z</published>
<summary type="text">Wave length measurements in the spectrum of the neodymium arc
Bartlett, William Walker,
            1912-
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1935
</summary>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spectrographic analysis of grain boundary segregates in cast monel metal</title>
<link href="https://hdl.handle.net/1721.1/155238" rel="alternate"/>
<author>
<name>Barclay, John A.</name>
</author>
<id>https://hdl.handle.net/1721.1/155238</id>
<updated>2025-10-30T17:03:41Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Spectrographic analysis of grain boundary segregates in cast monel metal
Barclay, John A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1936; Includes bibliographical references (leaf 43).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Perturbation theory in quantum mechanics</title>
<link href="https://hdl.handle.net/1721.1/155231" rel="alternate"/>
<author>
<name>Haas, Violet B.,
            1921-</name>
</author>
<id>https://hdl.handle.net/1721.1/155231</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1949-01-01T00:00:00Z</published>
<summary type="text">Perturbation theory in quantum mechanics
Haas, Violet B.,
            1921-
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1949; Bibliography: leaf [35].
</summary>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distribution and behavior of trace metals in the subterranean estuary of an Arctic coastal lagoon</title>
<link href="https://hdl.handle.net/1721.1/155067" rel="alternate"/>
<author>
<name>Schaal, Isabel Vicenta</name>
</author>
<id>https://hdl.handle.net/1721.1/155067</id>
<updated>2024-05-25T03:36:13Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Distribution and behavior of trace metals in the subterranean estuary of an Arctic coastal lagoon
Schaal, Isabel Vicenta
Subterranean estuaries (STEs) can be an important location for biogeochemical reactions that may alter the concentrations of chemical constituents of groundwater. With warming in the Arctic and the subsequent permafrost thaw, the relative importance of submarine groundwater discharge (SGD) to ocean chemical budgets will grow. In this study, we examined the distribution of select trace metals (Fe, Mn, V, U, Mo, and Ba) in the STE, lagoon surface waters, and coastal sediments of Simpson Lagoon along the Beaufort Shelf of Alaska. This location is unique among study sites in that the STE consists of organic-rich sediments. Samples were collected over two years and across seasonal water conditions, including the melting, open-water, and freeze-up periods. Fe, Mn, V, and Ba mainly exhibited non-conservative additions within the estuary, with Fe concentrations being some of the highest reported among groundwater studies. U exhibited both non-conservative removal and addition in the estuary, and Mo exhibited mainly removal. In the lagoon, non-conservative addition of U allowed for the calculation of an SGD flux. This flux, along with a Ra-derived flux, was used to estimate metal fluxes into the lagoon. Fluxes for all metals were similar to or greater than river flux estimates in all months except June, when SGD was likely nonexistent. These fluxes can be used to assess the impact of SGD on the coastal Arctic; however, for reactive metals, processes in the lagoon may continue to alter metal concentrations before mixing with the greater Arctic Ocean. This study provides some of the first estimates of trace metal concentrations and fluxes within Arctic subterranean estuaries and demonstrates the importance of considering SGD when assessing metal inputs to the coastal Arctic.
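
The flux estimate reduces to simple arithmetic: an SGD water flux multiplied by the groundwater endmember concentration. A back-of-envelope sketch with invented placeholder numbers (the thesis derives the water flux from U and Ra):

    sgd_water_flux = 2.0e6    # m^3/day of submarine groundwater discharge
    fe_endmember = 50.0       # umol/L dissolved Fe in the STE endmember
    fe_flux = sgd_water_flux * 1.0e3 * fe_endmember   # umol/day (1 m^3 = 1e3 L)
    print(f"Fe flux ~ {fe_flux:.2e} umol/day")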
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bleeding Details</title>
<link href="https://hdl.handle.net/1721.1/155066" rel="alternate"/>
<author>
<name>Mohan, Sahil</name>
</author>
<id>https://hdl.handle.net/1721.1/155066</id>
<updated>2024-05-25T03:28:27Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Bleeding Details
Mohan, Sahil
This thesis begins at my Nani Ji’s house. Movies depicting Hindu mythology played in the background of our family gatherings: movies where Hanuman would grow to the size of a mountain or Shiva would morph between genders. These shifts between scale, gender, and material affirmed the queerness I had yet to find words for. They taught me that boundaries expand and contract. Everything was interconnected. I could be a mountain.  This preoccupation with Hindu Gods led me to their home: Mount Meru. Hindu, Jain, and Buddhist cosmologies consider this sacred five-peaked mountain to be the center of all physical, metaphysical, and spiritual universes, among other centers. Religious anecdotes imply that the Hindu-Kush Himalayan Ice Sheet is this focal point. And the Ice Sheet sustains a complex history: a history of water in its many forms, a history of religious diversity and spiritual importance, a history of war and boundaries.  Boundaries drawn like a line.  And so too, architecture continues to occupy itself with the line. It considers and abstracts an ideal future by drawing precise lines that separate buildings from environment. This abstraction may be necessary for the field, but it leads to a fixation on strategies centering segregation, precision, and predictability. Drawings have become a passive instrument of information. They imply an impossible neutrality which produces objects that endure, rather than bodies that engage their contexts.  But a world assembled by determined moments and perfectly fixed parts could harbor no life. Nothing could move or become. What if the methods of architecture reflected the flow of water or the fluidity of human embodiment? This thesis is as much a question as it is an answer. Can architecture cross and blur boundaries and binaries: queer and heteronormative, land and water, human and nature? When and how would it all dissolve? What happens to architecture when the details bleed?
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Stakeholder-informed Evaluation of Global Climate Temperature Response Functions</title>
<link href="https://hdl.handle.net/1721.1/155065" rel="alternate"/>
<author>
<name>Womack, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/155065</id>
<updated>2024-05-25T03:13:13Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Development and Stakeholder-informed Evaluation of Global Climate Temperature Response Functions
Womack, Christopher
Modern climate models allow for accurate simulation over a range of future climate scenarios. However, there exists a significant gap in speed, accuracy, and overall intuitiveness between the state-of-the-art and more generally accessible tools, especially tools used for climate education. Climate emulators offer a potential way to close this gap, and a significant body of work has shown their efficacy as a relatively lightweight method to reproduce the results of full-scale Earth System Models. In this thesis, I demonstrate a novel methodology for climate emulation based on the response of the climate system to effective radiative forcing (ERF). While previous work has demonstrated the efficacy of impulse response functions as a tool for climate emulation, these methods are critically limited: they are largely non-generalizable to new scenarios and inaccessible to more general audiences. To remedy this, we present a general framework for integrating stakeholder analysis into the model development process to ensure all key stakeholder needs are identified and met at each step of development. This framework is then applied in the context of climate emulator development, showcasing how this integrated stakeholder analysis is able to increase emulator salience, credibility, and legitimacy for our target audience. We present results from an application to near-surface air temperature based on ERF and temperature data taken from experiments in the sixth phase of the Coupled Model Intercomparison Project (CMIP6). We evaluate the emulator using additional experiments from the CMIP6 archive, including the Shared Socioeconomic Pathways (SSPs), demonstrating accurate emulation of global mean and spatially resolved temperature change with respect to the outputs of the CMIP6 ensemble. Global absolute error in predicted temperature averages 0.25 °C, with a bias ranging from -0.14 to -0.04 °C. In addition, the comprehensive stakeholder analysis performed as part of the development process affords the emulator ease of use and interpretability in its outputs while meeting all key stakeholder requirements. While the emulator is unable to capture state-dependent climate feedbacks, such as the non-linear effects of Arctic sea ice melt in high-warming scenarios, our results show that it generalizes to any scenario independent of the specific forcings present.
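
A minimal sketch of the impulse-response idea (a two-timescale kernel with invented parameters, not the thesis’s fitted emulator): the temperature anomaly is the convolution of the ERF time series with the response kernel:

    import numpy as np

    q = np.array([0.33, 0.41])   # K per (W m^-2) for a fast and a slow mode (invented)
    d = np.array([4.0, 280.0])   # e-folding timescales in years (invented)

    t = np.arange(200)
    kernel = ((q / d)[:, None] * np.exp(-t[None, :] / d[:, None])).sum(axis=0)
    erf = np.full(200, 3.7)      # hypothetical abrupt ~2xCO2 forcing, W m^-2
    T = np.convolve(erf, kernel)[:200]   # emulated global-mean warming, K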
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Reinforcement Learning Algorithms for Nuclear Power Plant Fuel Optimization</title>
<link href="https://hdl.handle.net/1721.1/155061" rel="alternate"/>
<author>
<name>Seurin, Paul R.M.</name>
</author>
<id>https://hdl.handle.net/1721.1/155061</id>
<updated>2024-05-25T03:01:04Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Assessment of Reinforcement Learning Algorithms for Nuclear Power Plant Fuel Optimization
Seurin, Paul R.M.
The nuclear fuel loading pattern optimization problem belongs to the class of large-scale combinatorial optimization and has been studied since the dawn of the commercial nuclear energy industry. It is also characterized by multiple objectives and constraints, which makes it impossible to solve explicitly. Stochastic optimization methodologies, including Genetic Algorithms and Simulated Annealing, are used by different nuclear utilities and vendors to perform fuel cycle reload design. Nevertheless, hand-designed solutions continue to be the prevalent method in the industry. To improve on state-of-the-art core reload patterns, we aim to create a method that is as scalable as possible while meeting the designer’s goals for performance and safety. To help with this task, deep Reinforcement Learning (RL), in particular Proximal Policy Optimization (PPO), is leveraged. RL has recently experienced a strong impetus from its successes in games, sometimes even reaching “super-human” performance. This thesis presents a first-of-a-kind approach to utilizing deep RL to solve the loading pattern problem, which could be leveraged for any engineering design optimization with an integer or combinatorial input structure. This work is also, to our knowledge, the first to propose a study of the behavior of several hyper-parameters that influence the RL algorithm, via a multi-measure approach aided by statistical tests. To demonstrate its superiority over industry-preferred computational methods, we compared its performance against the most widely adopted legacy Stochastic Optimization (SO)-based approaches in the literature and the industry, namely Parallel Simulated Annealing with Mixing of States (PSA), Genetic Algorithms (GA), and a novel, first-of-a-kind parallel Tabu Search (TS) we developed for this purpose. To enable this, a full software stack was developed from scratch to apply both RL and SO-based optimization with SIMULATE3 and to visualize the results. The algorithm is highly dependent on multiple factors, such as the shape of the objective function derived for the core design, which behaves as a fudge factor affecting the stability of learning, but also on an exploration/exploitation trade-off that manifests through parameters such as the number of loading patterns seen by the agents per episode, the number of samples collected before a policy update (n_steps), and an entropy factor (ent_coef) that increases the randomness of the policy during training. We found that RL must be applied similarly to a Gaussian Process in which the acquisition function is replaced by a parametrized policy: in essence, a policy generates solutions, while a critic learns and evaluates the quality of these solutions. Then, once an initial set of hyper-parameters is found, reducing n_steps and ent_coef until no more learning is observed or instabilities occur yields the highest sample efficiency, robustly and stably. Applying this approach resulted in an average economic benefit of $540,000 and $650,000 per year per plant for a 1000 MWe and a 1200 MWe Nuclear Power Plant, respectively. Extending this approach to eleven classical benchmarks, we demonstrated that the methodology developed in this work is problem-agnostic and can be seamlessly leveraged to use RL as an optimization tool for other problems with an integer or combinatorial input space.
Although we did not demonstrate it on the nuclear power plant fuel optimization problem, the initialization of the state at the beginning of an episode was also investigated with the benchmarks. We established that initializing the episode with the state of the best solution found so far may be more suitable for problems with complicated reward functions, which is the case for our problem and aligns with the way core designers operate, iterating on the best solution found. We suggest, however, comparing this against initializing with random state instances on a case-by-case basis, hence we have not included this observation as an essential element of the approach. We also showed that, by intrinsically learning which solution to generate next while marching down the objective space (in contrast to SO-based methods, which do so randomly), RL yielded an algorithm that systematically found solutions of greater quality, and found them faster, than legacy approaches. This opens the curtain on a new optimization paradigm that could yield significant contributions in engineering fields beyond loading pattern optimization, especially when an expensive physics solver is required. Additional key observations include: (1) RL algorithms cannot be applied without physics-based intuition provided during the search; this intuition can be built into the construction of the action space (e.g., through pre-defined templates) and the reward signal. (2) Defining the frame of the optimization (e.g., here, the necessity of obtaining results within a day), the shape of the reward (e.g., magnitude and curvature), and understanding the degree of exploration/exploitation the problem needs all influence the hyper-parameter values chosen. (3) RL algorithms are highly sensitive to these hyper-parameters, but there is an approach (presented here) for gaining sample efficiency by playing with the exploration/exploitation trade-offs. (4) Because the ultimate aim is improving the economics of Nuclear Power Plants, utilizing the Levelized Cost of Electricity (LCOE) to rigorously assess the true economic performance of the different algorithm configurations was pivotal for measuring the real importance of hyper-parameter tuning and the superiority of RL over legacy approaches. Overall, the methodology developed in this research supports four important new capabilities for core designers: (1) accelerating the design of new reactors by proposing efficient solutions within a reasonable amount of time; (2) ensuring the feasibility and quality of the resulting design, limiting the overhead time allocated to re-design; (3) providing a new set of computational methodologies, more robust and stable than classical SO-based ones, to deliver higher economic gains for the existing fleet of operating reactors; and (4) offering a tool that could be leveraged in the future to gain managerial insights about strategies for loading pattern optimization beyond expert know-how. Keywords: Fuel loading pattern, Optimization, Reinforcement Learning, Proximal Policy Optimization
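
For concreteness, the exploration/exploitation knobs discussed above map onto standard PPO hyper-parameters. A hedged sketch assuming Stable-Baselines3 and Gymnasium, with CartPole standing in for the non-public SIMULATE3-coupled core-reload environment:

    import gymnasium as gym
    from stable_baselines3 import PPO

    # CartPole stands in for the SIMULATE3-coupled environment, which is not public.
    env = gym.make("CartPole-v1")

    # n_steps: samples collected before each policy update; ent_coef: entropy bonus
    # controlling policy randomness, the two knobs highlighted in the abstract.
    model = PPO("MlpPolicy", env, n_steps=256, ent_coef=0.01, verbose=0)
    model.learn(total_timesteps=20_000)
    # Per the recipe above: shrink n_steps and ent_coef until learning degrades or
    # turns unstable, then back off; this tends to maximize sample efficiency.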
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Cognitive Reflection from Digital Fingerprints</title>
<link href="https://hdl.handle.net/1721.1/155059" rel="alternate"/>
<author>
<name>Jimenez, An</name>
</author>
<id>https://hdl.handle.net/1721.1/155059</id>
<updated>2024-05-25T03:35:20Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Predicting Cognitive Reflection from Digital Fingerprints
Jimenez, An
While social media is beneficial in facilitating social connections and spreading knowledge on a large scale, its negative impacts, in particular the propagation of misinformation through networks and the emergence of echo chambers, are consequential and dangerous, inducing a more divergent rather than cohesive society. What cognitive mechanisms are at play when users decide what to share and whom to follow on social media? A recent study provides evidence that users with higher Cognitive Reflection Test (CRT) scores, a popular measure of reflective thinking, are more discerning in their Twitter behavior (Mosleh et al., 2021). While previous research sheds light on this relationship between cognitive reflection and Twitter behavior, there is an opportunity to generalize these correlations to larger populations and across different social media platforms by building a computational model that predicts cognitive reflection from social media activity, which is the focus of my project. Applying machine learning techniques to the dataset used in Mosleh’s study, I created a model that predicts CRT scores from Twitter features such as Tweet content and accounts followed (followees), and also determined which features and combinations of features are most predictive of cognitive reflection. Correlations between predicted and actual CRT scores are strongest when predicting with information related to followees (r = 0.25) and followee bios (r = 0.24). Combining followee features and applying different regression models improves prediction accuracy (r = 0.29). These conclusions help form a more complete picture of how cognitive reflection relates to social media activity, which has important implications for how we can encourage more intentional social media use and, ultimately, reconnect divisive populations online.
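
A sketch of the prediction pipeline under stated assumptions (entirely synthetic stand-ins for the followee features and CRT scores, not the study’s data): ridge regression evaluated by Pearson correlation, as in the text:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    # Synthetic stand-ins: 800 users, 50 followee features, CRT scores that
    # depend on a handful of features plus noise.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(800, 50))
    y = X[:, :5].sum(axis=1) + rng.normal(0, 2, size=800)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    pred = Ridge(alpha=10.0).fit(X_tr, y_tr).predict(X_te)
    print(f"Pearson r = {np.corrcoef(pred, y_te)[0, 1]:.2f}")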
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Process Optimization and Cost Analysis of Electrochemical Micromachining for Volume Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/155058" rel="alternate"/>
<author>
<name>Li, Mingyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/155058</id>
<updated>2024-05-25T03:52:27Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Process Optimization and Cost Analysis of Electrochemical&#13;
Micromachining for Volume Manufacturing
Li, Mingyuan
Microfluidic devices have found numerous applications in the medical device, pharmaceutical, and healthcare industries, resulting in an increasing demand for different types of microfluidic devices in various designs and materials, including metals such as stainless steel and titanium. Electrochemical micromachining (ECMM) is a powerful method for manufacturing small channels (~0.1mm) on metal substrates which can be later made into metal microfluidic devices, but so far, it has only been studied in benchtop experiments. Scaling up this process to volume manufacturing is relatively unexplored. In this study, we examine the financial and performance benefits of ECMM in an industrial setting for manufacturing microfluidic devices. We conduct cost analysis and performance comparisons of ECMM and micro-milling, an alternative technology for making micro channels. Our findings demonstrate that channels manufactured using ECMM have less variation in the total volume of material removed when compared to micro-milling. However, the cost of ECMM is currently around 50% higher than micro-milling for the fluidic device analyzed here. By making a few simple design changes and optimizing the ECMM process, we will be able to achieve a &gt;20% cost saving compared to micro-milling. The second part of our study focuses on optimizing the ECMM process in terms of cycle time. The bottleneck for the entire process is the time for photoresist removal. By changing the solvent, agitation method, and hard baking time, we reduce the stripping time from hours or even days to just ~60 minutes, with a standard deviation of ~2.7 minutes, drastically reducing the mean and variation. Furthermore, our investigation finds a correlation between surface roughness and stripping time, which should be further controlled in the manufacturing process in the future.
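
The cost comparison boils down to machine time plus consumables per part; a simplified sketch with invented placeholder rates (not the thesis data) shows how cutting the stripping bottleneck can move ECMM below micro-milling:

    def cost_per_part(machine_rate_per_hr, cycle_time_min, consumables):
        """Simple per-part cost: machine time plus consumables."""
        return machine_rate_per_hr * cycle_time_min / 60.0 + consumables

    # Invented placeholder rates and times.
    ecmm_before = cost_per_part(40.0, 8 * 60, 5.0)   # hours-long strip dominates
    ecmm_after  = cost_per_part(40.0, 90, 5.0)       # strip reduced to ~60 min
    milling     = cost_per_part(60.0, 120, 2.0)
    print(ecmm_before, ecmm_after, milling)          # 325.0 65.0 122.0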
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A More Holistic Analysis of Privacy Risks in Transcriptomic Datasets</title>
<link href="https://hdl.handle.net/1721.1/155055" rel="alternate"/>
<author>
<name>Sadhuka, Shuvom</name>
</author>
<id>https://hdl.handle.net/1721.1/155055</id>
<updated>2024-05-25T03:28:10Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A More Holistic Analysis of Privacy Risks in Transcriptomic Datasets
Sadhuka, Shuvom
Gene expression data provides molecular insights into the functional impact of genetic variation, for example through expression quantitative trait loci (eQTL). With an improving understanding of the association between genotypes and gene expression comes a greater concern that gene expression profiles could be matched to genotype profiles of the same individuals in another dataset, known as a linking attack. Prior work demonstrating such a risk could analyze only a fraction of eQTLs that are independent of each other due to restrictive model assumptions, leaving the full extent of this risk incompletely understood. To address this challenge, we introduce discriminative sequence model (DSM), a novel probabilistic framework for predicting a sequence of genotypes based on gene expression data. By modeling the joint distribution over all variants in a genomic region, DSM enables an accurate assessment of the power of linking attacks that leverage all known eQTLs with necessary calibration for linkage disequilibrium and redundant predictive signals. We demonstrate improved linking accuracy of DSM compared to two existing approaches on a range of real datasets including up to 22K individuals, suggesting that DSM helps uncover a substantial additional risk overlooked by previous studies. Our work provides a unified framework for assessing the privacy risks of sharing diverse omics datasets beyond transcriptomics.
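
A minimal sketch of the linking-attack scoring at the heart of this setting (synthetic probabilities and genotypes; not the DSM model itself): score each candidate genotype profile by its log-likelihood under the per-variant predictions, then link to the argmax:

    import numpy as np

    rng = np.random.default_rng(4)
    n_variants, n_candidates = 100, 50
    # Per-variant predicted genotype probabilities (stand-in for a model's output).
    probs = rng.dirichlet(np.ones(3), size=n_variants)
    # Candidate genotype profiles, values in {0, 1, 2} copies of the alt allele.
    genotypes = rng.integers(0, 3, size=(n_candidates, n_variants))

    loglik = np.log(probs[np.arange(n_variants), genotypes]).sum(axis=1)
    linked = int(np.argmax(loglik))   # candidate most consistent with predictions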
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The electrical strength of insulators in high vacua</title>
<link href="https://hdl.handle.net/1721.1/155035" rel="alternate"/>
<author>
<name>Backenstoss, Henry B.</name>
</author>
<id>https://hdl.handle.net/1721.1/155035</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1935-01-01T00:00:00Z</published>
<summary type="text">The electrical strength of insulators in high vacua
Backenstoss, Henry B.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1935; Includes bibliographical references (leaf 46).
</summary>
<dc:date>1935-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Differentiation between sulfur and phosphorous in Baumann printing</title>
<link href="https://hdl.handle.net/1721.1/155033" rel="alternate"/>
<author>
<name>Skidmore, Wilbur M.
            (Wilbur Manly)</name>
</author>
<id>https://hdl.handle.net/1721.1/155033</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Differentiation between sulfur and phosphorous in Baumann printing
Skidmore, Wilbur M.
            (Wilbur Manly)
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1936; Includes bibliographical references (leaf 43).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Some of the technical and economic problems of central station energy storage</title>
<link href="https://hdl.handle.net/1721.1/155032" rel="alternate"/>
<author>
<name>Sloan, Royal D.</name>
</author>
<id>https://hdl.handle.net/1721.1/155032</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Some of the technical and economic problems of central station energy storage
Sloan, Royal D.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1936; Includes bibliographical references (leaves 70-73).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Removal of arsenic from hot sulfur dioxide gas</title>
<link href="https://hdl.handle.net/1721.1/155031" rel="alternate"/>
<author>
<name>Smith, Charles W.
            (Charles William)</name>
</author>
<id>https://hdl.handle.net/1721.1/155031</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Removal of arsenic from hot sulfur dioxide gas
Smith, Charles W.
            (Charles William)
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 63).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A method of direct manufacture of hydrochloric acid solution</title>
<link href="https://hdl.handle.net/1721.1/155029" rel="alternate"/>
<author>
<name>Smith, Laxton M.
            (Laxton Montgomery)</name>
</author>
<id>https://hdl.handle.net/1721.1/155029</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">A method of direct manufacture of hydrochloric acid solution
Smith, Laxton M.
            (Laxton Montgomery)
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1936; Includes bibliographical references (leaf 50).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proximity effect in spot welding of stainless steel</title>
<link href="https://hdl.handle.net/1721.1/155028" rel="alternate"/>
<author>
<name>Sweeney, James Augustus.</name>
</author>
<id>https://hdl.handle.net/1721.1/155028</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Proximity effect in spot welding of stainless steel
Sweeney, James Augustus.
Thesis: M.S., Massachusetts Institute of Technology, Department of Naval Architecture and Marine Engineering, 1936; Includes bibliographical references (leaves 59-60).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Factor influencing the translucency of porcelain</title>
<link href="https://hdl.handle.net/1721.1/155025" rel="alternate"/>
<author>
<name>Tarnopol, Milton Sidney.</name>
</author>
<id>https://hdl.handle.net/1721.1/155025</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Factor influencing the translucency of porcelain
Tarnopol, Milton Sidney.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mining and Metallurgy, 1936; Includes bibliographical references (leaves 87-89).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of an automatic sideslip control for aircraft,</title>
<link href="https://hdl.handle.net/1721.1/155022" rel="alternate"/>
<author>
<name>Kendall, Delvin E.</name>
</author>
<author>
<name>Whitcomb, David W.</name>
</author>
<id>https://hdl.handle.net/1721.1/155022</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Development of an automatic sideslip control for aircraft,
Kendall, Delvin E.; Whitcomb, David W.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1945; Bibliography: leaves 115-117.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Kirchhoff approximation for rough surface scattering</title>
<link href="https://hdl.handle.net/1721.1/155021" rel="alternate"/>
<author>
<name>Mou, Alex.</name>
</author>
<id>https://hdl.handle.net/1721.1/155021</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1993-01-01T00:00:00Z</published>
<summary type="text">Kirchhoff approximation for rough surface scattering
Mou, Alex.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1993; Includes bibliographical references (85-88).
</summary>
<dc:date>1993-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>International comparative analysis of training requirements for technical professionals : a case study of the nuclear power industry</title>
<link href="https://hdl.handle.net/1721.1/155018" rel="alternate"/>
<author>
<name>Mason, John Herbert.</name>
</author>
<id>https://hdl.handle.net/1721.1/155018</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1990-01-01T00:00:00Z</published>
<summary type="text">International comparative analysis of training requirements for technical professionals : a case study of the nuclear power industry
Mason, John Herbert.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1990; Includes bibliographical references (leaves 126-131).
</summary>
<dc:date>1990-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A method to evaluate the performance of a chemically enhanced paper laminate</title>
<link href="https://hdl.handle.net/1721.1/155015" rel="alternate"/>
<author>
<name>Eglowstein, Sheila Ruth.</name>
</author>
<id>https://hdl.handle.net/1721.1/155015</id>
<updated>2025-10-30T17:03:46Z</updated>
<published>1989-01-01T00:00:00Z</published>
<summary type="text">A method to evaluate the performance of a chemically enhanced paper laminate
Eglowstein, Sheila Ruth.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1989; Includes bibliographical references (leaves 66-68).
</summary>
<dc:date>1989-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The pulse amplifier in theory and experiment</title>
<link href="https://hdl.handle.net/1721.1/154980" rel="alternate"/>
<author>
<name>Tatel, Howard.</name>
</author>
<id>https://hdl.handle.net/1721.1/154980</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">The pulse amplifier in theory and experiment
Tatel, Howard.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1936; Includes bibliographical references (leaf 27).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optical studies of the nature of metallic surfaces</title>
<link href="https://hdl.handle.net/1721.1/154979" rel="alternate"/>
<author>
<name>Thorpe, John.</name>
</author>
<id>https://hdl.handle.net/1721.1/154979</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">Optical studies of the nature of metallic surfaces
Thorpe, John.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1936; Includes bibliographical references (leaf 23).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Business research as a tool for railway management</title>
<link href="https://hdl.handle.net/1721.1/154976" rel="alternate"/>
<author>
<name>Rugge, George.</name>
</author>
<id>https://hdl.handle.net/1721.1/154976</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1942-01-01T00:00:00Z</published>
<summary type="text">Business research as a tool for railway management
Rugge, George.
Thesis: M.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1942; Includes bibliographical references (leaves [82]-[85]).
</summary>
<dc:date>1942-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The scattering of charged particles by non-adiabatic magnetic fields</title>
<link href="https://hdl.handle.net/1721.1/154975" rel="alternate"/>
<author>
<name>Clarke, John F.,
            1939-</name>
</author>
<id>https://hdl.handle.net/1721.1/154975</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">The scattering of charged particles by non-adiabatic magnetic fields
Clarke, John F.,
            1939-
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1964; Includes bibliographical references (leaf 48).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fluorescent gaseous tracers for three dimensional flow visualization.</title>
<link href="https://hdl.handle.net/1721.1/154973" rel="alternate"/>
<author>
<name>Epstein, Alan Harry.</name>
</author>
<id>https://hdl.handle.net/1721.1/154973</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Fluorescent gaseous tracers for three dimensional flow visualization.
Epstein, Alan Harry.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic temperature regulation in portable life support systems.</title>
<link href="https://hdl.handle.net/1721.1/154972" rel="alternate"/>
<author>
<name>Ephrath, Arye Ravoz.</name>
</author>
<id>https://hdl.handle.net/1721.1/154972</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Automatic temperature regulation in portable life support systems.
Ephrath, Arye Ravoz.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1972; Bibliography: leaves 69-72.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aerodynamics of wind turbine with tower disturbances</title>
<link href="https://hdl.handle.net/1721.1/154964" rel="alternate"/>
<author>
<name>Chung, Song Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/154964</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Aerodynamics of wind turbine with tower disturbances
Chung, Song Y.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cell free synthesis of ferritin using the Modified Reticulocyte Lysate System.</title>
<link href="https://hdl.handle.net/1721.1/154963" rel="alternate"/>
<author>
<name>Clark, Nathaniel Goodwin.</name>
</author>
<id>https://hdl.handle.net/1721.1/154963</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Cell free synthesis of ferritin using the Modified Reticulocyte Lysate System.
Clark, Nathaniel Goodwin.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lift and drag performance of a systematic series of yacht hull models</title>
<link href="https://hdl.handle.net/1721.1/154962" rel="alternate"/>
<author>
<name>Clemmer, George L.
            (George Lewis)</name>
</author>
<id>https://hdl.handle.net/1721.1/154962</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Lift and drag performance of a systematic series of yacht hull models
Clemmer, George L.
            (George Lewis)
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1978; Bibliography: leaf 103.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heave, sway and roll of ship-like cylinders in waters of finite depth.</title>
<link href="https://hdl.handle.net/1721.1/154961" rel="alternate"/>
<author>
<name>Chung, Hin Chew.</name>
</author>
<id>https://hdl.handle.net/1721.1/154961</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Heave, sway and roll of ship-like cylinders in waters of finite depth.
Chung, Hin Chew.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The friction effect in the flaw distribution determination by the hardness indentation test.</title>
<link href="https://hdl.handle.net/1721.1/154960" rel="alternate"/>
<author>
<name>Chiu, Paul Tsan-Tin.</name>
</author>
<id>https://hdl.handle.net/1721.1/154960</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">The friction effect in the flaw distribution determination by the hardness indentation test.
Chiu, Paul Tsan-Tin.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Bibliography: leaves 25-27.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The processing and properties of chitosan membranes.</title>
<link href="https://hdl.handle.net/1721.1/154959" rel="alternate"/>
<author>
<name>Clark, Randall Bradley.</name>
</author>
<id>https://hdl.handle.net/1721.1/154959</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">The processing and properties of chitosan membranes.
Clark, Randall Bradley.
Thesis: M.S., Massachusetts Institute of Technology, Department of Materials Science and Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quality differences in male and female vocoded speech.</title>
<link href="https://hdl.handle.net/1721.1/154958" rel="alternate"/>
<author>
<name>Christopher, Deborah Kaye.</name>
</author>
<id>https://hdl.handle.net/1721.1/154958</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Quality differences in male and female vocoded speech.
Christopher, Deborah Kaye.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computer optimization of dry and wet/dry cooling tower systems for large fossil and nuclear power plants.</title>
<link href="https://hdl.handle.net/1721.1/154957" rel="alternate"/>
<author>
<name>Choi, Michael Kam-wah.</name>
</author>
<id>https://hdl.handle.net/1721.1/154957</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Computer optimization of dry and wet/dry cooling tower systems for large fossil and nuclear power plants.
Choi, Michael Kam-wah.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The growth of a small firm, its implications for management style, and the influence on corporate character by the senior executive</title>
<link href="https://hdl.handle.net/1721.1/154956" rel="alternate"/>
<author>
<name>Clope, Sara Jane.</name>
</author>
<author>
<name>Osborn, Edward Kingsbury.</name>
</author>
<author>
<name>Pototsky, John Edward.</name>
</author>
<id>https://hdl.handle.net/1721.1/154956</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">The growth of a small firm, its implications for management style, and the influence on corporate character by the senior executive
Clope, Sara Jane.; Osborn, Edward Kingsbury.; Pototsky, John Edward.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1978; Bibliography: leaves 167-170.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-Performance High-Power Inductor Design for High-Frequency Applications</title>
<link href="https://hdl.handle.net/1721.1/154375" rel="alternate"/>
<author>
<name>Joisher, Mansi Vipul</name>
</author>
<id>https://hdl.handle.net/1721.1/154375</id>
<updated>2024-05-02T03:45:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">High-Performance High-Power Inductor Design for High-Frequency Applications
Joisher, Mansi Vipul
The performance and size of power electronic circuits are greatly impacted by magnetic components. This is especially true at Radio Frequencies (RF) of many MHz and above. In the High Frequency (HF, 3-30 MHz) range, coreless (or "air-core") inductors with a typical quality factor (Q) of 200-300 are conventionally used and are often the major contributor to the overall system’s loss and size. Even when they can achieve high Q, air-core inductors can induce electromagnetic interference (EMI) and eddy current loss in surrounding components, thus limiting system miniaturization. With the recent advancements in high-performance, high-frequency magnetic materials, there is interest in leveraging these magnetic materials at RF and replacing lossy air-core inductors with cored inductors to achieve an improved combination of size and loss. This thesis investigates high-power, high-frequency, high-Q cored inductors. The approach leverages high-frequency, high-performance magnetic materials, core geometry, and quasi-distributed gaps to achieve a self-shielded inductor that emits less flux outside its physical volume and can be placed close to other circuit components without inducing EMI or eddy current loss. The performance and self-shielding characteristics of the proposed design procedure are experimentally verified for a 500 nH inductor (Q = 1150) designed to operate at 13.56 MHz with a peak ac current of up to 80 A.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Not Allowed: Practicing Process</title>
<link href="https://hdl.handle.net/1721.1/154365" rel="alternate"/>
<author>
<name>Ugorji, Amanda</name>
</author>
<id>https://hdl.handle.net/1721.1/154365</id>
<updated>2024-05-02T03:27:24Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Not Allowed: Practicing Process
Ugorji, Amanda
Not Allowed: Practicing Process is a response to my dissatisfaction with the status quo of architectural pedagogy as I have experienced it. By shifting attention away from the architectural product and onto the process, I redefine the thesis project's success through encounters of learning, struggle, and uncomfortable ambiguity.&#13;
&#13;
The project explores ideas of co-authorship, building practice, and embedding meaning in architectural pedagogy and work. It has challenged concepts such as the urgency of production, the erasure of identity in pedagogy and practice, and the systemic harm architecture perpetuates on both the personal and on the global scale. To carry out the thesis's goals, I armed myself with tools like self-reflection, expectation of change, intentional conversation, and curiosity. The work allowed for topic change, dramatic restructuring, and lapses in rigor. It found value in opening multiple paths and diverging from linearity, although it accepts that the effort expended has been cumulative.&#13;
&#13;
Instead of a thesis review, the project culminated in a thesis reflection where I asked attendees to partake in a small group discussion and share their thoughts on provided prompts. The results of the process look like an intentionally organized collection of thoughts and conducted discussions that raise more questions than they answer.&#13;
&#13;
I have identified guiding questions on this thesis journey, such as: What ways of thinking are privileged in architecture? What modes of production are validated? What do I limit myself to when I am bound by architecture's definition of rigor? How much energy should I spend gaining validation? What are the criteria for failure? What if the ways I derive value in my work devalue my project in the normative discipline? Does that matter? If we make better work when we are full and present, what do we need to be full and present? If the social contracts we hold outside of architecture education spaces are constantly violated, what new social contracts must we build? How can we preserve them? If the pedagogy has not been serving me as I need it to, how have I been working to develop infrastructure for myself? How can I continue to do so moving forward?
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reticle Stage Actuation Concepts for High Acceleration Trajectories in Next-generation Photolithography Tools</title>
<link href="https://hdl.handle.net/1721.1/154363" rel="alternate"/>
<author>
<name>Seaberg, Charles Byron</name>
</author>
<id>https://hdl.handle.net/1721.1/154363</id>
<updated>2024-05-02T04:01:17Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Reticle Stage Actuation Concepts for High Acceleration&#13;
Trajectories in Next-generation Photolithography Tools
Seaberg, Charles Byron
In photolithography scanning tools, the functional patterns of integrated circuit layers are defined with critical dependence on the actuation of reticle and wafer stages along precisely synchronized trajectories. Patterning throughput of such tools is limited by the velocity and acceleration at which the stages are actuated. Modern tools require sub-nanometer accuracy of stages along these trajectories during constant-velocity scan exposure to create feature sizes on the order of nanometers. At the ends of the constant-velocity scans, high-acceleration trajectories are used to reverse the scan velocity in minimal time. The next generation of photolithography tools will require more aggressive trajectories, along with the development of energy-efficient actuation solutions with higher force and precision capabilities to implement these demanding motion profiles.&#13;
&#13;
In this thesis, we propose actuation concepts that may enable 100g reticle stage turnaround accelerations, and explore two such concepts in depth. The first concept is an array of piezoelectric stack actuators attached to the long-stroke stage, which mechanically contact the short-stroke stage only during turnaround. In this context, we perform a scaled two-degree-of-freedom experiment in which we attempt to control the contact of an 840 g payload moving at a velocity of 80 mm/s using a 50 &#120583;m-stroke piezo stack actuator driven open-loop. We are able to use the piezo current signal to detect mechanical contact with an estimated delay of 6-16 &#120583;s. We are unable to control the dynamics of the contact, during which the measured peak contact force of 150 N exceeds the planned amount by 80% and results in the payload bouncing off the actuator.&#13;
&#13;
The second actuation concept we consider in theory is the use of dual-chamber pneumatic springs as energy storage devices to create turnaround forces for the long-stroke stage acceleration. We examine the use of such pneumatic springs in parallel with a conventional long-stroke linear motor to create a stage topology in which reactive power is stored and returned as kinetic energy. We study thermal aspects of the spring behavior first under an adiabatic assumption and then using a one-dimensional thermal model for heat flow through the piston chamber walls. The proposed design promises to reduce the motor power dissipation by 90% and the motor amplifier electrical power by 70%, motivating further study. Such energy savings can contribute to significant reduction in the energy consumption of lithography tools.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Destroy Your School: Building with Kids to Reimagine Learning</title>
<link href="https://hdl.handle.net/1721.1/154359" rel="alternate"/>
<author>
<name>Rotman, Katherine</name>
</author>
<id>https://hdl.handle.net/1721.1/154359</id>
<updated>2024-05-02T03:40:37Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Destroy Your School: Building with Kids to Reimagine Learning
Rotman, Katherine
Too often, our education is disconnected from the physical space in which we learn. Lesson plans and curricula disregard the spatial and physical spaces that define the educational experience. The disciplinary gap between architectural and educational discourse is in need of attention, and bridging this gap is at the heart of my thesis. I seek to discern methods to better equip our youth for the future. Questions of how and where we learn and share knowledge are crucial to the formation of values in the next generation. Our current moment necessitates extensive collective change and a thorough reconsideration of the values embedded in our systems of education. How does our built environment inform our learning experience? How does pedagogy shape our world, and how in turn is our world shaped by pedagogy? How can notions of care and stewardship be generated by pedagogy? How can a shift in pedagogy shape classrooms, schools, and neighborhoods? This thesis approaches these questions through the under-considered and often-forgotten problem of middle-school-age education. The project examines and puts forward a new pedagogy that aims to instill architectural values of collaboration, community, mentorship, interdisciplinarity, improvisation, and material opportunism through education in order to shape the fabric of our society. The three years of middle school play an enormous role in shaping the next generation. At this critical point, students transition out of learning through play, inquiry, and experimentation to learning as adults in a results-based, structured, and standardized fashion. Introducing a design-build pedagogy into the middle school curriculum becomes not only an opportunity to build a greater sense of autonomy for young learners by elevating students’ existing skills embedded in play and experimentation, but a chance to disrupt the general assumptions we grow up with about our built environment. The design pedagogy I propose gives young adolescents a new set of tools to participate and take action in shaping their education, classroom, and community. At its core, this project aims to enable young learners to find agency and empowerment through their built environment. With the reimagined classroom as site, this thesis advocates for a porous community-wide system of learning and engagement.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determination of volatile nitrosamines in foods and other environmental samples</title>
<link href="https://hdl.handle.net/1721.1/154356" rel="alternate"/>
<author>
<name>Essigmann, John.</name>
</author>
<id>https://hdl.handle.net/1721.1/154356</id>
<updated>2025-10-31T20:12:37Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Determination of volatile nitrosamines in foods and other environmental samples
Essigmann, John.
Methods were investigated for determination of volatile nitrosamines which have been reported to occur in foods and other environmental samples. Nitrosamines were removed from foods using a Likens-Nickerson extractor; 81% of added dimethylnitrosamine and 110% of added diethylnitrosamine were recovered from 10 ng/g spiked aqueous solutions. The factors affecting recovery of these nitrosamines were investigated. The usefulness of Freon-11 as an extracting solvent for nitrosamines was investigated using both batch serial and continuous liquid-liquid extraction. Potential gas chromatographic (GC) interferences were removed from food extracts by an acid extraction step and, when needed, by liquid column chromatography on alumina and silica gel. Dilute solutions containing nitrosamines were analyzed directly using a GC solvent stripping technique. Nitrosamines were detected with the Coulson electrolytic conductivity detector operated in the pyrolytic mode and with a flame ionization detector. The sensitivities of these detectors were compared for selected alkyl and heterocyclic nitrosamines. The specificity of the Coulson detector was demonstrated for analysis of extracts of meat and fish samples. Additional clean-up of food extracts is required to insure identification of nitrosamines by combined GC-mass spectrometry. A method employing chromatographic equilibration (frontal analysis) was investigated for determination of dimethylnitrosamine in the air.
Thesis: S.M., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1972; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 171-180).
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The efficiency of the vertical tail for different wing-fuselage combinations, particularly at high angles of attack</title>
<link href="https://hdl.handle.net/1721.1/154353" rel="alternate"/>
<author>
<name>Shumowsky, Stanislaw A.
            (Stanislaw Anton)</name>
</author>
<id>https://hdl.handle.net/1721.1/154353</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1936-01-01T00:00:00Z</published>
<summary type="text">The efficiency of the vertical tail for different wing-fuselage combinations, particularly at high angles of attack
Shumowsky, Stanislaw A.
            (Stanislaw Anton)
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautical Engineering, 1936; Includes bibliographical references (leaves 68-70).
</summary>
<dc:date>1936-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lean manufacturing--from automotive to aerospace</title>
<link href="https://hdl.handle.net/1721.1/154350" rel="alternate"/>
<author>
<name>Darris, Frederick E.
            (Frederick Eugene)</name>
</author>
<id>https://hdl.handle.net/1721.1/154350</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1997-01-01T00:00:00Z</published>
<summary type="text">Lean manufacturing--from automotive to aerospace
Darris, Frederick E.
            (Frederick Eugene)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1997; Includes bibliographical references (leaf 75).
</summary>
<dc:date>1997-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimation of the statical stability curve of a ship from hull coefficients,</title>
<link href="https://hdl.handle.net/1721.1/154349" rel="alternate"/>
<author>
<name>Ramsey, Lyle B.</name>
</author>
<author>
<name>Latimer, John P.</name>
</author>
<id>https://hdl.handle.net/1721.1/154349</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Estimation of the statical stability curve of a ship from hull coefficients,
Ramsey, Lyle B.; Latimer, John P.
Thesis: M.S., Massachusetts Institute of Technology, Department of Naval Architecture and Marine Engineering, 1945; Bibliography: leaf 83.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laminar boundary layer in a partially ionized diatomic gas behind a moving shock</title>
<link href="https://hdl.handle.net/1721.1/154348" rel="alternate"/>
<author>
<name>Moh, Tzu-Chung.</name>
</author>
<id>https://hdl.handle.net/1721.1/154348</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Laminar boundary layer in a partially ionized diatomic gas behind a moving shock
Moh, Tzu-Chung.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1964; Includes bibliographical references (leaf [32]).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A direct measurement of intraocular stray light.</title>
<link href="https://hdl.handle.net/1721.1/154343" rel="alternate"/>
<author>
<name>Larson, Ernest Theodore.</name>
</author>
<id>https://hdl.handle.net/1721.1/154343</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">A direct measurement of intraocular stray light.
Larson, Ernest Theodore.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1945; Bibliography: leaf 28.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Motional transients in power selsyns.</title>
<link href="https://hdl.handle.net/1721.1/154342" rel="alternate"/>
<author>
<name>Kaci, M. M. E.</name>
</author>
<id>https://hdl.handle.net/1721.1/154342</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Motional transients in power selsyns.
Kaci, M. M. E.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1945; Bibliography: leaf 76.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Purification of formylglycinamide ribotide amidotransferase from chicken liver.</title>
<link href="https://hdl.handle.net/1721.1/154340" rel="alternate"/>
<author>
<name>Mizobuchi, Kiyoshi.</name>
</author>
<id>https://hdl.handle.net/1721.1/154340</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Purification of formylglycinamide ribotide amidotransferase from chicken liver.
Mizobuchi, Kiyoshi.
Thesis: M.S., Massachusetts Institute of Technology, Department of Biology, 1964.
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A picture is worth a thousand words: the myth behind the international art mania.</title>
<link href="https://hdl.handle.net/1721.1/154336" rel="alternate"/>
<author>
<name>Barrett, Maudann Borthwick.</name>
</author>
<id>https://hdl.handle.net/1721.1/154336</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1974-01-01T00:00:00Z</published>
<summary type="text">A picture is worth a thousand words: the myth behind the international art mania.
Barrett, Maudann Borthwick.
Thesis: M.S., Massachusetts Institute of Technology, Department of Economics, 1974; Includes 12 unnumbered leaves.; Bibliography: leaves 115-118.
</summary>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Thermoelectric power of indium antimonide.</title>
<link href="https://hdl.handle.net/1721.1/154335" rel="alternate"/>
<author>
<name>Eser, Erten Sadullah.</name>
</author>
<id>https://hdl.handle.net/1721.1/154335</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Thermoelectric power of indium antimonide.
Eser, Erten Sadullah.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An industrialized housing system in wood.</title>
<link href="https://hdl.handle.net/1721.1/154334" rel="alternate"/>
<author>
<name>Fan, Samuel Sze Leung.</name>
</author>
<id>https://hdl.handle.net/1721.1/154334</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">An industrialized housing system in wood.
Fan, Samuel Sze Leung.
Thesis: M. Arch. A.S., Massachusetts Institute of Technology, Department of Architecture, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Telephonic transmission of artificial pacemaker parameters.</title>
<link href="https://hdl.handle.net/1721.1/154331" rel="alternate"/>
<author>
<name>Ferla Delor, Guillermo Sergio.</name>
</author>
<id>https://hdl.handle.net/1721.1/154331</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Telephonic transmission of artificial pacemaker parameters.
Ferla Delor, Guillermo Sergio.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>R &amp; D task accomplishment at a U.S. Army Material Command Corporate Laboratory.</title>
<link href="https://hdl.handle.net/1721.1/154328" rel="alternate"/>
<author>
<name>Falabella, Gaetano.</name>
</author>
<id>https://hdl.handle.net/1721.1/154328</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">R &amp; D task accomplishment at a U.S. Army Material Command Corporate Laboratory.
Falabella, Gaetano.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Consumer payment practices and preferences in the Boston metropolitan area.</title>
<link href="https://hdl.handle.net/1721.1/154327" rel="alternate"/>
<author>
<name>Fazio, Vincent John.</name>
</author>
<id>https://hdl.handle.net/1721.1/154327</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Consumer payment practices and preferences in the Boston metropolitan area.
Fazio, Vincent John.
Thesis: M.S., Massachusetts Institute of Technology, Alfred P. Sloan School of Management, 1972; Bibliography: leaves 125-126.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reentry of the expatriate into the multinational firm</title>
<link href="https://hdl.handle.net/1721.1/154326" rel="alternate"/>
<author>
<name>Sharp, Robert C.</name>
</author>
<id>https://hdl.handle.net/1721.1/154326</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">Reentry of the expatriate into the multinational firm
Sharp, Robert C.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1981; Bibliography: leaves 160-168.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Metal complexes as models for vitamin B₆ catalysis</title>
<link href="https://hdl.handle.net/1721.1/154250" rel="alternate"/>
<author>
<name>Weinstein, Georgia Nan.</name>
</author>
<id>https://hdl.handle.net/1721.1/154250</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Metal complexes as models for vitamin B₆ catalysis
Weinstein, Georgia Nan.
Chapter I. Historical introduction to Vitamin B₆ Complexes. Chapter II. The aldimine complexes N-(Salicylidene)glycinato- and valinatozinc(II), N-(Pyridoxylidene)valinatocopper(II) monohydrate, and N-(3-Hydroxypyridyl-2-methylene)valinatocopper(II) hemihydrate have been prepared from L̳-valine. Synthetic methods and characterization data are given. Also prepared were the bis-chelate amino acid ester complexes Bis[N-(2-ethoxycarbonyl-1-propyl)salicylaldiminato]copper(II) and Bis[N-(3-ethoxycarbonyl-2-propyl)salicylaldiminato]copper(II). The inertness of these two complexes to H-D exchange contrasts with the ready exchange, in the absence of base, of the complexes derived from α-amino acids. This result shows that the facile exchange and racemization properties of Bis[N-(alkoxycarbonylalkyl)salicylaldiminato]metal(II) complexes derive principally from the direct attachment of the electron-withdrawing HC=NM and COOC₂H₅ groups to the asymmetric center. The base-catalyzed racemization rates of four copper(II)-aldimine complexes in 95% ethanol at 50° were found to increase in the order N-Salicylidene-L̳-valinatocopper(II), Cu(sal-L̳-val) &lt;&lt; N-Pyridoxylidene-L̳-valinatocopper(II) &lt; N-3-Hydroxypyridyl-2-methylene-L̳-valinatocopper(II) &lt; N-4-NO₂-Salicylidene-L̳-valinatocopper(II). This order is essentially the same as that of the qualitative catalytic effectiveness of the constituent o̲-hydroxyarylcarbonyl compounds in nonenzymatic transamination and reinforces in semiquantitative fashion the prevailing model of ligand electronic features requisite to catalytic activity of these compounds.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1972; Cataloged from the official PDF version of thesis. Vita.; Includes bibliographical references (pages 58-62).
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Longitudinal dispersion in non-uniform porous media</title>
<link href="https://hdl.handle.net/1721.1/154247" rel="alternate"/>
<author>
<name>Mohtadullah, Khalid.</name>
</author>
<id>https://hdl.handle.net/1721.1/154247</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Longitudinal dispersion in non-uniform porous media
Mohtadullah, Khalid.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1964; Includes bibliographical references (leaves 78-81).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of fish type motion and propulsion systems</title>
<link href="https://hdl.handle.net/1721.1/154246" rel="alternate"/>
<author>
<name>Mindell, Arnold Perry.</name>
</author>
<id>https://hdl.handle.net/1721.1/154246</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">A study of fish type motion and propulsion systems
Mindell, Arnold Perry.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1964; Includes bibliographical references (leaves 44-45).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reactions of phenyl(trihalomethyl)mercurials with olefins</title>
<link href="https://hdl.handle.net/1721.1/154245" rel="alternate"/>
<author>
<name>Minasz, Richard Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/154245</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Reactions of phenyl(trihalomethyl)mercurials with olefins
Minasz, Richard Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemistry, 1964; Vita.; Includes bibliographical references (leaves 42-43).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduction of intersymbol interference in data transmission by automatic time-domain equalization</title>
<link href="https://hdl.handle.net/1721.1/154244" rel="alternate"/>
<author>
<name>Mohn, William S.</name>
</author>
<id>https://hdl.handle.net/1721.1/154244</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">Reduction of intersymbol interference in data transmission by automatic time-domain equalization
Mohn, William S.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1964; Includes bibliographical references (leaves 40-41).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sniffer : a system that understands bugs</title>
<link href="https://hdl.handle.net/1721.1/154232" rel="alternate"/>
<author>
<name>Shapiro, Daniel G.</name>
</author>
<id>https://hdl.handle.net/1721.1/154232</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">Sniffer : a system that understands bugs
Shapiro, Daniel G.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1981; Bibliography: leaves 59-60.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a car utilization audit.</title>
<link href="https://hdl.handle.net/1721.1/154229" rel="alternate"/>
<author>
<name>Nowicki, Victor.</name>
</author>
<id>https://hdl.handle.net/1721.1/154229</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Design of a car utilization audit.
Nowicki, Victor.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1978; Includes bibliographical references.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis of planning effectiveness : a case study.</title>
<link href="https://hdl.handle.net/1721.1/154228" rel="alternate"/>
<author>
<name>Siever, Ellen Carol.</name>
</author>
<id>https://hdl.handle.net/1721.1/154228</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">An analysis of planning effectiveness : a case study.
Siever, Ellen Carol.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1977; Bibliography: leaves 54-55.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Framework for Multi-Messenger Astronomy</title>
<link href="https://hdl.handle.net/1721.1/154207" rel="alternate"/>
<author>
<name>Koenig, Alexander P.</name>
</author>
<id>https://hdl.handle.net/1721.1/154207</id>
<updated>2024-04-18T03:06:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Systems Framework for Multi-Messenger&#13;
Astronomy
Koenig, Alexander P.
Multi-messenger (and more broadly, panchromatic) astronomy regards the use of multimodal information — incident photons, gravitational waves, neutrinos, and cosmic rays — to form astrophysical inferences. Since each messenger interacts uniquely with the dynamics of the phenomena in question, drawing information from multiple messengers provides a more complete probe of the universe. However, the exact inference method is scenario-specific, and we lack a general means to design multi-messenger instrument networks to best formulate scientific knowledge. To this end, this thesis presents a framework using probabilistic graph models to simulate the performance of heterogeneous instrument networks, with applications to two case studies.&#13;
&#13;
The first case study regards the measurement of the Hubble parameter, i.e. the rate of expansion of the universe, with joint gravitational-wave and electromagnetic detection of neutron star mergers — cosmological standard sirens. This case study predicts [formula] joint detections by the end of the 2020s, likely sufficient to measure the Hubble parameter with 4% uncertainty. Furthermore, &#119978;(10⁵) instrument networks are simulated. The most promising configurations rely on a highly-sensitive set of ground-based interferometers with wide geographic distribution along with a set of narrow-field, large-aperture ground- or space-based telescopes.&#13;
&#13;
The second case study regards using star tracker imagery from LEO satellite constellations to improve our knowledge of resident space objects (active satellites and debris). Traditionally, orbit determination relies on bespoke ground-based radar systems, which are increasingly insufficient to meet the needs of LEO satellite operators. For two simulated objects, this case study shows star trackers could supplement but not replace radars to improve knowledge: including imagery from 10³ satellites could reduce positional uncertainty by a factor of ∼3 compared to a radar-only network.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detecting Deepfakes with Human Help to Help Humans Detect Deepfakes</title>
<link href="https://hdl.handle.net/1721.1/154206" rel="alternate"/>
<author>
<name>Fosco, Camilo L.</name>
</author>
<id>https://hdl.handle.net/1721.1/154206</id>
<updated>2024-04-18T03:39:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Detecting Deepfakes with Human Help to Help Humans&#13;
Detect Deepfakes
Fosco, Camilo L.
Fake or manipulated video media (“deepfakes”) pose a clear threat to the integrity of online spaces that rely on video, from social media to news media to video conferencing platforms. To the human eye, these computer-generated fake videos are increasingly indistinguishable from genuine videos [45, 20]. Computer vision models, however, can achieve impressive success at deepfake detection. Thus, the future of deepfake detection for humans may become a problem of AI-assisted decision-making, where humans must incorporate the output of a machine learning model into their judgment process. Previous work on AI-assisted decision making indicates that the design and format of a decision aid strongly determines whether it will impact human behavior [66, 60, 14, 26, 4]. In the domain of deepfake signaling, traditional methods of flagging manipulated video have relied on text-based prompts. However, recent studies indicate relatively low rates of compliance when the model’s prediction is conveyed using text: in one study, participants shown model predictions via text updated their response only 24% of the time, and switched their response (from “real” to “fake”, or vice versa) only 12% of the time [20]. More innovative approaches have been proposed, such as showing users a heatmap of regions predicted to be manipulated [8], but this did not increase acceptance rates relative to text-based indicators. Overall, to make an impact, the development of deepfake detection models must proceed alongside the exploration of innovative and effective ways to alert human users to a video’s authenticity. &#13;
&#13;
In this thesis, we present an analysis of current solutions to this issue, and examine methodologies both for improving automated deepfake detection and for generating better indicators of doctored media to help humans spot deepfakes. To work towards this goal, we first collect human annotations that highlight parts of videos that humans find unnatural or indicative of doctoring. We use this data as additional supervision to train an artifact attention module that generates “heat volumes” highlighting areas of a deepfake video that evidence its fake nature. This module is in turn leveraged both to improve classifier performance and to generate our novel visual indicators (described below). This construction is integral to our exploration of how human annotations can augment attention-based deepfake detection techniques, and we investigate for the first time the feasibility of exacerbating artifacts in deepfake videos to facilitate early detection from a human perspective. &#13;
&#13;
As the quality of doctored videos becomes more impressive, too many generated fakes are indistinguishable from a genuine video to the human eye. We believe that it is crucial for humans to be able to detect, at first glance, if a video is doctored or not. This limits the spread of misinformation by stopping it at the source. We achieve this by proposing a new visual indicator of doctoring that we call deepfake caricatures: a targeted distortion that reveals the fake nature of deepfakes, while rendering real videos virtually untouched (see Figure 1-1). This targeted distortion takes the form of an amplification of unnatural areas in a fake video, dubbed artifacts in this manuscript. &#13;
&#13;
This thesis introduces a novel framework that provides strong classical deepfake detection, but crucially also creates this compelling visual indicator for fake videos by amplifying artifacts, making them more detectable to human observers. Because humans tend to be highly sensitive to distortions in faces, we hypothesize that focusing our visual indicator on amplifying artifacts is likely to yield a highly detectable and compelling visual indicator. We introduce a new model, “CariNet”, that identifies key artifacts in deepfakes using our novel Artifact Attention Module. This module leverages both human supervision and machine supervision to learn what distortions are most relevant to humans. CariNet then generates deepfake caricatures using a Caricature Generation Module that magnifies unnatural areas in fake videos, making them more visible to human users. We make three primary contributions: &#13;
• We develop two annotation tools to (A) filter deepfakes according to their ease of detection, and (B) collect human annotations of fake and unnatural areas (artifacts) in doctored videos. This process yields a dataset of over 11K annotations across 1000 videos. &#13;
• We develop a framework for identifying video artifacts that are relevant to humans. Allowing our deepfake detector to leverage this information boosts its accuracy by more than 5%, showing that human supervision can improve deepfake detection models.&#13;
• We generate deepfake caricatures, and show in a user study that they increase human deepfake detection accuracy by up to 40% compared to non-signalled deepfakes.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modelling of Graphite Elements and Low Enriched Fuel Assemblies for a High Temperature Gas-Cooled Reactor</title>
<link href="https://hdl.handle.net/1721.1/154205" rel="alternate"/>
<author>
<name>Cohen, Lorne</name>
</author>
<id>https://hdl.handle.net/1721.1/154205</id>
<updated>2024-04-18T03:18:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Modelling of Graphite Elements and Low Enriched Fuel Assemblies for a High Temperature Gas-Cooled Reactor
Cohen, Lorne
As the consequences of climate change continue to have worldwide impacts, innovations in nuclear energy are a necessity for decarbonizing electricity and process heat generation. To address the large capital costs and risk of large Pressurized Water Reactors (PWRs), Stewart et al. designed a Horizontal Compact High Temperature Gas-cooled Reactor (HC-HTGR), which Boston Atomics is seeking to commercialize. The HC-HTGR leverages the safety advantages of gas-cooled, TRISO-fuelled reactors, and the economic advantages of a horizontal compact layout. Graphite assembly blocks create the channels that guide the helium coolant flow and contain the fuel compacts in the HC-HTGR. Given the low tensile strength of graphite, finite element analysis (FEA) is required for predicting stresses within these components. The stresses are evaluated using the ASME code for graphite components in nuclear reactors. A 2D generalized plane strain model is used to predict the ASME equivalent stresses throughout assemblies at the inlet, midplane, and outlet over 15 years of steady-state operation. The effects of creep, swelling, and thermal expansion are incorporated into the model. The results predict the maximum equivalent stress will not exceed the limit of 12 MPa from the ASME code. Large thermal stresses are induced due to the high midplane and outlet temperatures but are quickly reduced by irradiation effects. As expected, creep plays a significant role in reducing the stresses that are driven by irradiation shrinkage of the graphite block.&#13;
&#13;
The use of TRISO fuel in an HC-HTGR provides safety benefits but adds significant fuel costs due to the manufacturing and fuel enrichment price. To improve the economics of the reactor, multiple designs for low-enriched fuel assemblies are evaluated on a thermal-hydraulic, neutronic, and economic basis. The designs use a combination of UC and UO₂ fuel, with SiC composite and stainless-steel cladding. While each design meets the target reactivity, enrichment, and temperature limits, the most viable design for near-term deployment uses UO₂ fuel with 0.5 mm of stainless-steel cladding. This design has an enrichment of 4.249% and a maximum fuel temperature of 1414°C, under the assumed conservative steady-state conditions. A preliminary analysis indicates a 38-60% reduction in fuel costs compared to the TRISO-fuelled assembly for the same energy output. The wide use of UO₂ and stainless steel in the nuclear industry supports the near-term deployment of this assembly design, as both materials are licensed for use in nuclear reactors, unlike SiC composite cladding and UC fuel. This precedent also reduces uncertainties in the fuel cost since there are well-established supply chains for both UO₂ and nuclear-grade stainless steel.&#13;
&#13;
Additionally, in order to improve the performance of stainless-steel cladding, oxide dispersion strengthened (ODS) steel cladding samples fabricated with high velocity oxy-fuel deposition were investigated. The XRD and XRF analyses led to the conclusion that rapid cooling after deposition results in an amorphous microstructure with a crystalline chromium phase. The bulk material is brittle, as confirmed by ring compression tests, motivating improvement in the fabrication process by the manufacturer.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Impact of Communication Delay on Mission Control as an Effective Team Member with the Crew</title>
<link href="https://hdl.handle.net/1721.1/154200" rel="alternate"/>
<author>
<name>Grace, Sideena</name>
</author>
<id>https://hdl.handle.net/1721.1/154200</id>
<updated>2024-04-18T03:20:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Investigating the Impact of Communication Delay on Mission Control as an Effective Team Member with the Crew
Grace, Sideena
The National Aeronautics and Space Administration (NASA) aims to send humans to Mars in the coming decade. However, the significant communication delay of up to 22 minutes one way poses challenges for Mission Control (MC) in fulfilling its role as an effective team member with the crew, potentially jeopardizing mission safety and success. Existing research on communication delay has primarily focused on the crew, neglecting the impact on MC. This study addresses this gap by investigating the impact of communication delay on MC’s role as a team member and proposes a protocol to improve communication between MC and the crew. To analyze the impact of communication delay, data from high-fidelity analog studies and the International Space Station (ISS) were examined. These studies covered scenarios with delays ranging from seconds to 20 minutes, communication blackouts, and mission durations up to 520 days. Tasks of varying complexity were evaluated to assess MC’s ability to support the crew. Additionally, existing protocols were evaluated using subjective ratings and compliance analysis. The analysis indicated that communication delay significantly impairs MC’s effectiveness as a team member, evidenced by common challenges identified in the studies. These challenges include difficulty for MC in understanding the crew’s needs and maintaining situational awareness due to communication breakdowns. As a result, MC faced challenges in providing consistent and accurate support to the crew. The delayed recovery from these challenges led to reduced reliance on MC by the crew, as their role was not always seen as the most efficient option for seeking support. In response, a new protocol focusing on tone was developed to establish effective and respectful communication between MC and the crew, to mitigate the effects of these identified challenges. Furthermore, two key recommendations emerge from the analysis: ensuring time delay consistency and standardizing communication delay implementation. These recommendations aim to optimize the effectiveness of protocols and provide a better understanding of their impact in addressing communication delay. Understanding the impact of communication delay on both MC and the crew is vital for developing protocols that enhance effective communication and teamwork during the mission. These findings contribute to optimizing protocols for future studies and preparing for the Mars mission.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of a Student Operated Production Facility using Discrete Event Simulation and Continuous Improvement</title>
<link href="https://hdl.handle.net/1721.1/154199" rel="alternate"/>
<author>
<name>Greene, Ethan Logan</name>
</author>
<id>https://hdl.handle.net/1721.1/154199</id>
<updated>2024-04-18T04:03:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Development of a Student Operated Production Facility using Discrete Event Simulation and Continuous Improvement
Greene, Ethan Logan
The Device Realization Laboratory at MIT is committed to developing accessible, affordable devices for hands-on learning experiences in smart manufacturing. The laboratory’s main vehicle is the desktop fiber extrusion device, FrED, produced in the student-operated and -managed FrED Factory. This paper chronicles an intensive eight-month project directed toward enhancing the efficiency and throughput of the FrED Factory.&#13;
&#13;
The project began with a systematic analysis of the intricacies of the fiber extrusion device, the factory, and the associated manufacturing processes. A key component of the project was the development of a digital twin, leveraging discrete event simulation to amplify the modeling and analytic capabilities of the student operators.&#13;
&#13;
A comprehensive characterization of the initial state of operations was conducted, revealing the existence of hidden factories and various types of waste. Strategic, iterative solutions were then formulated and implemented, driving significant improvements over time. The project incorporated 5S methodologies, laying the groundwork for a continuous improvement program, and executed a Kaizen event focusing on the underutilized 3D printing farm that was plagued with printing failures.&#13;
&#13;
Key results from the Kaizen event included reducing print cycle times, improving printer utilization, reducing print failure rates, and boosting the 3D printer farm throughput. The project achieved a substantial reduction in calibration frequency and part defects through a dual approach: minimizing vibration and storage rack swaying issues, and decreasing bed-leveling variation with the print beds, thereby further enhancing utilization. However, the most significant outcome was realized through the alleviation of manufacturing constraints on printer configurations, which led to a 4.2x improvement in the theoretical throughput of the 3D printers.&#13;
&#13;
The project’s journey and results offer invaluable insights and a replicable model for future implementation of student-run production facilities in other university laboratories, highlighting the importance of continuous improvement and the power of advanced technology in accelerating development and operational efficiency.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>L'nuisimk (Speaking Mi'kmaq)</title>
<link href="https://hdl.handle.net/1721.1/154197" rel="alternate"/>
<author>
<name>Dennis, John J.</name>
</author>
<id>https://hdl.handle.net/1721.1/154197</id>
<updated>2024-04-18T03:19:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">L'nuisimk (Speaking Mi'kmaq)
Dennis, John J.
The Mi’kmaq have long been people that were hunter/gatherers, craft workers and artisans before our time. The beauty of Mi’kmaq language is its pure form of fluidity and its pure connection with the culture that has returned into the hands of its true owners, the Mi’kmaq. To return the language to the people is to undo all the harm inflicted by the Government that planned to annihilate a civilization or culture of people that were considered “savages” by taking away their mother tongue or the people’s language taught to them by their parents, grandparents, family, and elders within the community. The hardships that lay ahead of the Mi’kmaq who speak English is one that is embarrassing to some, an honor to others and a burden to many. There are many reasons as to why the Mi’kmaq speakers speak their mother tongue (teaching at schools, at homes and within the community), but for those that speak English, it is an utmost shame that it was not of their own doing. We will look at how to teach the next generation through baby talk, then transition to speaking at home with both parents and children. The next transition after will be moving to speaking with other community members within the area with basic conversational phrases. The true answer to solve this problem revolves around the fellow speakers, linguists and teachers that care about preserving this respectable language. The Mi’kmaq language must be placed back where it once belonged, back into the mouths of the Mi’kmaq.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gradient-Based Optimization of ReaxFF Parameters Using Pytorch for the Study of Silica Precipitation</title>
<link href="https://hdl.handle.net/1721.1/154194" rel="alternate"/>
<author>
<name>Orlova, Yuliia</name>
</author>
<id>https://hdl.handle.net/1721.1/154194</id>
<updated>2024-04-18T03:55:25Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Gradient-Based Optimization of ReaxFF Parameters&#13;
Using Pytorch for the Study of Silica Precipitation
Orlova, Yuliia
Silica precipitation is a subject of great interest because it occurs in a wide variety of environmental and industrial processes. Despite many advances in atomistic simulation of different forms of silica, the mechanism of silica precipitation is not yet fully understood. We propose to study this process using the reactive force-field (ReaxFF) method. Despite being a classical force field, ReaxFF can achieve quantum chemical accuracy once the optimal potential coefficients are found. However, fitting ReaxFF parameters is a challenge due to the complex functional form of the potential. Several techniques have been proposed to solve this problem, such as evolutionary algorithms, Monte Carlo methods, and simulated annealing. The stochastic nature of these methods requires millions of error evaluations to fit the parameters, which results in excessive optimization times. Recent advances in machine learning have made it possible to drastically speed up the process by utilizing the gradient of the potential. In this work, we performed gradient-based optimization of reactive force-field parameters using PyTorch, implementing the ReaxFF potential as a PyTorch model. The model’s performance was validated against existing ReaxFF implementations. ReaxFF parameters were fitted to a dataset comprising 15345 geometries calculated using the long-range corrected hybrid functional &#120596;B97XD3.
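A minimal sketch of the general approach, not the thesis code: a toy two-parameter pair potential is fit to synthetic reference energies with PyTorch autograd, standing in for the full ReaxFF functional form and the 15345-geometry dataset.

import torch

def pair_energy(r, params):
    d_e, a = params          # well depth and width parameters to be optimized
    return d_e * (1 - torch.exp(-a * (r - 1.5))) ** 2 - d_e

r_data = torch.linspace(1.0, 4.0, 50)
reference = pair_energy(r_data, torch.tensor([0.8, 1.7]))  # synthetic "QM" data

params = torch.tensor([0.3, 1.0], requires_grad=True)
opt = torch.optim.Adam([params], lr=0.05)
for step in range(500):
    opt.zero_grad()
    loss = torch.mean((pair_energy(r_data, params) - reference) ** 2)
    loss.backward()          # autograd supplies d(loss)/d(params)
    opt.step()
print(params.detach())       # should approach [0.8, 1.7]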
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring Place-Based Transit Service Equity in Chicago</title>
<link href="https://hdl.handle.net/1721.1/154192" rel="alternate"/>
<author>
<name>Swarney, Emma Pauline</name>
</author>
<id>https://hdl.handle.net/1721.1/154192</id>
<updated>2024-04-18T03:00:57Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Measuring Place-Based Transit Service Equity in Chicago
Swarney, Emma Pauline
How to equitably distribute public transit service is a highly topical question facing transit agencies operating in North America. Recent social movements have reignited the debate around civil rights on public transit and resulted in increased scrutiny of transit planning practices. While many agencies are striving to incorporate more progressive equity analyses, existing equity assessment methods have several shortcomings. For example, they have not addressed important questions such as how service levels can be meaningfully compared between city areas differing in geospatial characteristics (e.g. residential neighborhoods versus Central Business Districts), and what a sufficient level of transit service should be for an area to be considered equitably served.&#13;
&#13;
The goal of this thesis is to develop a new method for assessing place-based equity on a city-wide level, using Chicago and its transit system, the Chicago Transit Authority, as a case study. This method addresses several gaps in the literature and in practice, using historical passenger trips that closely reflect true system conditions to measure the state of transit service. This thesis develops a method for determining what an equitable level of transit service should be while accounting for where an area is situated within the greater city geography.&#13;
&#13;
This method is applied to two datasets from different time periods, September 2019 and October 2022. The two time periods are compared to understand if and how service quality has changed. Two types of analyses are performed on the data: one illustrates the service quality of all trips originating in an area, and the other the quality of trips to specific destinations, highlighting the strengths and weaknesses of the transit system. A quantitative equity score for each area in Chicago is presented, demonstrating a full execution of the method. The method is also applied to a proposed project, the Red Line Extension, quantifying the projected equity benefits and demonstrating how the method can be applied in different contexts.
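As a purely hypothetical illustration of what an area-level score might look like (this is not the thesis method; the trip times and peer areas are invented):

from statistics import median

def equity_score(area_trip_minutes, peer_trip_minutes):
    # compare an area's median observed trip time against a benchmark drawn
    # from peer areas with similar geography; 1.0 means on par with peers,
    # below 1.0 means worse service than peers
    return median(peer_trip_minutes) / median(area_trip_minutes)

study_area = [52, 61, 48, 70, 66]   # hypothetical door-to-door trip times (min)
peer_areas = [41, 45, 39, 50, 47]
print(round(equity_score(study_area, peer_areas), 2))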
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stock-constrained optimization of partially disassembled trusses</title>
<link href="https://hdl.handle.net/1721.1/154188" rel="alternate"/>
<author>
<name>Van Marcke, Albertine</name>
</author>
<id>https://hdl.handle.net/1721.1/154188</id>
<updated>2024-04-18T03:43:38Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Stock-constrained optimization of partially disassembled trusses
Van Marcke, Albertine
Reuse of structural components is a relatively unexplored area of research with substantial potential environmental benefit. Structural reuse significantly reduces new material use, carbon emissions, and construction waste. This thesis shows a novel way of reusing structural components through partial disassembly of trusses into triangular components. A computational approach for quickly designing trusses with both new and recycled components is presented. The algorithm aggregates components row by row to fill a user-defined target area, cutting the reused components where necessary and adding new members and triangles where appropriate to prevent voids. When the reusable inventory contains a variety of component sizes, multiple designs can be generated. The workflow uses a genetic algorithm to explore and optimize these different designs, taking into account the user’s stock input and target dimensions. Three case studies reusing realistic trusses illustrate the algorithm’s applicability to existing truss inventories.
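A minimal genetic algorithm sketch in the spirit of the workflow described, with an invented fitness that penalizes offcut waste and new material; the stock lengths and target bays are placeholders, not the thesis data:

import random

STOCK = [2.0, 2.5, 3.0, 4.0]        # available reused component lengths (m)
TARGET = [2.2, 2.9, 3.8, 2.4, 3.1]  # required bay lengths (m)

def fitness(design):
    waste = sum(max(s - t, 0.0) for s, t in zip(design, TARGET))         # offcuts
    new_material = sum(max(t - s, 0.0) for s, t in zip(design, TARGET))  # infill
    return -(waste + 2.0 * new_material)   # new material weighted as worse

def evolve(pop_size=40, generations=100, mut_rate=0.2, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice(STOCK) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(TARGET))
            child = a[:cut] + b[cut:]                  # one-point crossover
            if rng.random() < mut_rate:
                child[rng.randrange(len(TARGET))] = rng.choice(STOCK)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # stock length assigned to each bay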
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aerodynamic and Thermal Considerations for an Antarctic Ice Penetrator</title>
<link href="https://hdl.handle.net/1721.1/154187" rel="alternate"/>
<author>
<name>Makikalli, Aaron R.</name>
</author>
<id>https://hdl.handle.net/1721.1/154187</id>
<updated>2024-04-18T03:12:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Aerodynamic and Thermal Considerations for an Antarctic Ice Penetrator
Makikalli, Aaron R.
The Seismo-Geodetic Ice Penetrator (SGIP) is a helicopter-deployed kinetic penetrator designed to deliver a Global Navigation Satellite System (GNSS) receiver and geodesy-grade seismometer to the Ross Ice Shelf (RIS) in Antarctica such that the seismometer is buried 2 m deep in the ice, ensuring coupling with the ice shelf. This vehicle provides a means to obtain data on ocean-atmosphere-ice dynamics that have historically been challenging to gather due to the remoteness and extreme environment of the RIS.&#13;
&#13;
In order to ensure an appropriate impact velocity and angle, SGIP’s aft-body must be sized to produce a drag force that results in a target terminal velocity of 42 m/s while remaining aerodynamically stable. A finite element flow simulation in SolidWorks and analytical stability calculations are applied to ensure that these requirements are met. Analytical predictions are compared with experimental data from wind tunnel testing and two full-scale drop tests in Alaska. The penetrator must be thermally insulated so that internal electronics are kept within their operating temperature range without melting the surrounding ice. A COMSOL finite element heat transfer model is used to inform the design of thermal insulation for the system to meet these requirements.
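For intuition on the drag sizing problem, a back-of-envelope sketch of the terminal velocity balance (the mass, drag coefficient, and air density below are assumed placeholders, not SGIP's values): at terminal velocity, weight balances drag, m*g = 0.5*rho*v^2*C_d*A, so the required reference area is A = 2*m*g / (rho*v^2*C_d).

import math

def required_drag_area(mass_kg, v_terminal=42.0, c_d=0.8, rho=1.4):
    g = 9.81    # m/s^2; rho ~ 1.4 kg/m^3 as a stand-in for cold Antarctic air
    return 2.0 * mass_kg * g / (rho * v_terminal ** 2 * c_d)

area = required_drag_area(mass_kg=25.0)   # 25 kg is an assumed placeholder mass
print(f"reference area: {area:.3f} m^2, equivalent diameter: "
      f"{2.0 * math.sqrt(area / math.pi):.2f} m")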
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Regional Rail: Strategies for Service Transformation on the Worcester/Framingham Line</title>
<link href="https://hdl.handle.net/1721.1/154182" rel="alternate"/>
<author>
<name>Wilkins, Devin Camille</name>
</author>
<id>https://hdl.handle.net/1721.1/154182</id>
<updated>2024-04-18T03:48:00Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Towards Regional Rail: Strategies for Service Transformation on the Worcester/Framingham Line
Wilkins, Devin Camille
Increasingly, the urgent threat of climate change has brought renewed focus to the efficiency of cities’ transportation networks and the benefits of mode shift away from the private automobile and towards transit. Over the last three years, changes in journey patterns resulting from the social impacts of the COVID-19 pandemic have triggered questions about the future of public transit systems and whether changes to established service delivery strategies and fare products are needed. In many ways, commuter rail as a service delivery strategy feels like a relic of a past era, as miles of track and a fleet of train sets sit virtually idle outside of peak weekday commute hours.&#13;
&#13;
The goal of this research is to explore the potential transformation of the Massachusetts Bay Transportation Authority (MBTA) Commuter Rail system into a so-called "regional rail" system: a vast network of heavy rail that leverages its abundant track infrastructure to run high-frequency, bi-directional service all day between major population centers in the region. The aim of regional rail service is to serve all members of society equally, not just white-collar commuters.&#13;
&#13;
The Worcester/Framingham line, which runs from Boston’s South Station through 44 miles of the Metro West corridor, is used as a case study. Three post-pandemic demand scenarios are proposed, and service simulation and schedule optimization tools are developed to generate three service plans and policy recommendations that promote increased passenger demand in the near future. This analysis culminates in the proposal of a four-part investment plan for infrastructure and service on the line, leading to a high-frequency service that serves riders within the urban core, acting as a "second subway" to augment Boston’s urban rail network.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Apparel Pack Sizes Across Retailer’s North America Network</title>
<link href="https://hdl.handle.net/1721.1/154180" rel="alternate"/>
<author>
<name>Teno, Jason</name>
</author>
<id>https://hdl.handle.net/1721.1/154180</id>
<updated>2024-04-18T03:21:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Optimizing Apparel Pack Sizes Across Retailer’s North America Network
Teno, Jason
Athletic apparel companies outsource apparel manufacturing to many factories that pack in varied sizes and quantities. Packaging is a critical, early step in retailers’ supply chains. Pack quantities impact downstream supply chain costs. Optimizing the relationship between pack quantities and downstream costs allows retailers to reduce unnecessary repackaging within their local distribution centers. This research created a discrete optimization model aimed at minimizing distribution center costs as a function of pack sizes. As sales orders trend lower due to an increase in e-commerce sales, the optimization model suggested decreasing pack sizes to accommodate these trends and decreasing the variation in pack sizes across product classifications. Immediate implementation would result in a 13.2% reduction in repackaging costs. After implementation, communication with customers to match sales orders to pack sizes would result in a 39.2% reduction in repackaging costs.
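A toy version of the pack-size tradeoff (the cost model and order data below are illustrative, not the thesis model): larger packs mean fewer packs to handle, but any order quantity that is not a multiple of the pack size forces a repack of the remainder.

def expected_cost(pack_size, order_quantities, handle_cost=1.0, repack_cost=4.0):
    total = 0.0
    for q in order_quantities:
        total += handle_cost * (q // pack_size)   # full packs handled as-is
        if q % pack_size:
            total += repack_cost                  # one repack per broken pack
    return total

orders = [3, 5, 8, 2, 12, 6, 4, 9, 6, 3]          # e-commerce-era small orders
best = min(range(1, 13), key=lambda p: expected_cost(p, orders))
print("cost by pack size:", {p: expected_cost(p, orders) for p in (1, 3, 6, 12)})
print("best pack size:", best)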
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ignition by Nanosecond Repetitively Pulsed Discharges</title>
<link href="https://hdl.handle.net/1721.1/154175" rel="alternate"/>
<author>
<name>Dijoud, Raphael J.</name>
</author>
<id>https://hdl.handle.net/1721.1/154175</id>
<updated>2024-04-18T03:42:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Ignition by Nanosecond Repetitively Pulsed Discharges
Dijoud, Raphael J.
Previous works have shown that nanosecond pulsed plasmas can strongly benefit ignition, including reducing ignition delay times, decreasing minimum ignition energies, and extending lean ignition limits. These effects are highly dependent on experimental conditions such as temperature, mixture, pulse repetition frequency, pulse energy, and discharge size. Therefore, a model allowing for parametric exploration is needed to separate the influence of each variable on plasma-assisted ignition. This work presents the development of both (i) a zero-dimensional (0D) chemical model for plasma-assisted combustion relevant to aircraft engine applications, and (ii) a one-dimensional (1D) radial fluid model of reacting flows describing radial ignition triggered by nanosecond repetitively pulsed (NRP) discharges.&#13;
&#13;
The models developed are used to explore the influence of various parameters in an optimization effort. Using the 0D model, the influence of initial gas temperature and energy deposited per pulse on the reduction of ignition delay time is analyzed. Various mixtures of fuel/oxygen/nitrogen are also explored, changing the equivalence ratio and dilution factor, and compared with an instantaneous pure thermal input from the discharge to quantify the chemical effect of the discharge. The 1D model is initially demonstrated in a scenario where no plasma is present, focusing on the ignition of a methane/air mixture by a high-temperature kernel. Additionally, a test case is presented, comparing different NRP ignition strategies. In this case, the total power budget of the discharge is maintained within a narrow range by adjusting the pulse repetition frequency inversely proportional to the square of the plasma region size. Different plasma kernel sizes and pulse repetition frequencies are explored, and their effect on ignition and flame propagation enhancement is discussed.
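A small sketch of the power-budget scaling described for the test case, under the assumption that energy deposited per pulse grows with the kernel cross-section (proportional to r^2), so holding power P = E_pulse * f roughly constant requires f proportional to 1/r^2; the reference PRF and pulse energy are placeholders:

REF_RADIUS_MM = 1.0
REF_PRF_HZ = 30_000.0
REF_PULSE_ENERGY_MJ = 1.0      # placeholder pulse energy, millijoules

def prf_for_radius(r_mm):
    return REF_PRF_HZ * (REF_RADIUS_MM / r_mm) ** 2

def power_budget_w(r_mm):
    e_pulse_j = REF_PULSE_ENERGY_MJ * 1e-3 * (r_mm / REF_RADIUS_MM) ** 2
    return e_pulse_j * prf_for_radius(r_mm)    # constant by construction

for r in (0.5, 1.0, 2.0):
    print(f"r = {r} mm, PRF = {prf_for_radius(r):,.0f} Hz, "
          f"P = {power_budget_w(r):.1f} W")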
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Thermal Stability of Nanocrystalline Ag-Cu Alloys</title>
<link href="https://hdl.handle.net/1721.1/154174" rel="alternate"/>
<author>
<name>Sulzman, Serita L.</name>
</author>
<id>https://hdl.handle.net/1721.1/154174</id>
<updated>2024-04-18T03:07:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Evaluating the Thermal Stability of Nanocrystalline Ag-Cu Alloys
Sulzman, Serita L.
Nanocrystalline alloys offer multitudinous advantages over their larger-grained counterparts, including increased strength, hardness, and resistance to fatigue. However, a significant barrier to their implementation is their low thermal stability: they are prone to coarsening at very low homologous temperatures. Fortunately, a thermodynamic approach to stabilizing the microstructures of nanocrystalline metals by adding an alloying element shows great promise. Recent improvements in computational models have facilitated identification of alloy systems in which solute segregation to the grain boundaries is energetically favorable. However, more experimental validation is needed to verify whether these predictions translate to enhanced thermal stability of alloys in practice. In this work, computational calculations of segregation energies and various processing considerations guided the selection of the silver-copper system for further study. Procedures were developed to synthesize chemically homogeneous nanocrystalline Ag-Cu alloys, and heat treatments with in-situ X-ray diffraction were designed to evaluate their resistance to grain growth at increasing temperatures. Examination of the microstructures of the heat-treated samples with focused ion beam and scanning electron microscopy corroborated Scherrer grain size calculations, which showed that the alloys with 5 at.% and 25 at.% copper maintained much smaller equilibrium grain sizes than pure silver at all temperatures in the scope of study. As computationally predicted, these data show that the addition of copper can improve the thermal stability of nanocrystalline silver. The experimental validation of these thermodynamic and other system selection criteria provides a framework for the development of novel thermally stable nanocrystalline alloys for countless engineering applications.
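For reference, the Scherrer estimate mentioned above relates grain size D to X-ray peak broadening as D = K*lambda / (beta*cos(theta)), with K the shape factor (~0.9), lambda the X-ray wavelength, beta the peak full width at half maximum in radians, and theta the Bragg angle. A small helper; the example peak values are illustrative, not the thesis data:

import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle from 2-theta
    beta = math.radians(fwhm_deg)               # FWHM converted to radians
    return k * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha wavelength, a hypothetical Ag-like diffraction peak:
print(f"{scherrer_size_nm(38.1, 0.45):.1f} nm")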
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Role Of Repurposing Coal Plants to Thermal Energy Storage in the Context of India</title>
<link href="https://hdl.handle.net/1721.1/154168" rel="alternate"/>
<author>
<name>Patel, Serena Naresh</name>
</author>
<id>https://hdl.handle.net/1721.1/154168</id>
<updated>2024-04-17T03:27:42Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">The Role Of Repurposing Coal Plants to Thermal Energy Storage in the Context of India
Patel, Serena Naresh
Substantial coal phase-out initiatives have been growing as the world mobilizes to meet the Paris climate goals. However, the stranded-asset risk associated with this critical transition could fall disproportionately on Asian economies with younger coal fleets, like India. Here, we use a bottom-up and top-down techno-economic modeling approach to explore the value of installing commercially available, molten-salt thermal energy storage (TES) systems for repurposing existing coal power plants in the Indian context. We combine thermodynamic simulation and an economic optimization model to evaluate design and operations of TES systems for a variety of technology assumptions, coal plant archetypes, and electricity price scenarios. Key drivers of economic viability identified include longer remaining plant lifetime, increasing peak TES temperature, lower TES energy capacity cost, co-production of waste heat for end-uses, and increasing temporal variability of electricity prices. The plant-level analysis was then extended to screen the potential for TES retrofits across the coal power fleet in Uttar Pradesh, the most populous Indian state and among those with the largest coal capacity. Analysis for a single electricity price scenario indicates that over 89% of the coal capacity in the state can be retrofitted while recovering the costs of the TES retrofits. Under the top-down, capacity expansion modeling approach, we find TES retrofits can save 3-6% in system costs in zero-emission scenarios and operate as long-duration energy storage, complementing shorter-duration Li-ion based energy storage. Our results justify further investigation into articulating the value of repurposing coal plants from the interests and positions of different just energy transition stakeholders.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Doing the Dirty Work: Employment vulnerability to the energy transition and its implications for climate policy and politics</title>
<link href="https://hdl.handle.net/1721.1/154167" rel="alternate"/>
<author>
<name>Graham, Kailin</name>
</author>
<id>https://hdl.handle.net/1721.1/154167</id>
<updated>2024-04-17T03:42:54Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Doing the Dirty Work: Employment vulnerability to the energy transition and its implications for climate policy and politics
Graham, Kailin
As the world moves away from fossil fuels, there is growing recognition of the need for policy to support a just transition for those working in carbon-intensive industries. However, little work has thoroughly investigated which communities are most vulnerable to economic disruption in the energy transition and therefore require policy support. This thesis analyzes the distribution of employment vulnerability in the United States by calculating the average "employment carbon footprint" of close to every job in the U.S. economy at high geographic and sectoral granularity. I find that existing efforts to identify at-risk communities, both in the literature and in the Inflation Reduction Act, exclude regions of high employment vulnerability, and thereby risk leaving these communities behind in the energy transition. I also identify significant within-sector heterogeneity in employment carbon footprints that is unexplained by fuel mix or power grid carbon intensity, and find that carbon-intensive regions tend to be more rural, less racially and ethnically diverse, less educated, and more likely to vote Republican, and that these regions often lack institutional capacity to retrain laid-off workers. This thesis also uses these new data to empirically test the salience of employment impacts for political representatives. I find that legislators from districts with carbon-intensive employment are less likely to vote in favor of climate policy, while household carbon footprints have no effect despite being correlated with public opinion on climate action; I also note the significance of the partisan divide on climate voting. Altogether, this thesis argues that just transition policy is crucial to progressing action on climate change by addressing politically salient employment impact concerns; underscores the importance of proactive and continuous measures of employment vulnerability in targeting such policy; provides policymakers with the much-needed data to do so; and makes the case that such policies should be place-based and tailored to the communities they strive to serve.
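A toy version of the employment carbon footprint calculation described above (all numbers invented): allocate each region-sector's emissions evenly across its jobs, then average over a region's employment mix.

emissions_t = {("region_a", "power"): 900_000, ("region_a", "retail"): 12_000,
               ("region_b", "power"): 50_000, ("region_b", "retail"): 15_000}
jobs = {("region_a", "power"): 1_500, ("region_a", "retail"): 8_000,
        ("region_b", "power"): 900, ("region_b", "retail"): 9_500}

def region_footprint(region):
    keys = [k for k in jobs if k[0] == region]
    total_jobs = sum(jobs[k] for k in keys)
    # tonnes CO2 per job, averaged over the region's employment mix
    return sum(emissions_t[k] for k in keys) / total_jobs

for r in ("region_a", "region_b"):
    print(r, round(region_footprint(r), 1), "t CO2 per job")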
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>BReach-LP: a Framework for Backward Reachability Analysis of Neural Feedback Loops</title>
<link href="https://hdl.handle.net/1721.1/154166" rel="alternate"/>
<author>
<name>Rober, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/154166</id>
<updated>2024-04-17T03:37:50Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">BReach-LP: a Framework for Backward Reachability Analysis of Neural Feedback Loops
Rober, Nicholas
Neural networks (NNs) can be used to solve a wide variety of robotics problems ranging from computer vision to control. However, while NNs often work well in nominal scenarios, their performance can decrease significantly in scenarios that they were not trained for. Thus, as we move toward real-world deployment of neural feedback loops (NFLs), i.e., closed-loop systems containing NNs, it is critical that we develop methods to verify that these systems are safe. Previous works have developed forward reachability techniques to verify safety for NFLs, but these techniques can be prohibitively conservative in non-convex settings such as obstacle avoidance. To enable safety verification in non-convex settings, this thesis proposes BReach-LP: a set of techniques to conduct backward reachability analysis for NFLs. While backward reachability analysis has been studied for systems not containing NNs, the general noninvertibility of NNs makes backward reachability analysis for NFLs a challenging problem. Thus, our approach leverages existing forward NN analysis tools to find affine bounds on the control inputs and solves a series of linear programs to efficiently find an approximation of the backprojection sets, i.e., the set of states for which an NN control policy will drive the system to a given target set. This thesis outlines four variations of BReach-LP, including proofs of their soundness and numerical results demonstrating their application.
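A much-simplified sketch of the backprojection idea using scipy.optimize.linprog, not the BReach-LP implementation itself: for invented linear dynamics and box control bounds (standing in for the affine bounds a forward NN analysis tool would provide), each state coordinate of the one-step backprojection set is bounded by a pair of LPs.

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # double-integrator-like dynamics
B = np.array([[0.5], [1.0]])
U_LO, U_HI = -1.0, 1.0                    # control bounds from the NN relaxation
TARGET_LO = np.array([4.5, -0.25])        # target set: a box
TARGET_HI = np.array([5.0, 0.25])

def backprojection_box():
    # decision variables z = (x1, x2, u); require TARGET_LO <= A x + B u <= TARGET_HI
    M = np.hstack([A, B])
    A_ub = np.vstack([M, -M])
    b_ub = np.hstack([TARGET_HI, -TARGET_LO])
    bounds = [(-10, 10), (-10, 10), (U_LO, U_HI)]
    box = []
    for i in range(2):                    # min and max of each state coordinate
        c = np.eye(3)[i]
        lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x[i]
        hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x[i]
        box.append((round(lo, 3), round(hi, 3)))
    return box

print(backprojection_box())   # axis-aligned over-approximation, one step back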
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Who, When, How (Not) to Imitate? The Role of Imitation in Collective Intelligence, and Its Implications on the Design of Socio-Technical Systems</title>
<link href="https://hdl.handle.net/1721.1/154165" rel="alternate"/>
<author>
<name>Choi, Eunseo Dana</name>
</author>
<id>https://hdl.handle.net/1721.1/154165</id>
<updated>2024-04-17T03:50:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Who, When, How (Not) to Imitate? The Role of Imitation in Collective Intelligence, and Its Implications on the Design of Socio-Technical Systems
Choi, Eunseo Dana
Humans collectively demonstrate coordination and progress on a massive scale, building, adapting, and thriving under the rules of different institutions. Researchers posit social learning as a mechanism for overcoming individual limitations, quickly adapting to environments, passing knowledge across generations, and enabling rapid cumulative cultural evolution. This thesis demonstrates how multi-agent learning (MAL) can facilitate counterfactual experiments that shed light on the performance of different social learning strategies. Simulations show that the details of who, when, and how to imitate affect group fitness in distinct ways depending on the size and homogeneity of the group: (1) unbiased imitation works well in homogeneous groups as long as there is a minimum age for agents to be imitated; (2) imitation strategies based on models’ complete action history instead of their recent actions, although similar, can attain very different levels of group fitness; (3) very high imitation probabilities (up to 98% in some cases) may be efficient for group learning. Results from this thesis complement and contradict accepted results from the literature. By explicitly comparing the mechanisms that govern the success or failure of group learning, findings from multi-agent learning can provide essential guidance for the design of socio-technical systems.
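A toy imitation dynamic illustrating the who/when/how knobs (the environment, payoff, and parameters are invented, not the thesis testbed): each agent holds a guess of a hidden optimum and either imitates an observed peer with a higher payoff or explores locally.

import random

def run(n_agents=50, rounds=200, p_imitate=0.9, seed=3):
    rng = random.Random(seed)
    optimum = 0.7
    payoff = lambda x: -(x - optimum) ** 2
    agents = [rng.random() for _ in range(n_agents)]
    for _ in range(rounds):
        for i in range(n_agents):
            if rng.random() < p_imitate:
                j = rng.randrange(n_agents)        # "who": a random peer
                if payoff(agents[j]) > payoff(agents[i]):
                    agents[i] = agents[j]          # "how": copy wholesale
            else:
                agents[i] += rng.gauss(0.0, 0.05)  # individual exploration
    return sum(payoff(a) for a in agents) / n_agents

for p in (0.5, 0.9, 0.98):
    print(f"p_imitate={p}: mean group fitness {run(p_imitate=p):.4f}")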
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mapping the Electrodialysis Architecture Design Space by Determining Optimal System Configurations for Different Production Outputs</title>
<link href="https://hdl.handle.net/1721.1/154160" rel="alternate"/>
<author>
<name>Tran, Jimmy</name>
</author>
<id>https://hdl.handle.net/1721.1/154160</id>
<updated>2024-04-17T03:34:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Mapping the Electrodialysis Architecture Design Space by Determining Optimal System Configurations for Different Production Outputs
Tran, Jimmy
Water scarcity is increasing around the world, and it especially affects remote, resource-constrained communities. Many communities with the highest water stress also live in close proximity to slightly saline water sources while having abundant solar irradiance. Photovoltaic-powered electrodialysis reversal (PV-EDR) systems have been shown to produce water more cost-effectively and energetically efficiently than other desalination technologies. The goal of this work is to establish a framework for designing and optimizing PV-EDR systems so that designers can develop low-cost systems that desalinate brackish water in remote, resource-constrained communities of various sizes around the world. By using this framework, the most cost-effective architecture that produces water across a large range of production volumes at the lowest cost can be identified. To potentially produce water more effectively at larger production volumes using variable power, a new hybrid-operation architecture was proposed and explored that combines the benefits of continuous and batch operation. Additionally, this framework can also be used to identify the most cost-effective strategy for employing batteries and managing the energy stored versus used for desalination. Optimizing EDR systems that minimize the capital cost while maximizing their production volume across the design space, including different architectures (batch, continuous, hybrid), energy management strategies (predictive, non-predictive, no batteries), feed salinities (1000-4000 mg/L), target salinities (100-500 mg/L), and recovery ratios (50%-90%), allows us to identify the most cost-effective EDR system designs across a range of production volumes. By comparing the EDR system designs across the design space, we can identify when each architecture and energy management strategy could be employed. Below 15 m^3 of water production per day, batch systems should be employed over hybrid systems. If users are not sensitive to salinity changes throughout the day, continuous systems should be used when producing more than 65 m^3 of water per day. Conversely, if users are sensitive to salinity changes, or a large buffer volume like a reservoir or pond is not available, hybrid systems should be used when producing more than 80 m^3 of water per day. Between these production volume thresholds, the specific target salinity, feed salinity and recovery ratio can be used to inform which architecture to use. Incorporating a battery into a PV-EDR system can lower the capital cost of the system by approximately 12.3% for systems that produce between 10 and 100 m^3 of water per day, while producing the same amount of water as a similar EDR system without a battery.
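The selection heuristics quoted above can be read as a simple decision rule; the sketch below encodes the abstract's production volume thresholds directly (the salinity-sensitivity flag is a simplification of the buffer volume and user sensitivity conditions):

def choose_architecture(volume_m3_per_day, salinity_sensitive_users):
    if volume_m3_per_day < 15:
        return "batch"
    if not salinity_sensitive_users and volume_m3_per_day > 65:
        return "continuous"
    if salinity_sensitive_users and volume_m3_per_day > 80:
        return "hybrid"
    return "compare batch/hybrid using target salinity, feed salinity, recovery"

for v in (10, 40, 70, 90):
    print(v, "m^3/day:", choose_architecture(v, salinity_sensitive_users=True))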
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Portable Device to Detect Per- and Polyfluoroalkyl Substances (PFAS) in Water</title>
<link href="https://hdl.handle.net/1721.1/154159" rel="alternate"/>
<author>
<name>Benner, Tioga</name>
</author>
<id>https://hdl.handle.net/1721.1/154159</id>
<updated>2024-04-17T03:01:54Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design of a Portable Device to Detect Per- and Polyfluoroalkyl Substances (PFAS) in Water
Benner, Tioga
PFAS, or per- and polyfluoroalkyl substances, are a group of man-made chemicals used since the 1940s. The chemicals are highly stable and build up in the environment and in organic systems. They can also cause several health problems in humans at high concentrations, making them chemicals of significant concern given their ubiquity. This article details the engineering research efforts in creating an initial design and prototype for a portable PFAS testing device to be used in the field for long-term PFAS measurement of both drinking water and groundwater. This research also provides initial validation and derisking for this PFAS measurement system and is intended as a starting point for an eventual effort to create a marketable PFAS testing device by the project sponsor, Xylem Corporation. The measurement of PFAS is made possible by a polymer developed by members of Tim Swager’s lab at MIT that exhibits a fluorescence quenching response in the presence of PFAS.&#13;
&#13;
Multiple initial concepts were created and rated on a variety of factors to find the fluidic system design most effective for this use case. The final design uses a needle-based system that inserts microliter-scale sample fluids into a cartridge of many single-use microwells. The microwells are multilayered devices designed not to interact with PFAS and can be integrated into a cartridge, with more than a hundred individual microwells fitting in a single 25 × 25 mm^2 sheet, allowing many tests to be done before manual replacement of the cartridge is required. The cartridges can be easily removed and replaced for ease of use in the field. This design simplifies production of the device, can be easily automated, and can fit within a conventional backpack for easy transport, fulfilling the goals set out by Xylem for the final device.&#13;
&#13;
This article also discusses initial experiments into polymer validation, showing potential methods of differentiating types of PFAS, along with tests of polymer sensitivity using fluorescence images. These experiments found limits of detection close to 0.1 ppb, and there are multiple promising ideas to improve that sensitivity, which are being pursued through experimentation.
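For context on the quoted detection limit, a common convention is LOD = 3 * (blank standard deviation) / (calibration slope); the sketch below uses synthetic numbers, not the thesis data.

from statistics import mean, stdev

blank_signal = [100.2, 99.8, 100.5, 99.9, 100.1]   # repeated blank readings
conc_ppb = [0.0, 0.5, 1.0, 2.0, 4.0]               # calibration concentrations
signal = [100.0, 96.1, 91.8, 84.2, 68.3]           # quenched fluorescence

# least-squares slope of signal vs. concentration (negative: quenching)
cbar, sbar = mean(conc_ppb), mean(signal)
sxx = sum((c - cbar) ** 2 for c in conc_ppb)
sxy = sum((c - cbar) * (s - sbar) for c, s in zip(conc_ppb, signal))
slope = sxy / sxx

lod = 3.0 * stdev(blank_signal) / abs(slope)
print(f"slope = {slope:.2f} per ppb, LOD ~ {lod:.2f} ppb")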
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparing Distributions: Invariance Principles &amp; Mismatched Guesswork</title>
<link href="https://hdl.handle.net/1721.1/154158" rel="alternate"/>
<author>
<name>Mariona, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/154158</id>
<updated>2024-04-17T03:46:50Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Comparing Distributions: Invariance Principles &amp; Mismatched Guesswork
Mariona, Alexander
We study two different ways of measuring the similarity between distributions over a finite alphabet. The first is an invariance principle which gives a quantitative bound on the expected difference between general functions of two finite sequences of random variables. This result is one way to generalize the foundational basic invariance principle to a particular multivariate setting. The second framework is based on guesswork, which is one way to measure the randomness of a distribution, similar to but notably distinct from the Shannon entropy. Given a bound on the total variation distance between two finite distributions, we give a bound on the difference in guesswork between those distributions and study the geometrical properties of the problem in the non-asymptotic setting.
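For concreteness, the guesswork of a distribution is the expected number of sequential guesses needed to identify a sample when outcomes are tried in order of decreasing probability; a small illustration with made-up distributions:

def guesswork(p):
    ranked = sorted(p, reverse=True)
    return sum(k * pk for k, pk in enumerate(ranked, start=1))

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
print(guesswork(uniform))   # 2.5: the hardest case at this support size
print(guesswork(skewed))    # 1.6: concentrated distributions are easier to guess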
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bi-Level Belief Space Search for Assembly Tasks</title>
<link href="https://hdl.handle.net/1721.1/154156" rel="alternate"/>
<author>
<name>Chintalapudi, Sahit</name>
</author>
<id>https://hdl.handle.net/1721.1/154156</id>
<updated>2024-04-17T03:01:45Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Bi-Level Belief Space Search for Assembly Tasks
Chintalapudi, Sahit
Contact-rich manipulation tasks, such as assembly, require a robot to reason about both the geometric relationship between parts as well as the dynamical relationship between the forces the robot exerts and the motion of the parts. The application of forces enables the robot to reduce its uncertainty by purposefully contacting the environment, a crucial skill in real-world domains where state is not fully observed. In this thesis, a planner is introduced that reasons over both gripper poses and joint stiffnesses, trading off motion generation to reach an objective and force production to manage uncertainty. Our planner performs a greedy optimization over stiffness and learns a model of the relationship between control output and goal achievement to bias the pose search. This planner is validated on a peg-in-hole insertion task in simulation and the real world and a puzzle assembly task in simulation. We measure the effects of solving for stiffnesses and generating robust gripper poses in terms of the uncertainty our planner can address.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Novel Device for the Treatment of Obstructive Sleep Apnea</title>
<link href="https://hdl.handle.net/1721.1/154155" rel="alternate"/>
<author>
<name>Gao, Qiyun</name>
</author>
<id>https://hdl.handle.net/1721.1/154155</id>
<updated>2024-04-17T04:03:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Novel Device for the Treatment of Obstructive Sleep Apnea
Gao, Qiyun
In this thesis, a novel device for the treatment of Obstructive Sleep Apnea (OSA) is developed and tested. This device uses intra-oral suction to stabilize the tongue and/or soft palate in a position that does not obstruct the airway, thus reducing apnea episodes. The treatment device consists of a patient-specific oral device and a non-patient-specific pump unit. Patients wear the oral device on the upper palate, where it directs suction towards the tongue and/or soft palate. A length of tubing connects the oral device to the pump unit, which is placed bedside and is envisioned to become a wearable device in future iterations.&#13;
&#13;
Experimental results from a small-scale clinical trial verified that the device performs its intended function of stabilizing the tongue and does not cause an increase in the Apnea-Hypopnea Index (AHI) in healthy volunteers. MRI imaging of volunteers wearing the device showed that the device enlarges the airway by 60-80%. A finite element model of the tongue, soft palate, and airway, with muscle fiber directions derived from diffusion tensor MRI, is implemented as a proof of concept that the device can treat OSA. The level of vacuum required to stabilize the tongue estimated by the finite element (FE) model is consistent with experimental results.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-vector Energy Systems Analysis for Heavy-duty Transportation Deep Decarbonization Using H₂ and Synthetic Fuels</title>
<link href="https://hdl.handle.net/1721.1/154152" rel="alternate"/>
<author>
<name>Shaker, Youssef H.</name>
</author>
<id>https://hdl.handle.net/1721.1/154152</id>
<updated>2024-04-17T03:58:25Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Multi-vector Energy Systems Analysis for Heavy-duty Transportation Deep Decarbonization Using H₂ and Synthetic Fuels
Shaker, Youssef H.
Policies focused on deep decarbonization of regional economies tend to emphasize electricity sector decarbonization in conjunction with electrification of end-uses and, increasingly, the use of hydrogen (H₂) produced via electricity for displacing fossil fuels in difficult-to-electrify sectors. One such use case is heavy-duty transport, which represents a substantial and growing share of global transport sector emissions given the increasing electrification of the light-duty vehicle fleet. Here, we assess the bulk energy system impact of decarbonizing the heavy-duty vehicle (HDV) segment via use of either H₂ or drop-in synthetic liquid fuels produced from H₂ along with CO₂. Our analysis relies on soft-linking two modeling approaches: a) a bottom-up model of transportation energy demand that produces a variety of final energy demand scenarios for the same service demand and b) a multi-sectoral capacity expansion model, DOLPHYN, that co-optimizes power, H₂ and CO₂ supply chains subject to a variety of technological and policy constraints to meet the exogenous final energy demand slate. Through a case study of Western European countries under deep decarbonization constraints for the year 2040, we quantify the energy system implications of varying levels of H₂ and synthetic fuels adoption in HDVs, under scenarios with and without CO₂ sequestration capacity availability. We find that substitution of liquid fossil fuels in the HDV segment is essential to meet the imposed deep decarbonization constraint across the modeled power, H₂, and transport sectors, particularly in the absence of CO₂ storage. Additionally, we find that utilizing H₂ HDVs reduces bulk system costs of deep decarbonization while reducing fossil liquids demand, but could increase natural gas consumption in some cases. While H₂ HDV adoption reduces the need for direct air capture (DAC), synthetic fuel adoption results in a greater need for DAC and also leads to system cost increases compared to scenarios without their adoption. The study highlights the trade-offs associated with different transportation decarbonization pathways and underlines the importance of multi-sectoral consideration in decarbonization studies.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Intracellular sensor spatial multiplexing via RNA scaffolds</title>
<link href="https://hdl.handle.net/1721.1/154122" rel="alternate"/>
<author>
<name>Johnson, Shannon L.</name>
</author>
<id>https://hdl.handle.net/1721.1/154122</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">Intracellular sensor spatial multiplexing via RNA scaffolds
Johnson, Shannon L.
To circumvent the limitations of spectrally multiplexing sensors, fluorescent sensors are clustered by type and spatially separated in the cytoplasm to avoid cross-talk. Each sensor is fused to an orthogonal viral capsid protein that binds to a long, repetitive strand of its corresponding RNA sequence. All sensors fluoresce green and are indistinguishable during recording but are identified with post-hoc antibody or FISH staining for each sensor-specific puncta. This spatial multiplexing strategy will allow for easier scaling of the number of fluorescent reporters of physiological activity.
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 38-40).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>[mu]Jawstures : jaw-teeth microgestures for discreet hands-and-eyes-free mobile device interaction</title>
<link href="https://hdl.handle.net/1721.1/154119" rel="alternate"/>
<author>
<name>Vega Gálvez, Tomás Alfonso.</name>
</author>
<id>https://hdl.handle.net/1721.1/154119</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">[mu]Jawstures : jaw-teeth microgestures for discreet hands-and-eyes-free mobile device interaction
Vega Gálvez, Tomás Alfonso.
We often perform activities that situationally impair us, decreasing our ability to interact with mobile devices when needed. These impairments manifest physically, by preventing us from using our hands and eyes when they are already devoted to other ongoing processes (e.g., biking or driving), and socially, by making certain interaction modalities inappropriate given social norms, etiquette, and rules of engagement. Researchers have investigated using jaw and teeth microgestures as a discreet hands-and-eyes-free solution for mobile device interaction while situationally impaired. However, an opportunity remains to investigate ways to wirelessly and unobtrusively sense these gestures, and to further explore and evaluate the design space for jaw and teeth microgestures in the context of general-purpose human-computer interaction. This thesis makes four major contributions to the exploration of jaw and teeth microgestures. Through an iterative prototyping process, the work contributes attachable, miniaturized, wireless sensor nodes that are placed bilaterally behind the ears to unobtrusively sense jaw-teeth microgestures with 88% accuracy in a stationary context. The thesis also presents a hyper-personalized mobile application that permits training jaw-teeth gestures and mapping them to mobile device commands. The work further contributes a universal teeth contact and jaw-teeth gesture taxonomy, which is evaluated for its comfort and usability. Finally, it contributes an exploration of the potential use cases of jaw-teeth-gesture-based mobile device interaction.
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; [mu] appeared in title on title page appears as lower case Greek letter. Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 157-166).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High throughput single molecule in situ-verified nucleic acid synthesis</title>
<link href="https://hdl.handle.net/1721.1/154118" rel="alternate"/>
<author>
<name>Griswold, Kettner J. F.,
            Jr.</name>
</author>
<id>https://hdl.handle.net/1721.1/154118</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>2019-01-01T00:00:00Z</published>
<summary type="text">High throughput single molecule in situ-verified nucleic acid synthesis
Griswold, Kettner J. F.,
            Jr.
Synthetic biology is a burgeoning field with applications in medicine, agriculture, chemistry, and other fields. Synthetic biology aims to rationally engineer novel functionality into organisms, from the molecular level to the whole-genome scale. As an engineering discipline, synthetic biology development follows a canonical design-build-test cycle. In a typical workflow, designs are generated in computer programs and specified at the DNA level. Subsequently, DNA encoding the design must be built to specification and tested for desired functionality in vivo or in vitro. In current practice, building DNA, by de novo DNA synthesis and related methods, is a rate-limiting and costly bottleneck for researchers. State-of-the-art de novo DNA synthesis technologies are trial-and-error, nondeterministic processes where turnaround times for specified DNA range on the order of weeks and cost up to several thousand dollars per gene or multigene order. Among the many challenges inherent to building novel DNA sequences are truncation errors (failure to extend) and damaging side reactions during synthesis of the short DNA oligonucleotide (100 bp) precursors used in DNA assembly. There are also challenges in assembling oligonucleotides due to the tendency of DNA to form secondary structures and undesired annealing products during assembly reactions. Consequently, DNA synthesis companies spend upwards of 80 percent of manufacturing time sequencing thousands of DNA assemblies until a correct DNA assembly is found. This thesis describes a method for rapid, scalable, de novo DNA synthesis embodied as highly parallelized single-molecule enzymatic synthesis of 10 kb sequences with real-time in situ sequence verification.
Thesis: S.M., Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2019; Cataloged from the official PDF of thesis.; Includes bibliographical references (pages 42-43).
</summary>
<dc:date>2019-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A phase-sensitive system for measuring acoustic pressure in an impedance tube</title>
<link href="https://hdl.handle.net/1721.1/154112" rel="alternate"/>
<author>
<name>Cavalieri, Albert L.</name>
</author>
<id>https://hdl.handle.net/1721.1/154112</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1949-01-01T00:00:00Z</published>
<summary type="text">A phase-sensitive system for measuring acoustic pressure in an impedance tube
Cavalieri, Albert L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1949; Bibliography: leaf [19].
</summary>
<dc:date>1949-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Headquarters, Army of the Philippines at Camp Murphy, Manila</title>
<link href="https://hdl.handle.net/1721.1/154110" rel="alternate"/>
<author>
<name>Arguelles y Corcuera, Carlos Domingo.</name>
</author>
<id>https://hdl.handle.net/1721.1/154110</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1946-01-01T00:00:00Z</published>
<summary type="text">Headquarters, Army of the Philippines at Camp Murphy, Manila
Arguelles y Corcuera, Carlos Domingo.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1946; Bibliography: leaves 107-108.
</summary>
<dc:date>1946-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Yard unreliability in rail freight movement.</title>
<link href="https://hdl.handle.net/1721.1/154109" rel="alternate"/>
<author>
<name>Reid, Robert Malcolm.</name>
</author>
<id>https://hdl.handle.net/1721.1/154109</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">Yard unreliability in rail freight movement.
Reid, Robert Malcolm.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1971
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deciphering Hydrological Responses: Elastic and Poroelastic Behavior Through GPS Temporal Analysis</title>
<link href="https://hdl.handle.net/1721.1/154036" rel="alternate"/>
<author>
<name>Sandoe, Lucy A.</name>
</author>
<id>https://hdl.handle.net/1721.1/154036</id>
<updated>2024-04-03T03:22:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Deciphering Hydrological Responses: Elastic and Poroelastic Behavior Through GPS Temporal Analysis
Sandoe, Lucy A.
Hydrologic features, such as lakes and reservoirs, load the surface of the earth, which causes measurable deformation. As the surface is loaded, there is a vertical and horizontal deflection, with reference points moving downwards and towards a load as the surface is depressed. The horizontal and vertical deformation from reservoir loading can be seen in Global Positioning System (GPS) data. In the first chapter of this thesis, we use 18 years of data from GPS sites across Northern California, and we invert for the loads associated with different hydrologic regions on a finer scale than previous studies. We take a novel approach to regularization: the inversion is performed using the vertical components of deformation, but we regularize using the misfits of the horizontal components of deformation, which are semi-independent of the signal used in the inversion, thus avoiding overfitting noisy signals or over-smoothing sharp features. We validate the inversion on Lake Shasta, a large, confined reservoir with known capacities, before performing a preliminary study of the Northern Sierras, Klamath Mountains, and Black Rock Desert. By robustly inverting remote sensing data for hydrologic mass, we provide insights on water storage budgets at reservoir scale across a critical and drought-prone region.&#13;
&#13;
However, there are some regions that can exhibit a porous or poroelastic response to surface water loading. In these areas, the subsurface can expand with the introduction of water, either by the filling of pore spaces or the inflation of subsurface reservoirs. This process has the opposite sign of elastic loading, can have temporal delays, and is often nonlinearly recoverable. In Chapter 2, in order to accurately understand and quantify the effects of water loading, we use the degree of correlation between the modeled hydrology at each site and the actual GPS station time series, then extrapolate spatially, finding areas of higher and lower correlation with elastic deformation across the Western United States. We also study the dates of peak seasonal amplitude throughout the region. These factors will determine which regions can be modeled with elastic loading, which need a more complex poroelastic model, and which may have some hydrologic delay. The classification will also inform the relative drought resiliency of different regions by highlighting areas where an influx of water may have a delayed impact on reservoir recovery.
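A schematic of the hold-out regularization idea from Chapter 1 (synthetic Green's functions and data, not the thesis inversion): damp a vertical-only least-squares inversion, then choose the damping weight by the misfit of the horizontal components, which are kept out of the inversion itself.

import numpy as np

rng = np.random.default_rng(0)
G_v = rng.normal(size=(40, 10))          # vertical Green's functions (stand-in)
G_h = rng.normal(size=(40, 10))          # horizontal Green's functions
m_true = rng.normal(size=10)             # "true" loads
d_v = G_v @ m_true + 0.05 * rng.normal(size=40)
d_h = G_h @ m_true + 0.05 * rng.normal(size=40)

def invert(alpha):
    # damped least squares: (G_v^T G_v + alpha I) m = G_v^T d_v
    lhs = G_v.T @ G_v + alpha * np.eye(10)
    return np.linalg.solve(lhs, G_v.T @ d_v)

alphas = [0.01, 0.1, 1.0, 10.0, 100.0]
misfit = {a: np.linalg.norm(G_h @ invert(a) - d_h) for a in alphas}
print("horizontal misfit by alpha:", {a: round(m, 3) for a, m in misfit.items()})
print("selected alpha:", min(misfit, key=misfit.get))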
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Model of a Floating Nuclear System for Hydrogen and Ammonia Production</title>
<link href="https://hdl.handle.net/1721.1/154032" rel="alternate"/>
<author>
<name>Won, Hanna</name>
</author>
<id>https://hdl.handle.net/1721.1/154032</id>
<updated>2024-04-03T03:40:25Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Integrated Model of a Floating Nuclear System for Hydrogen and Ammonia Production
Won, Hanna
Hydrogen’s role as a potential substitute for fossil fuels is expanding, offering significant potential to lower carbon emissions in critical sectors. Similarly, ammonia is stepping into the spotlight as both a substitute for conventional fuels in heavy industries and as an efficient hydrogen carrier. Nuclear Power Plants (NPPs) play a critical role in this scenario, offering a steady supply of low-carbon energy. This thesis explores the economic and environmental viability of an innovative marine-based facility for generating green hydrogen and ammonia using nuclear reactors. It analyzes various designs of this integrated floating platform, assessing their economic and environmental benefits, particularly focusing on enhancing operational flexibility and increasing the platform’s value. This includes selling electricity to the grid at times of peak electricity prices. The system optimizes operations by storing excess hydrogen during normal operation, ensuring continuous ammonia production during peak electricity hours. The research investigates diverse NPP and electrolysis configurations and assesses their collective efficiency in hydrogen and ammonia production. The study identifies the most effective NPP-electrolysis combination and shows how integrating ammonia synthesis can enhance the overall hydrogen production process from NPPs. Ammonia production generates excess heat that can be used to reduce external energy inputs into hydrogen production. Therefore, a holistic approach to the system, including the reactor, hydrogen, and ammonia production, must be considered to minimize costs.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Approach to Investigate Environmental Footprint and Cost Tradeoffs in Additive Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/154031" rel="alternate"/>
<author>
<name>Midrez, Noemie</name>
</author>
<id>https://hdl.handle.net/1721.1/154031</id>
<updated>2024-04-03T03:58:36Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">System Approach to Investigate Environmental Footprint and Cost Tradeoffs in Additive Manufacturing
Midrez, Noemie
As additive manufacturing (AM) continues to grow and show potential for efficient resource utilization and product lifecycle, it represents a promising technology for the green industrial transformation needed to achieve Net Zero Emissions by 2050. However, the environmental impact of AM remains unclear, given its diverse applications and the historical emphasis on cost and quality as primary adoption drivers. Pressured by climate change, AM manufacturers lack quantitative tools to balance the technology’s complexity, environmental impact, and economic value. &#13;
&#13;
This thesis demonstrates the use of system modeling methodologies to help AM manufacturers navigate these tradeoffs and make data-driven decisions to scale their service. After exploring the policy landscape impacting manufacturing and reviewing the latest developments in AM cost modeling and environmental impact assessment, a case study on an AM service unit in the sporting goods industry is used to illustrate the methodologies. A tradespace analysis compares the value of HP’s MultiJet Fusion technology to injection molding (IM) across various product characteristics and lifecycle decisions, and a flexible design analysis evaluates various investment decisions, considering uncertainties from the market and technology. &#13;
&#13;
For the case studied (and the assumptions used), the tradespace analysis reveals a 75% lower environmental footprint (EF) per part using AM compared to IM, while IM yields a 97% unit cost saving. Maximizing build capacity with small, uniform parts in locations with low-footprint energy increases AM’s economic and environmental value, suggesting that the opposite product attributes and lifecycle decisions constitute development areas. The flexible design analysis, conducted for the specific AM service unit, shows that transitioning with added capacity to a larger rental facility with solar panels yields a 37% lower EF than maintaining current operations, and that waiting to move to the larger facility until demand aligns with the added capacity generates a 96-137% increase in NPV. These trends lead to the recommendation to transition the existing capacity to a larger rental facility with solar panels and wait for increased demand before investing in additional capacity.  &#13;
&#13;
These insights affirm the effectiveness of system modeling methodologies in guiding AM service providers by balancing financial and environmental factors. By introducing the application of these techniques in the AM context, this study establishes a baseline and identifies gaps to bridge for improved model accuracy. The approach developed in this work can be applied to different cases to quantitatively explore strategic options for technology investment and scaling to meet financial and environmental sustainability goals.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cloud-Native Applications and Their Role in Supporting Agile Hardware Development</title>
<link href="https://hdl.handle.net/1721.1/154030" rel="alternate"/>
<author>
<name>Herrera, Brian</name>
</author>
<id>https://hdl.handle.net/1721.1/154030</id>
<updated>2024-04-03T03:01:35Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Cloud-Native Applications and Their Role in Supporting Agile Hardware Development
Herrera, Brian
Agile product development focuses on collaboration, iterative development, and responsiveness to change as a mindset and methodology for project teams. Agile has been instrumental in software development and in improving overall project outcomes for software teams. Given the benefits experienced by software teams, Agile has recently been introduced to hardware teams. While Agile for hardware is still in its infancy, many aspects of cloud-based applications (e.g., Jira, Microsoft 365, Zoom, Miro, Google Docs) are enabling the use of Agile in hardware development. In this research, we explore how cloud-based applications support Agile development for hardware teams. We reviewed existing frameworks and interviewed nine individuals from eight different organizations. We learned that hardware teams are complex and require a high level of coordination among their team members. Cloud-based applications support Agile project teams through collaboration, speed of iteration, flexibility, and alignment. When utilizing these applications, experienced practitioners consider their organizational structure, the team's physical location, and interdependencies with other groups. While cloud-based applications provide several benefits to project teams, we suggest teams adapt these tools to fit their specific needs. Future development and integration of these tools may help reduce the total number of applications used, streamlining the coordination process and reducing tool overhead.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Lean Technology Roadmapping: Assessing the Value Path of Existing Approaches and Exploring Process Improvements</title>
<link href="https://hdl.handle.net/1721.1/154028" rel="alternate"/>
<author>
<name>Villegas, David</name>
</author>
<id>https://hdl.handle.net/1721.1/154028</id>
<updated>2024-04-03T03:45:24Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Lean Technology Roadmapping: Assessing the Value Path of Existing Approaches and Exploring Process Improvements
Villegas, David
For the past half-century, the practice of Technology Roadmapping (TRM) has been invaluable in helping companies align technology initiatives with business strategies. However, its successful implementation often requires significant investment, representing a challenge for companies with limited resources, especially start-ups. This study aims to understand how various roadmapping methods differ regarding value delivery and explores ways to optimize initial investments in TRM to maximize their value. To achieve this objective, this thesis integrates theoretical insights from analyzing established methods with practical perspectives from a case study. The analysis portion of the research models roadmapping as a system and dissects the value delivery mechanism of two different TRM methods. The case study examines the experimental roadmapping process implemented at a technology-intensive energy start-up in a real-world setting.  The analysis component of the study concluded that, while both methods aim to align strategic priorities with technology initiatives, they differ in their approach: one relies on verbal communication and facilitation, and the other employs equations and models to rationalize R&amp;D project priorities quantitatively. An estimated investment of approximately 200 hours is considered sufficient to derive initial value from either method. Results from the case study showed that it is feasible to produce an initial roadmap within a start-up environment with an investment of approximately 100 man-hours, depending on the scope and complexity of the roadmap. This streamlined approach primarily enhances cross-functional communication as its key benefit and produces a simple visual roadmap using existing company documentation.  The findings from this research can assist companies in aligning their investments more effectively with their roadmapping needs and setting realistic expectations about the resource investments required to achieve certain minimum benefits from TRM. The case study provides insights into the application of technology roadmapping within a start-up, highlighting practical challenges, areas of improvement, and potential for generalization.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>GALiCA: A Gestural Approach to Live Coding Algorithms</title>
<link href="https://hdl.handle.net/1721.1/154027" rel="alternate"/>
<author>
<name>Savoldy, Lark</name>
</author>
<id>https://hdl.handle.net/1721.1/154027</id>
<updated>2024-04-03T03:03:38Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">GALiCA: A Gestural Approach to Live Coding Algorithms
Savoldy, Lark
Live coding is an electronic music performance practice in which performers generate music and visuals in real time by writing code. The cognitive approach to live coding differs greatly from that of gestural music, in which performers leverage extensive embodied knowledge of their instrument. These two domains, which each provide unique tools for musical creativity and expressivity, are often performed separately.&#13;
&#13;
This thesis considers the space between these two performance styles. The primary goal is to suggest the potential of a combined modality by considering techniques for gestural control over live code. A combination of live coding and gestural performance may allow for a new cognitive approach and entirely new ways to live code.&#13;
&#13;
To explore this idea, this thesis introduces GALiCA, a live coding system that implements four techniques for manipulating code through gestural interaction with a MIDI controller. These techniques are facilitated by a flexible sequencer conceptualization that allows for easy modification. Additionally, to guide the analyses, this thesis synthesizes existing conceptual perspectives on the cognition involved in gestural performance and live coding. The promising results and analyses of these techniques may encourage further exploration into this new field and prompt new cognitive approaches to electronic music performance.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Implementing Modular Nuclear Reactor Systems for Developing Countries : A framework for capturing the value potential of modular nuclear reactor systems and their deployment in developing countries</title>
<link href="https://hdl.handle.net/1721.1/154023" rel="alternate"/>
<author>
<name>Sibanda, Leroy Kudakwashe</name>
</author>
<id>https://hdl.handle.net/1721.1/154023</id>
<updated>2024-04-03T03:01:48Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Towards Implementing Modular Nuclear Reactor Systems for Developing Countries : A framework for capturing the value potential of modular nuclear reactor systems and their deployment in developing countries
Sibanda, Leroy Kudakwashe
Carbon-conscious energy production is an increasingly pressing global concern, especially as countries reckon with the effects of climate change and their respective contributions to the problem. While developing countries contribute significantly lower amounts to global carbon emissions when compared to developed countries (sometimes orders of magnitude less per capita), there is growing consensus amongst energy leaders in these countries that they need not replicate the damaging levels of carbon emissions to fuel the energy needs required for economic growth. Many developing countries have already established significant renewable energy programs, but there is a need to supplement this intermittent energy source with one that is more stable. Nuclear energy is widely accepted as a carbon-conscious energy source and has allowed many developed countries to make the switch to clean energy. It presents the opportunity for developing countries to start off with carbon-conscious energy production, but the prohibitive upfront cost of nuclear power plants, among other challenges, means adoption remains slow and often faces significant opposition.&#13;
&#13;
This study explores modular nuclear reactor systems as a solution to the challenges of building and financing nuclear power plants in Africa, used here as a proxy for developing countries. The result is a framework for the implementation of modular nuclear reactor systems, with considerations for cost, safety, technology, and electric grid development, among other factors, all from the perspective of developing countries in Africa. Special consideration is given to communicating the value of this framework based on the interests of developing countries.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessment of Hybrid CFD Turbulence Model, STRUCT-epsilon, for Thermal Striping Behavior</title>
<link href="https://hdl.handle.net/1721.1/154022" rel="alternate"/>
<author>
<name>Vaughan, Brendan Conor</name>
</author>
<id>https://hdl.handle.net/1721.1/154022</id>
<updated>2024-04-03T03:30:28Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Assessment of Hybrid CFD Turbulence Model, STRUCT-epsilon, for Thermal Striping Behavior
Vaughan, Brendan Conor
Many advanced nuclear reactor designs are susceptible to thermal fatigue damage caused by thermal striping, the presence of which presently accepted modeling and design tools are unable to predict accurately or reliably. Advanced reactors are vital to achieving net-zero carbon electricity production, and thus developing design tools that can predict thermal striping is essential. Any new design tool used in the nuclear industry must be validated against experimental data sets to ensure that results predicted by these methods are sufficiently accurate. The STRUCT-epsilon Computational Fluid Dynamics model was used to aid the development of a dedicated thermal striping experiment that will later be used to help validate the STRUCT-epsilon model's capabilities.&#13;
&#13;
The STRUCT-epsilon model provided the ability to conduct turbulence-resolving simulations at a speed conducive to rapid iteration of the design of the DESTROJER test facility. To further increase confidence in the STRUCT-epsilon model's applicability to the test cases, two LES runs were completed and demonstrated STRUCT-epsilon's ability to capture flow unsteadiness. However, in both test cases the STRUCT-epsilon model exaggerates the behavior seen in the LES runs, overpredicting the temperature oscillations in one case and the flow asymmetry in the other. The STRUCT-epsilon model's potential to predict asymmetric configurations suggests promising further applications of the model. Future studies of STRUCT-epsilon should seek to better understand the model's performance in asymmetric flow cases to further support experimental design and the assessment of complex operating configurations.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Logs to Causal Analysis: A Guided User Interface for Causal Graph Discovery</title>
<link href="https://hdl.handle.net/1721.1/154021" rel="alternate"/>
<author>
<name>Gao, Trinity</name>
</author>
<id>https://hdl.handle.net/1721.1/154021</id>
<updated>2024-04-03T03:18:06Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">From Logs to Causal Analysis: A Guided User Interface for Causal Graph Discovery
Gao, Trinity
In a world full of digital systems, logs are found everywhere. From distributed systems logging network events to stock exchanges logging transactions, preserving information in logs is a widely used practice. Our group’s hope is that logs can preserve events and system states at various points in time, which can later be leveraged to answer causal questions about the system. However, analyzing logs is currently far from a smooth experience. Some system dynamics might only be partially captured by log variables, while others are drowned out by the sheer volume of uninteresting, "common-case" log-lines. It is not always possible to require the logging format to match our analysis, since most systems rely on infrastructure code and libraries that cannot be altered directly. We would also be throwing away a considerable amount of existing logs. An existing system, Sawmill, is able to parse and process log data in order to answer causal questions. Sawmill’s main functionality is to present the user with candidate answers to causal questions, relying on user input to accept or reject them. Doing this iteratively allows a user to build up a causal graph for a system’s logs. However, the user currently has no way to verify Sawmill’s answers. So if a user incorrectly accepts or rejects an edge representing a causal relationship based on Sawmill’s answers on average treatment effect (ATE), this error will be integrated into the user’s causal graph and can cause even more errors further down the line. In this master’s thesis, we extend Sawmill’s capability by identifying and presenting key assumptions which greatly impact Sawmill’s answer to a causal question. The existence or non-existence of these assumptions informs the user about possible different states of the causal graph, providing more context about the log and ultimately allowing the user to be more confident in drawing causal conclusions. This also mitigates the cascading effect of a single error in the construction of a causal graph. Importantly, we continue to leverage the user’s knowledge about the log, relying on their ability to accept and reject assumptions.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Naval Ship Systems Power and Energy Metrics through Modeling and Analysis</title>
<link href="https://hdl.handle.net/1721.1/154019" rel="alternate"/>
<author>
<name>Platenberg, Drake</name>
</author>
<id>https://hdl.handle.net/1721.1/154019</id>
<updated>2024-04-03T03:16:23Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Characterizing Naval Ship Systems Power and Energy Metrics through Modeling and Analysis
Platenberg, Drake
This research introduces a framework for analyzing shipboard power and energy systems as a repeatable process to differentiate between preferred solutions within a design tradespace. The Naval design community needs a consistent method for evaluating non-functional requirements, called “ilities,” in the early design stages, when informed decision making provides the greatest opportunity to positively influence the system’s performance and lifecycle cost. Ilities are defined as emergent properties that impact a system’s ability to maintain value over time. The pace of technology maturation and the uncertainty in the magnitude and characteristics of future load types drive the need for robust power and energy system architectures that can adapt to future perturbations in requirements. This research proposes a framework for developing metrics that can be used to identify preferred options within the design space. The framework considers the physical, logical, and operational aspects of the architecture to generate a set of perturbations that are likely to impact the system’s ability to maintain value over its lifecycle. The proposed process is exercised to develop quantitative, measurable metrics for Naval power and energy system flexibility: the capability of the system to accommodate change in response to perturbations in requirements. Four case studies are presented, developing metrics for Flexible Power Capacity, Debitable Power Flexibility, Distributable Power Flexibility, and Energy Storage Flexibility. A fifth case presents the application of Real Options Analysis for balancing system performance and cost to “right size” the P&amp;E system at initial delivery, with preparations in the design to react to future uncertainty.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Impact of AI Value Alignment in Collaborative Ideation: Effects on Perception, Ownership, and Output</title>
<link href="https://hdl.handle.net/1721.1/154018" rel="alternate"/>
<author>
<name>Guo, Alicia</name>
</author>
<id>https://hdl.handle.net/1721.1/154018</id>
<updated>2024-04-03T03:57:08Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Exploring the Impact of AI Value Alignment in Collaborative Ideation: Effects on Perception, Ownership, and Output
Guo, Alicia
AI-based virtual assistants are increasingly used to support daily ideation tasks. The values or biases present in these agents can influence output in hidden ways. They may also affect how people perceive the ideas produced with AI agents of different value alignments, with implications for the design of AI-based tools. We explored the effects of AI agents with different values on the ideation process and on user perception of idea quality, ownership, agent competence, and the values present in the output. Our study tasked 180 participants with brainstorming practical solutions to a set of problems with AI agents of different values. Results show no significant difference in self-evaluation based on value alignment; however, the ideas generated in the brainstorming process reflected the AI’s values. This thesis highlights an intricate interplay between AI values and human ideation, suggesting careful design considerations for future AI-supported brainstorming tools.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Filling the Gaps – Exploring the Scope of Arts-Based Education in Jodhpur</title>
<link href="https://hdl.handle.net/1721.1/154017" rel="alternate"/>
<author>
<name>Mridul, Ashmi</name>
</author>
<id>https://hdl.handle.net/1721.1/154017</id>
<updated>2024-04-03T03:52:49Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Filling the Gaps – Exploring the Scope of Arts-Based Education in Jodhpur
Mridul, Ashmi
This thesis reports the findings of an action-research project exploring the scope of arts-based education to fill the gap of local knowledge in the schools of Jodhpur, India. The research focuses on two pilot projects executed in the old city with the traditional performing arts of Kathputli and Kaavad. It is inspired by the collaborative, dynamic, sensory, and affective nature of traditional art practices. The pilot projects also investigate the creation, performance, and circulation of the two traditional arts, creating interventions to move past conventions. As a result, the project offers new opportunities and platforms of performance to families of traditional artists, with the aim of creating a new audience base among students.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Increasing Mars 2020 Mission Efficiency via SAPP Operations Automation and AEGIS V&amp;V</title>
<link href="https://hdl.handle.net/1721.1/154015" rel="alternate"/>
<author>
<name>Trautman, Leilani</name>
</author>
<id>https://hdl.handle.net/1721.1/154015</id>
<updated>2024-04-03T03:35:26Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Increasing Mars 2020 Mission Efficiency via SAPP Operations Automation and AEGIS V&amp;V
Trautman, Leilani
Operating a rover on another planet is a difficult task. While rovers are becoming increasingly autonomous, human input is still required and valuable in the space operations process. However, human time and rover time are precious and efforts must be made to make missions as efficient as possible. This thesis addresses the need for mission efficiency by implementing operational improvements to the Mars 2020 Perseverance rover’s Surface Attitude Positioning and Pointing (SAPP) subsystem and by supporting the verification and validation (V&amp;V) of the Automated Exploration for Gathering Increased Science (AEGIS) software system for autonomous science gathering. These two projects help human operators to assess the rover’s health and status more effectively and help free up time spent with a human in the loop for science operations, respectively.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inferences on the Influences of Age &amp; Porosity on Oxidative Weathering of Massive Sulfides at the Endeavour Segment of Juan de Fuca Ridge</title>
<link href="https://hdl.handle.net/1721.1/154014" rel="alternate"/>
<author>
<name>Herrera, Erica Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/154014</id>
<updated>2024-04-03T03:01:38Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Inferences on the Influences of Age &amp; Porosity on Oxidative Weathering of Massive Sulfides at the Endeavour Segment of Juan de Fuca Ridge
Herrera, Erica Lauren
Hydrothermal activity at mid-ocean ridge spreading centers occurs during the formation of new oceanic crust and is responsible for the accumulation of mineral deposits composed mainly of inorganic metal sulfides that precipitate from mixtures of seawater and high-temperature, sulfide-rich, oxygen-poor vent fluid. These mineral aggregates are known as seafloor massive sulfide deposits and occupy unique biogeochemical niches that remain largely unexplored. Upon the cessation of hydrothermal activity, massive sulfide deposits undergo alteration via both biotically- and abiotically-mediated geochemical reactions. These processes are collectively described as oxidative weathering. While the observed textures of these deposits suggest significant variation in weathering rates, neither the causes of this variation nor the drivers that govern biogeochemical oxidation of massive sulfides are well characterized. To begin to describe the mechanisms that dictate these processes, massive sulfide samples were collected from deposits along the Endeavour Segment of the Juan de Fuca Ridge. Coupled synchrotron-based X-ray Absorption Near Edge Spectroscopy (XANES) and X-Ray Fluorescence (XRF) microscopy were utilized to create comprehensive redox maps that allow for characterization of the localized redox environment and identification of weathering products. These techniques are a powerful and so far underutilized tool with which to examine the geochemical landscapes of seafloor massive sulfide deposits. Mineral identifications and spatial distributions were corroborated with optical microscopy and X-Ray Diffraction (XRD). The Juan de Fuca Ridge massive sulfide samples are composed of iron-sulfide phases, primarily pyrite (FeS₂), with minor amounts of other metal-bearing sulfides, such as sphalerite ((Zn,Fe)S), wurtzite ((Zn,Fe)S), and cubanite (CuFe₂S₃). The samples contain rinds composed of oxides and (primarily iron-bearing) clays that occur along massive sulfide exteriors and within pore channels. Greater amounts of secondary oxides and clays are observed concurrent with increased porosity and internal pore distribution and are inferred to be products of weathering. This study contributes to the current understanding of the mineralogy and composition of seafloor massive sulfide deposits and provides new insight into the relationships between age, porosity, and oxidative weathering.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding the Relevance, Efficiency and Efficacy of Timed Coding Assessments in the Software Engineering Industry</title>
<link href="https://hdl.handle.net/1721.1/154013" rel="alternate"/>
<author>
<name>Leon Alarcon, Paola A.</name>
</author>
<id>https://hdl.handle.net/1721.1/154013</id>
<updated>2024-04-03T03:31:38Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Understanding the Relevance, Efficiency and Efficacy of Timed Coding Assessments in the Software Engineering Industry
Leon Alarcon, Paola A.
Software engineering has grown as a market in the past decades to become one of the most profitable and widespread globally. Given the demand for software products, software engineers and their skills have also become highly sought-after. Even though there is great demand for software engineers, companies are looking to hire the best possible talent, forcing them to implement assessment mechanisms to evaluate candidates and their technical proficiency. Consequently, interviewing processes for software positions have become highly competitive and rigorous for prospective candidates. The volume of applicants has also forced companies to implement mechanisms that allow for screening candidates at an efficient cost. As a consequence, timed coding assessments and other technical interviewing methods have emerged as an alternative for screening candidates.&#13;
&#13;
A survey was disseminated in which participants were asked about their experience with timed coding assessments. Twelve volunteers willing to participate in a semi-structured interview were recruited from those surveyed with the goal of understanding their experiences in more depth. It was found that timed coding assessments can be an effective filtering tool to narrow the pool of candidates but did not show consistent relevance to the job duties and responsibilities software engineers might need to carry out if offered a position. Furthermore, preparation for these types of examinations was found to be fundamental for clearing them and advancing further into the interviewing stages, showing that qualification came secondary to preparation.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing business school engagement through a gamified experience in non-pedagogical contexts - a human-centered design approach</title>
<link href="https://hdl.handle.net/1721.1/154012" rel="alternate"/>
<author>
<name>Huang, Chen</name>
</author>
<id>https://hdl.handle.net/1721.1/154012</id>
<updated>2024-04-03T03:52:43Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Enhancing business school engagement through a gamified experience in non-pedagogical contexts - a human-centered design approach
Huang, Chen
Gamification has seen significant use as an innovative approach in education and learning in recent years. While previous research has established the effectiveness of gamification for learning in management and business school settings, there is limited study of its effectiveness outside of pedagogical contexts. Furthermore, past approaches indicate a lack of focus on human-centered studies of gamification. Drawing from insights gained from a case study involving MBA students and staff at the MIT Sloan School of Management, this paper proposes that gamification, known for its efficacy in pedagogical settings, can also improve engagement and productivity in non-educational learning environments. This potential can be realized by clearly defining the scope of the gamified system and content, delivering well-tailored content to the audience, and considering the accessibility and diversity of the participants.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the use of ChatGPT and the Prompting Framework as a Self-learning Aid for Arduino Coding &amp; Circuit Building for Artists and Designers</title>
<link href="https://hdl.handle.net/1721.1/154011" rel="alternate"/>
<author>
<name>Sagar, Prem</name>
</author>
<id>https://hdl.handle.net/1721.1/154011</id>
<updated>2024-04-03T03:11:49Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Exploring the use of ChatGPT and the Prompting Framework as a Self-learning Aid for Arduino Coding &amp; Circuit Building for Artists and Designers
Sagar, Prem
The intersection of art, design, and technology is a thriving place for groundbreaking ideas and cross-disciplinary innovation. However, it requires a specific long-term focus on integrating STEM subjects with the formal design education system. With the advent of AI educational tools, particularly ChatGPT, it is now possible to receive personalized learning in STEM subjects. Hence, in an effort to enhance self-learning practices of STEM topics among artists and designers, this research delves into the efficacy of integrating ChatGPT and a structured prompting framework for teaching Arduino coding and circuitry. The study is driven by three pivotal questions: the appropriateness of recommending ChatGPT in academic settings given its potential inaccuracies; the effect of a systematic prompting approach on mastering Arduino skills in a self-taught environment; and the validation of this methodology through comparative baseline and endline assessments. Adopting a mixed-methods research design, the study involved conducting a Randomized Controlled Trial (RCT) and gathering both qualitative and quantitative data in two phases: an initial baseline to gauge pre-existing knowledge, followed by an endline measurement to evaluate progress. Results reveal that while participants showed overall improvement in technical knowledge, those without the structured prompting framework (control group) surprisingly outperformed their counterparts. This was evident in the higher median scores achieved by the control group in endline assessments. In conclusion, while ChatGPT shows potential as an educational tool for self-learning Arduino coding and circuit building using TinkerCAD, the structured prompting framework's effectiveness remains questionable.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design considerations for an AI-prompted “Future-Self” video journaling tool to enhance self-efficacy</title>
<link href="https://hdl.handle.net/1721.1/154010" rel="alternate"/>
<author>
<name>Torres, Gabriela A.</name>
</author>
<id>https://hdl.handle.net/1721.1/154010</id>
<updated>2024-04-03T03:44:55Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Design considerations for an AI-prompted “Future-Self” video journaling tool to enhance self-efficacy
Torres, Gabriela A.
This study explores a self-management digital solution designed to empower individuals struggling with emotional self-regulation. With a focus on increasing self-efficacy in specific areas or goals, the study proposes an 'AI-prompted future selfie-video journaling tool' to guide users through the process of recording video selfies with future-self narratives. The study aims to gain insights into a Large Language Model (LLM) that should be fine-tuned based on unique experiences, compare different styles of guided approaches, test metrics for self-efficacy and future self-continuity feedback, and identify pain points for an efficient design. In a 5-day experiment with participants aged 24-77 from the USA and Peru, insights were gained by playing a simulated WhatsApp AI-assistant chatbot role. Participants were guided to set concrete goals and empowering emotions, then followed the process of recording at night and replaying the video upon waking the next day, utilizing the 15-minute window of theta brain waves. Those who completed the task reported gains in self-reflection on emotions, leading to more positive thoughts about daily activities. However, the study identified a key challenge: the necessity for personalized adaptation to ensure the LLM's understanding of both general patterns and the intricacies of individual mental health preferences for effective user engagement and education.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Role of Generative AI tools (GAITs) in Software Development Life Cycle (SDLC)- Waterfall Model</title>
<link href="https://hdl.handle.net/1721.1/154009" rel="alternate"/>
<author>
<name>Prakash, Mridula</name>
</author>
<id>https://hdl.handle.net/1721.1/154009</id>
<updated>2024-04-03T03:39:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Role of Generative AI tools (GAITs) in Software Development Life Cycle (SDLC)- Waterfall Model
Prakash, Mridula
The emergence of generative artificial intelligence tools (GAITs) has garnered considerable attention in recent years. These tools, powered by advanced machine learning algorithms, have the ability to generate new and innovative solutions to complex problems. As a result, organizations across various domains are increasingly seeking to reduce human involvement and rely extensively on AI tools to enhance productivity and effectiveness. The continuous advancement of AI technology has paved the way for its integration into software development, bringing forth an era of unparalleled innovation and efficiency. The amalgamation of AI and software development goes beyond mere task automation; it empowers developers and engineers to reimagine the entire process of conceptualizing, designing, and maintaining software. &#13;
&#13;
As the roles of teams evolve, integrating AI tools into the Software Development Life Cycle (SDLC) is necessary to tap into the positive benefits of AI. This thesis is motivated by the widespread availability of AI tools, whose adoption and consequent benefits are still not well understood. This thesis targets the evolution of GAITs in crafting each phase of the SDLC. It details the merits, accuracy, and utility requirements for engineers. The research questions delineated anchor the investigation into the targeted areas of the SDLC in the waterfall model only. The examination in this thesis is centered around assessing GAITs' efficiency in formulating meaningful results in each phase of the SDLC, scrutinizing the results, and probing the impact of generation on software quality and dependability. The research demonstrates the functionality of GAITs, analyzing their impact in each phase of the SDLC by iterating over systems of various complexities. Further, it illustrates how to understand a GAIT, draw insights from it, and use it beyond mere automation in the software development life cycle.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Can Impact Investors Enable Systems Change? Exploring the Theory and Practice of an Emerging Field</title>
<link href="https://hdl.handle.net/1721.1/154008" rel="alternate"/>
<author>
<name>Yau, Alban (Ray-Pern)</name>
</author>
<id>https://hdl.handle.net/1721.1/154008</id>
<updated>2024-04-03T03:33:10Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">How Can Impact Investors Enable Systems Change? Exploring the Theory and Practice of an Emerging Field
Yau, Alban (Ray-Pern)
Contemporary challenges, such as climate change and inequality, are complex and systemic. There has been an increasing awareness of “systems change” in the impact investing community, recognizing the limitation of the traditional approach (investing in a single company or technology) to create meaningful impact in entrenched socio-technical systems. However, a big gap between awareness and action still exists, as the concept of “systems change” or “systems thinking” remains too abstract for most impact investors to adopt in their day-to-day operations. The objective of this study is to address this gap by investigating pioneering case studies in an emerging field of investing with explicit consideration of system change. Through comparing multiple cases, developing an in-depth empirical study, and building a simulation model, this thesis sheds some light on the theory and practice of this emerging field. The results highlight how impact investors have great potential to help enable systems change by operationalizing systems theories, building collectives with stakeholders, and developing a strategic portfolio to influence the system dynamics instead of an isolated innovation or intervention.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bringing Computational Modeling into the Classroom with Custom Block-Based Programming Languages in StarLogo Nova</title>
<link href="https://hdl.handle.net/1721.1/154007" rel="alternate"/>
<author>
<name>Greybosh, Colin</name>
</author>
<id>https://hdl.handle.net/1721.1/154007</id>
<updated>2024-04-03T03:30:07Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Bringing Computational Modeling into the Classroom with Custom Block-Based Programming Languages in StarLogo Nova
Greybosh, Colin
It is possible to improve equity and accessibility in computer science education by incorporating computational thinking into science classrooms through agent-based computational modeling activities that use custom, task-specific programming languages in StarLogo Nova. However, StarLogo Nova’s block-based programming environment does not support extending the language with new task-specific blocks. This thesis resolves this issue by enabling programmers to add new custom blocks to a StarLogo Nova project. The technical contributions of this work resulted in a custom block system that allows StarLogo Nova programmers to create, edit, and view custom blocks, organize them into customizable drawers, and use them to build their models. It is now possible to create task-specific programming languages within StarLogo Nova for the purpose of making computational thinking concepts, such as abstraction, more approachable to learners with minimal programming experience. The conceptual contributions of this work resulted in a new design for a custom drawer interface and a system for sharing task-specific languages across StarLogo Nova projects. The original goal of the thesis was achieved, as custom blocks are now able to be used within StarLogo Nova as a new mode of abstraction within the language. Furthermore, directions for future work to increase the utility of custom blocks as a learning tool were identified and considered. As a result, custom blocks will enable curriculum designers working in the DC-Models project to create customized modeling projects for high school science learners.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Studies of electron heat conduction in magnetized, shock-driven implosions at OMEGA</title>
<link href="https://hdl.handle.net/1721.1/154001" rel="alternate"/>
<author>
<name>Chang, Cody</name>
</author>
<id>https://hdl.handle.net/1721.1/154001</id>
<updated>2024-04-03T03:47:55Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Studies of electron heat conduction in magnetized, shock-driven implosions at OMEGA
Chang, Cody
Imposing an external magnetic field strong enough to magnetize the plasma is currently being researched as an advanced approach to inertial confinement fusion (ICF). The electron heat conduction, one of the primary hot spot energy losses, is suppressed perpendicular to the external magnetic field, which is expected to increase yields and temperatures in collisional plasmas. However, since particle and energy transport are restricted in one of the plasma's three dimensions, the plasma will naturally tend to develop a mode-2 asymmetry. This has been observed in experiments and in simulations, which display an increase in hot spot volume that decreases its pressure and, thus, its yield, one of the most important parameters in an ICF experiment.&#13;
&#13;
This thesis discusses an experiment performed at the OMEGA laser facility to further explore the impact of 25 and 50 T seed magnetic fields on the performance of a shock-driven, direct-drive ICF implosion with an asymmetric drive. In these experiments, measurements of the hot spot asymmetry at bang time indicate a 4.75x P2/P0 enhancement when magnetized with a 50 T initial field. Time-resolved measurements of the shell trajectory, however, showed no asymmetry enhancement, suggesting that the shell plasma is not sufficiently magnetized for the magnetic field to have an effect. Gorgon simulations, however, predicted an enhanced asymmetry due to magnetization of the shell and, thus, suppressed electron thermal conductivity. Additionally, the observed hot spot electron temperature was enhanced by 1.6x for both 25 and 50 T magnetic field strengths relative to the unmagnetized temperature. Simulations also predicted an enhanced electron temperature, but expected the 50 T case to be higher than the 25 T case, with the 50 T case expected to have a 57% electron temperature enhancement and the 25 T case a 38% enhancement. The factors by which the experimental magnetized electron temperatures increased at 25 T and 50 T agreed more closely with the simulated 50 T case. This could indicate that the simulations are underpredicting how magnetized the 25 T hot spot becomes, which is difficult to assess since magnetization cannot be directly measured experimentally. Additionally, a lower DD ion temperature was observed with a magnetic field, with the 50 T shots showing on average a 0.8 keV decrease and the 25 T shots a 0.9 keV decrease compared to the 0 T shots.&#13;
&#13;
Finally, the loss of nuclear yield is discussed. A 28% increase in volume of the magnetized hot spot was observed. This implies a loss of density, an important quantity in determining a plasma’s yield. No change in the inferred pressure was measured, however.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Two Studies of Constraints in High Dimensions: Entropy Inequalities and the Randomized Symmetric Binary Perceptron</title>
<link href="https://hdl.handle.net/1721.1/153999" rel="alternate"/>
<author>
<name>Wakhare, Tanay</name>
</author>
<id>https://hdl.handle.net/1721.1/153999</id>
<updated>2024-04-03T03:35:20Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Two Studies of Constraints in High Dimensions: Entropy Inequalities and the Randomized Symmetric Binary Perceptron
Wakhare, Tanay
We study two constrained problems in high dimensions: a high-dimensional inequality for the binary entropy, and a randomized variant of the symmetric binary perceptron, a natural model in high-dimensional probability and a toy shallow neural network which stores random patterns. &#13;
&#13;
We first consider the (k + 1)-th derivative of xᵏ⁻ʳH(xʳ), where H(x) := −x log x − (1 − x) log(1 − x), 0 ≤ x ≤ 1, is the binary entropy and k ≥ r ≥ 1 are integers. Our motivation is the conjectural entropy inequality αₖH(xᵏ) ≥ xᵏ⁻¹H(x), where 0 &lt; αₖ &lt; 1 is given by a functional equation. The k = 2 case was the key technical tool driving recent breakthroughs on the union-closed sets conjecture, and the k → ∞ case can be considered the "high dimensional limit". We express (dᵏ⁺¹/dxᵏ⁺¹) xᵏ⁻ʳH(xʳ) as a rational function, an infinite series, and a sum over generalized Stirling numbers. This allows us to reduce the proof of the entropy inequality for real k to showing that an associated polynomial has only two real roots in the interval (0, 1). This reduction allows us to easily verify the inequality for fixed k such as k = 2, 3, 4 with a finite calculation, and also allows us to prove the inequality for any fixed fractional exponent such as k = 3/2 via a finite calculation. The proof suggests a new framework for proving tight inequalities for the sum of polynomials times the logarithms of polynomials, which converts the inequality into a statement about the real roots of a simpler associated polynomial.&#13;
&#13;
The symmetric binary perceptron (SBP) is a random constraint satisfaction problem (CSP) and a single-layer neural network; it exhibits intriguing features, most notably a sharp phase transition regarding the existence of its satisfying solutions. Secondly, we propose two novel generalizations of the SBP by incorporating random labels. Our proposals admit a natural machine learning interpretation: any satisfying solution to the random CSP is a minimizer of a certain empirical risk. We establish that the expected number of solutions for both models undergoes a sharp phase transition and calculate the location of this transition, which corresponds to the annealed capacity in statistical physics. We then establish, through the Berry-Esseen theorem, a universality result: the location of this transition does not depend on the underlying distribution. We conjecture that both models in fact exhibit an even stronger phase transition akin to the SBP and give rigorous evidence towards this conjecture through the second moment method. Our final focus is on the algorithmic problem of efficiently finding a satisfying solution to our models. We show that both models exhibit the multi Overlap Gap Property (m-OGP), an intricate geometrical property of the solution space which is known to be a rigorous barrier against large classes of algorithms. This gives rigorous evidence of a statistical-to-computational gap for both models. We also show that the m-OGP satisfies a similar universality property.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine-Learning based Ship Traffic Prediction in the Suez Canal</title>
<link href="https://hdl.handle.net/1721.1/153995" rel="alternate"/>
<author>
<name>Budiman, Jeremiah</name>
</author>
<id>https://hdl.handle.net/1721.1/153995</id>
<updated>2024-04-03T04:06:21Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Machine-Learning based Ship Traffic Prediction in the Suez Canal
Budiman, Jeremiah
This study implements and evaluates two approaches for predicting the average annual daily traffic (AADT) of ships within the Suez Canal, with a focus on evaluating how deep-learning techniques can be leveraged for both approaches. The first approach is a novel method that utilizes both satellite imagery and AIS technology to predict the AADT. To do so, a 2-stage model is implemented that combines an image detection model with a correction factor model. The image detection model employs Mask R-CNN, a deep-learning neural network, and the correction factor model utilizes Long Short-Term Memory (LSTM), a recurrent neural network, trained on historical AIS data. Results of the 2-stage model using LSTM indicate the technical feasibility of the approach, with ground-truth AADT values falling within the interquartile range of predictions for all validation sets. Furthermore, although the interquartile ranges show considerable variation, the 2-stage model with LSTM had a mean absolute percentage error (MAPE) of 13.2% based on its median AADT predictions; this is a successful outcome, especially considering the high variance of vessel traffic and the noisiness of satellite imagery's low sampling rate, which captures only snapshot moments in time. In addition to the 2-stage model, this study also implements a second approach involving a discrete-event simulation (DES) to estimate AADT, and we evaluate how the DES can benefit from deep-learning techniques like LSTM. Results from the DES model with LSTM show a 90.8% reduction in the interquartile range of AADT predictions compared to that of the 2-stage model. Additionally, the DES model with LSTM had an MAPE of 3.8% for its median AADT predictions, demonstrating strong predictive accuracy. Overall, patterns within the AIS data indicate that despite the effects of Covid-19 in 2020, traffic increased in subsequent years, especially in 2022, due to a rebound effect.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-driven Space Economy Investment Strategy Through an Updated Commercial Space Technology Roadmap (CSTR)</title>
<link href="https://hdl.handle.net/1721.1/153992" rel="alternate"/>
<author>
<name>Miller, Duncan M.</name>
</author>
<id>https://hdl.handle.net/1721.1/153992</id>
<updated>2024-04-03T03:19:15Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Data-driven Space Economy Investment Strategy Through an Updated Commercial Space Technology Roadmap (CSTR)
Miller, Duncan M.
The innovation and growth of the space industry over the past two decades have led experts to refer to this period as the "Second Space Race"[1]. The space market is projected to reach $1 trillion in 2030, up from just $280 billion in 2010[2]. During this growth, the landscape of the space sector has shifted from a market dominated by state-run agencies to a booming commercial enterprise that offers seemingly endless possibilities and applications. In response, private investors have flooded the industry with capital. Private funding increased from $1 billion in 2010 to over $12 billion in 2022[2]. The development of new forms of contracting and public-private partnerships spurred commercial investments and opened the door to companies other than the "traditional primes." This foundational change in the space business has forced the US government and commercial enterprises to reevaluate strategies for profitability and continued economic growth in the space domain. &#13;
&#13;
This paper holistically characterizes and evaluates the space industry through a two-pronged approach. First, the Commercial Space Technology Roadmap [3], developed by Prof. de Weck in 2018, is updated to reflect technological advancements in the increasingly fast-paced industry. Second, additional research was conducted to identify and evaluate financial investment from government and commercial players. This work hopes to inform strategies and prioritization methods that will maximize not only the success of technological investments, but also the return on financial investment throughout the space industry.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Production of Affordable Desktop Fiber Extrusion Devices (FrED) for Educational Purposes</title>
<link href="https://hdl.handle.net/1721.1/153991" rel="alternate"/>
<author>
<name>Xu, Wenhao</name>
</author>
<id>https://hdl.handle.net/1721.1/153991</id>
<updated>2024-04-03T03:30:54Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Development and Production of Affordable Desktop Fiber Extrusion Devices (FrED) for Educational Purposes
Xu, Wenhao
The Fiber Extrusion Device (FrED) is an affordable desktop engineering education tool that aids teaching by creating a laboratory experience. It simulates the continuous fiber draw process, which can provide insights into data acquisition, control systems, smart manufacturing, computer vision, data processing, product design, etc. Designed to be highly modular, the FrED device provides a wide range of tunable/adjustable parameters and expansion capacity, enabling users to explore beyond the scope of the guided experiment. While successful classroom activities have been conducted using FrED, the 2022 FrED device is still too expensive and heavy to ship to learners worldwide.&#13;
&#13;
This thesis focuses on the product design and development of FrED, making enhancements to reduce cost and mass while increasing capability. Specifically, the diameter measurement system’s performance was drastically improved by increasing cooling, enhancing fiber stability, and introducing a modular pulley system and an adjustable tension system. The processor has also been changed from a Teensy 4.1 to a Raspberry Pi 4 Model B to increase the capability to process images during fiber production and to allow users to code in Python rather than C++. To accommodate these changes, the other subsystems were also adjusted for better integration, such as a redesigned PCB. &#13;
&#13;
There are also efforts to improve user safety by following the hierarchy of controls to implement visual warnings, physical barriers from hot surfaces, and a thermal switch as an engineering control to prevent thermal runaway of the heater. User experience is enhanced with virtual monitor connectivity and reduced noise.&#13;
&#13;
Parts were also redesigned to be more compact and use less material, reducing cost and mass. The final FrED design accomplished a 44.6% reduction in cost over the previous generation of FrED, at $149.68, and a 19.7% weight reduction (1.81 kg). An initial prototype of the packaging weighs 2.85 kg, including the FrED device. A draft assembly manual has been completed to pave the way for production ramp-up.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Framework for Semantic Textual Similarity Integration with Requirements and System Models</title>
<link href="https://hdl.handle.net/1721.1/153990" rel="alternate"/>
<author>
<name>Beilstein, John R.</name>
</author>
<id>https://hdl.handle.net/1721.1/153990</id>
<updated>2024-04-03T04:02:51Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Framework for Semantic Textual Similarity Integration with Requirements and System Models
Beilstein, John R.
Modern engineering projects can involve highly complicated systems with hundreds or even thousands of requirements. Organizing and managing these requirements is a task that falls on Systems Engineers (SEs) and Requirements Engineers (REs). This thesis seeks to better understand how Natural Language Processing can assist SEs and REs by identifying relationships and interactions between requirements. This thesis presents an algorithm that analyzes a requirements dataset and assigns requirements to various components defined in a system model. This system model represents an early concept design and consists of high-level components and the connections or relationships between these components. Components are defined with attributes such as names, descriptions, and synonyms. The algorithm uses semantic textual similarity (STS) to identify similarities between requirements and these component attributes to estimate which components of a system are affected by which requirement(s). The algorithm attempts to identify direct relationships between individual requirement statements using STS. Additionally, the algorithm attempts to identify indirect relationships between requirements by identifying requirements with overlapping influences on system model components. &#13;
&#13;
The initial results are promising: the algorithm is able to identify requirement-to-requirement pairings with high STS scores and can also identify multiple requirement statements that have high STS scores with overlapping parts of the system model. This information could be used to allow REs and SEs to better understand how different requirement statements directly or indirectly relate to and influence one another. This framework acts as an early proof of concept, and more research is needed to understand its scalability. While not optimized, the proposed algorithm is able to reach F1 scores of 0.59 for matching requirements to individual components of the system model. While these F1 scores are not ideal, they imply this technique could be further refined to yield better results. It is also worth noting that some of the matches between requirements and the system model would likely not be possible to categorize without a human’s intuition and engineering judgment, thus providing very challenging classifications for the algorithm. The algorithm achieves an overall precision between 0.94 and 1.00 for matching requirements to individual components of the system model at STS thresholds at or above 0.40.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Organizational Network Analysis of the Sprawling U.S. Department of Defense Innovation Ecosystem</title>
<link href="https://hdl.handle.net/1721.1/153989" rel="alternate"/>
<author>
<name>Case, Michael C.</name>
</author>
<id>https://hdl.handle.net/1721.1/153989</id>
<updated>2024-04-03T03:23:00Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">An Organizational Network Analysis of the Sprawling U.S. Department of Defense Innovation Ecosystem
Case, Michael C.
The 2022 United States National Defense Strategy (NDS) highlights that the greatest strategic challenges for today’s security environment are linked to rapidly changing military capabilities and emerging technologies. It is through innovation that the military’s technological edge is maintained. Defense innovation refers to the broad set of experimental activities aimed at developing and implementing transformational technologies, strategies, and organizational practices to provide enhanced capabilities for the military or to reduce the cost of military operations.&#13;
&#13;
The Department of Defense (DoD) relies on a massive connected network of government agencies, private industry, academia, and research institutions to accomplish these activities. This Defense Innovation Ecosystem grew rapidly over the last decade, but many organizations that comprise the ecosystem today were established independently of one another to address specific needs. This growth led to a massive ecosystem that is not optimally organized to support innovation at the speed required to maintain the military’s technological advantage, especially in light of the rapid commercialization of new technology.&#13;
&#13;
This research develops an organizational network model of the Defense Innovation Ecosystem through a comprehensive review of publicly available data sources. Then, using this model, it conducts an organizational network analysis based on five centrality measures, including degree, weighted degree, eigenvector, betweenness, and closeness. This analysis is then used to update the model visualization. Lastly, a modularity assessment of the network model examines a potential hierarchical realignment that cuts across existing organizational boundaries.&#13;
&#13;
This research aims to better understand the Defense Innovation Ecosystem as it currently exists and then provide one viewpoint on how the DoD might evolve the ecosystem to meet future demands.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The University of the Philippines at Manila</title>
<link href="https://hdl.handle.net/1721.1/153972" rel="alternate"/>
<author>
<name>Concio, César Homero.</name>
</author>
<id>https://hdl.handle.net/1721.1/153972</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1940-01-01T00:00:00Z</published>
<summary type="text">The University of the Philippines at Manila
Concio, César Homero.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1940; Includes bibliographical references.
</summary>
<dc:date>1940-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PCM television bandwidth reduction using pseudo-random noise</title>
<link href="https://hdl.handle.net/1721.1/153970" rel="alternate"/>
<author>
<name>Roberts, L. G.
            (Lawrence G.)</name>
</author>
<id>https://hdl.handle.net/1721.1/153970</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1961-01-01T00:00:00Z</published>
<summary type="text">PCM television bandwidth reduction using pseudo-random noise
Roberts, L. G.
            (Lawrence G.)
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1961; Includes bibliographical references (leaf [40]).
</summary>
<dc:date>1961-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sugar cane juice deionization</title>
<link href="https://hdl.handle.net/1721.1/153968" rel="alternate"/>
<author>
<name>Javellana, Angel L.
            (Angel Lacson)</name>
</author>
<id>https://hdl.handle.net/1721.1/153968</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1954-01-01T00:00:00Z</published>
<summary type="text">Sugar cane juice deionization
Javellana, Angel L.
            (Angel Lacson)
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1954; Includes bibliographical references (leaf A10).
</summary>
<dc:date>1954-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A matrix-free linear programming duality theory</title>
<link href="https://hdl.handle.net/1721.1/153964" rel="alternate"/>
<author>
<name>Villela, Paulo Arruda.</name>
</author>
<id>https://hdl.handle.net/1721.1/153964</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1979-01-01T00:00:00Z</published>
<summary type="text">A matrix-free linear programming duality theory
Villela, Paulo Arruda.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mathematics, 1979; Bibliography: leaf 61.
</summary>
<dc:date>1979-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Utilization of L( - )-glucose by naturally occurring microorganisms</title>
<link href="https://hdl.handle.net/1721.1/153963" rel="alternate"/>
<author>
<name>Fewkes, Robert Charles Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/153963</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Utilization of L( - )-glucose by naturally occuring microorganisms
Fewkes, Robert Charles Joseph.
Carbon recycling by means of physicochemically synthesized carbohydrates has been proposed. These artificial sugars can be used to generate single cell protein. However, it is not known what effects the unnatural components will have on the yield, productivity, and metabolic regulation of the organisms used. We have obtained from natural populations a number of organisms which utilize L-glucose as sole carbon source. Of the twelve organisms isolated, five are gram-negative aerobic rods, one is a gram-positive coccus, two are thermophilic bacilli, three are yeasts, and one is a mycelial form. Preliminary taxonomy was done on these organisms. When fully adapted to growth on L-glucose, one pseudomonad grows exponentially with a doubling time of 14 to 16 hours with 5 g/L L-glucose in the medium. Cell yields are about 0.46 g dry cells/g L-glucose, and cell densities as high as 2.8 g/L have been achieved in shake flasks. The apparent maximum growth rate is 0.0506 hr⁻¹ and the apparent overall K[subscript m] for growth is 0.14 g/L L-glucose. However, substrate inhibition sets in at about 4.5 g/L L-glucose. L-glucose transport takes place by facilitated diffusion at V[subscript max] = 2.63 x 10⁻³ mg L-glucose/(mg cells-min) and K[subscript m] = 0.65 g/L L-glucose. The organism probably utilizes the entire L-glucose molecule. There is evidence that carbon 1 is eliminated as CO₂ and subsequently reassimilated from the medium. One or more growth factors appear to be necessary for L-glucose utilization. They are made by the organism under good growth conditions and one appears to be excreted into the medium. A hypothetical mechanism of L-glucose utilization consistent with the growth kinetics is proposed. This mechanism involves a catabolic sequence with at least two limiting reactions. The first is incipient transport limitation and the second is inhibition by an intracellular metabolite derived from L-glucose.
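The growth kinetics quoted above are consistent with a substrate-inhibition (Haldane-type) rate law; the sketch below assumes that functional form and an illustrative inhibition constant, neither of which the abstract states explicitly.

    # Haldane-type growth rate: an assumed functional form, not taken from the thesis.
    MU_MAX = 0.0506   # apparent maximum growth rate, 1/hr (from the abstract)
    KM = 0.14         # apparent overall Km for growth, g/L (from the abstract)
    KI = 40.0         # hypothetical inhibition constant, g/L (illustrative only)

    def growth_rate(s):
        """Specific growth rate (1/hr) at L-glucose concentration s (g/L)."""
        return MU_MAX * s / (KM + s + s * s / KI)

    for s in (0.1, 0.5, 1.0, 4.5, 5.0):
        print(f"{s:4.1f} g/L gives mu = {growth_rate(s):.4f} 1/hr")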
Thesis: M.S., Massachusetts Institute of Technology, Department of Nutrition and Food Science, 1972; Cataloged from the official PDF version of thesis.; Includes bibliographical references (pages 143-155).
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating Flow-Based Sampling for Large-&#119873; Gauge Theories</title>
<link href="https://hdl.handle.net/1721.1/153909" rel="alternate"/>
<author>
<name>Zhang, Michael S.</name>
</author>
<id>https://hdl.handle.net/1721.1/153909</id>
<updated>2024-03-22T04:14:38Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Accelerating Flow-Based Sampling for Large-&#119873; Gauge Theories
Zhang, Michael S.
Due to its consistency with numerous experimental observations, the Standard Model of particle physics is widely accepted as the best known formulation of elementary particles and their interactions. However, making experimental predictions using the Standard Model involves mathematical and computational challenges due to its complexity. Quantum chromodynamics (QCD), which can be described as an SU(3) gauge theory due to the 3 quark colors and 8 gluon types, is one sector of the Standard Model for which computing solutions is especially challenging. A natural theoretical generalization of QCD is the class of all SU(&#119873;) gauge theories; these theories also provide a method for some QCD computations in the &#119873; → ∞ limit. Because of the mathematical complexity and the lack of analytical solutions, these theories are studied numerically through approximations calculated from configuration samples.&#13;
&#13;
In this thesis, we explore asymptotically efficient flow-based sampling algorithms for the twisted Eguchi-Kawai (TEK) model, a method for analyzing large-&#119873; QCD numerically. We introduce an original architecture based on SU(2) matrix multiplication that allows for efficient Jacobian computation. In addition, we explore the possibility of transfer learning with respect to the number of colors &#119873; and demonstrate that a model trained quickly on the SU(&#119873;) setting also provides useful information in SU(&#119873;′), &#119873;′ &gt; &#119873; cases.
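For readers unfamiliar with the group elements involved, the sketch below samples a Haar-random SU(2) matrix via the unit-quaternion parametrization; it illustrates the building blocks mentioned above, not the flow architecture itself.

    import numpy as np

    def random_su2(rng):
        """Haar-random SU(2) matrix U = a*I + i*(b*sx + c*sy + d*sz)."""
        a, b, c, d = rng.normal(size=4)
        n = np.sqrt(a*a + b*b + c*c + d*d)
        a, b, c, d = a/n, b/n, c/n, d/n  # normalize to a unit quaternion
        return np.array([[a + 1j*d, c + 1j*b],
                         [-c + 1j*b, a - 1j*d]])

    rng = np.random.default_rng(0)
    U = random_su2(rng)
    assert np.allclose(U @ U.conj().T, np.eye(2))  # unitary
    assert np.isclose(np.linalg.det(U), 1.0)       # determinant one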
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fully Hyperbolic Graph Convolutional Neural Networks for Age Prediction with Multi-Modal Brain Data</title>
<link href="https://hdl.handle.net/1721.1/153908" rel="alternate"/>
<author>
<name>Ramirez, Hugo</name>
</author>
<id>https://hdl.handle.net/1721.1/153908</id>
<updated>2024-03-22T03:15:14Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Fully Hyperbolic Graph Convolutional Neural Networks for Age Prediction with Multi-Modal Brain Data
Ramirez, Hugo
Characterizing age-related alterations in MEG brain networks holds great promise in understanding aging trajectories and revealing aberrant patterns of neurodegenerative disorders, such as Alzheimer’s disease. In this study, we utilize a Fully Hyperbolic Neural Network (FHNN) to embed functional brain connectivity graphs, derived from magnetoencephalography (MEG) data, into low dimensions on a Lorentz Hyperboloid model for hyperbolic space. Using these embeddings, we aim to detect changes in the intrinsic hierarchy of functional subnetworks across time as well as predict age for patients across multiple decades. We use the hyperbolic embedding pipeline in tandem with multimodal MEG and MRI data to create embeddings from the Cam-CAN (Cambridge Centre for Ageing and Neuroscience) dataset for the downstream task of brain age prediction in healthy patients to better understand how brain connectivity structure impacts brain aging trends. Our hyperbolic MEG brain network embedding framework effectively transforms high-dimensional complex MEG brain networks into lower-dimensional hyperbolic representations, facilitating structural brain hierarchy analysis across age, as well as age prediction. Our versatile embedding pipeline allows for the ready implementation of other downstream tasks like clustering and classification. This constitutes a novel way of studying connectivity alterations in brain networks.
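For readers unfamiliar with the Lorentz (hyperboloid) model named above, the sketch below shows its basic operations: lifting a Euclidean vector onto the hyperboloid and computing the geodesic distance. This is generic hyperbolic-geometry code, not the FHNN pipeline itself.

    import numpy as np

    def minkowski_dot(x, y):
        """Lorentzian inner product: -x0*y0 + x1*y1 + ... + xn*yn."""
        return -x[0] * y[0] + np.dot(x[1:], y[1:])

    def lift_to_hyperboloid(v):
        """Map a Euclidean vector v to the hyperboloid point (sqrt(1 + |v|^2), v)."""
        x0 = np.sqrt(1.0 + np.dot(v, v))
        return np.concatenate(([x0], v))

    def hyperbolic_distance(x, y):
        """Geodesic distance: arccosh of the negated Lorentzian inner product."""
        return np.arccosh(np.clip(-minkowski_dot(x, y), 1.0, None))

    x = lift_to_hyperboloid(np.array([0.3, -0.1]))
    y = lift_to_hyperboloid(np.array([-0.2, 0.4]))
    print(hyperbolic_distance(x, y))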
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>DalSegno: User-centric preference elicitation strategies for mitigating cold start in music recommender systems</title>
<link href="https://hdl.handle.net/1721.1/153907" rel="alternate"/>
<author>
<name>Lin, Cynthia</name>
</author>
<id>https://hdl.handle.net/1721.1/153907</id>
<updated>2024-03-22T03:56:21Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">DalSegno: User-centric preference elicitation strategies for mitigating cold start in music recommender systems
Lin, Cynthia
Avid music enthusiasts often rely on music recommender systems to sift through expansive music catalogs and find new songs fitting their interests. However, such systems struggle to personalize suggestions for new users as they heavily rely on extensive listening histories to make accurate suggestions — an issue known as the new user cold start problem. This problem is exacerbated by the fact that most commercial recommender systems lack transparency and avenues for users to influence their recommendations.&#13;
&#13;
We thus propose DalSegno, a music recommender system with an interactive web-based user interface. The platform is designed to overcome the new user cold start problem by iteratively presenting users with recommendations and incorporating elicited feedback. Additionally, DalSegno enables users to learn about and fine-tune their inferred music preferences through interactive visualizations of song characteristics.&#13;
&#13;
Throughout three rounds of user testing, DalSegno demonstrated promising results. Participants appreciated the system's ability to incorporate user feedback to provide more relevant recommendations and considered it more intuitive to use than commercial recommendation systems. Additionally, users felt that the interactive visualizations of musical qualities helped them learn more about their personal music tastes, which encouraged them to further utilize the interface. Overall, positive evaluations of DalSegno demonstrate that incorporating user input and fostering explainability is vital to creating a more user-focused and effective music discovery experience.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Private Random Variate Sampling for Secure and Federated Polygenic Risk Scores</title>
<link href="https://hdl.handle.net/1721.1/153906" rel="alternate"/>
<author>
<name>Yen, Derek Jia-Wen</name>
</author>
<id>https://hdl.handle.net/1721.1/153906</id>
<updated>2024-03-22T03:33:31Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Private Random Variate Sampling for Secure and Federated Polygenic Risk Scores
Yen, Derek Jia-Wen
Polygenic risk scores (PRS) are used to quantify the additive effect of single nucleotide polymorphisms (SNPs) on an individual’s genetic risk for developing a particular trait or condition. Collaborations between data centers are important for improving the statistical power and validity of PRS through larger, more genetically diverse datasets. However, owing to the privacy concerns inherent in genomic data, regulations restrict institutions’ capacity to share data. Using cryptography, we present a secure and federated implementation of a Monte Carlo algorithm for PRS, enabling collaborations that respect data regulations. To implement a Monte Carlo algorithm in a privacy-preserving context, our work exhibits techniques for sampling random variates with cryptographically private parameters, which may be of independent interest.
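The additive score itself is simple; the sketch below shows the plaintext computation that a secure protocol has to reproduce under encryption. The effect sizes and genotypes are invented.

    import numpy as np

    # Hypothetical per-SNP effect sizes (e.g., log odds ratios) from a GWAS.
    effect_sizes = np.array([0.12, -0.05, 0.30, 0.08])

    # One individual's allele dosages (0, 1, or 2 risk-allele copies per SNP).
    dosages = np.array([1, 0, 2, 1])

    # A polygenic risk score is the dosage-weighted sum of effect sizes.
    prs = float(np.dot(effect_sizes, dosages))
    print(f"PRS = {prs:.3f}")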
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physics-Based Compact Model Development of Field Emitter Arrays</title>
<link href="https://hdl.handle.net/1721.1/153903" rel="alternate"/>
<author>
<name>Shin, Youngjin</name>
</author>
<id>https://hdl.handle.net/1721.1/153903</id>
<updated>2024-03-22T03:04:19Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Physics-Based Compact Model Development of Field Emitter Arrays
Shin, Youngjin
Field Emitter Array (FEA)-based cold cathodes have shown promise as electron sources in devices capable of high-power and high-frequency operation for a variety of applications such as microwave power amplifiers, pressure sensors, x-ray sources, and high-power excimer lasers. Limited work has been done exploring the device characteristics using a well-defined cathode-to-anode separation. Consequently, FEAs lack a physics-based compact model. In this work, a flat stand-off anode was placed on the FEAs, which guarantees the anode-to-emitter distance and the parallel condition. The I-V characteristics in the space charge limit show an unexplained, yet reproducible Negative Differential Resistance (NDR) region resulting in a double saturation behavior. Upon further analysis of the electrostatics, the parallel-plate configuration was found to introduce a 2-dimensional acceleration channel affecting electron transport in the space between the anode and the gate electrode. Using simulation results of the electric field and potential distributions, the collection velocities of the emitted electrons were calculated, revealing that the current collection at the anode is electrostatically limited due to the deceleration of the electrons in the vacuum channel when the anode bias is below the gate bias. Although the physical mechanisms behind the NDR region are not fully understood, a qualitative conjecture with relation to the electrostatics is provided. The modeling approach approximates the current density distribution as a Gaussian distribution. The error function is used to calculate the integral of the Gaussian distribution, representing the normalized current density of the output characteristics. The error function is parametrized to predict the experimental I-V behavior in the separate operating regimes of the device. The resulting FEA model contains 22 fitting parameters, and the model function depends on only 4 physical parameters: the anode and gate biases, the anode separation distance, and the anode material work function. The resulting model shows more than reasonable accuracy within typical operation ranges of FEAs and captures the trends observed in the experimental data. The compact model also includes the behavior of the NDR regime, opening new avenues of applications for FEAs, including oscillators and frequency dividers.
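To make the error-function parametrization concrete, here is a minimal sketch of a normalized I-V curve built from erf; the bias values and spread are illustrative stand-ins, not the 22-parameter model developed in the thesis.

    import math

    def normalized_current(v_anode, v_gate, sigma):
        """Normalized anode current as the integral of a Gaussian current
        density, expressed with the error function (illustrative form only)."""
        return 0.5 * (1.0 + math.erf((v_anode - v_gate) / (math.sqrt(2.0) * sigma)))

    V_GATE = 60.0   # hypothetical gate bias, volts
    SIGMA = 8.0     # hypothetical spread of the collection-velocity distribution
    for v in (30.0, 50.0, 60.0, 70.0, 90.0):
        print(f"Va = {v:5.1f} V, I/Imax = {normalized_current(v, V_GATE, SIGMA):.3f}")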
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Managing Financial Risks for Wind Power Producers in Wholesale Electricity Markets</title>
<link href="https://hdl.handle.net/1721.1/153902" rel="alternate"/>
<author>
<name>Shen, Daniel Weihang</name>
</author>
<id>https://hdl.handle.net/1721.1/153902</id>
<updated>2024-03-22T03:03:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Managing Financial Risks for Wind Power Producers in Wholesale Electricity Markets
Shen, Daniel Weihang
Wind power plant operators are exposed to financial risk in wholesale electricity markets due to the uncertain nature of wind forecasts, day-ahead electricity prices, and real-time electricity prices. In the event of a shortfall compared to the production forecast, the wind generator may have to repurchase power at a higher price in the real-time market. Based on this consideration, this thesis formulates a mixed-integer quadratic program which uses conditional value at risk to create a hedged “risk-aware” offer curve for the wind generator to submit into the day-ahead electricity market. The formulated program additionally considers specific concerns around the offer optimization process being negatively interpreted as using physical withholding to increase profits. We also exploit the structure of the problem to introduce additional constraints to improve computation time and demonstrate that despite the complexity of mixed-integer variables it can be solved within an acceptable operating timeframe under realistic conditions. We simulate the impacts on financial returns for the generator of applying such an approach for a wind farm in the New York City region; the program can be successfully tuned to adjust the variability in returns based on the agent’s preferences, but does not outperform a more naive strategy of simply cutting off the quantity based on a percentile of the forecast distribution. Finally, we provide some discussion on how the act of “active” price creation through these risk-aware offer curves could come into conflict with the current regulatory environment, especially around the concept of exercise of market power, which has long relied on tying fair prices to ones that represent marginal fuel costs for generators.
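As background for the risk measure used above, the sketch below computes conditional value at risk from simulated shortfall scenarios: the mean of the worst (1 - alpha) fraction of outcomes. The scenario distribution is invented.

    import numpy as np

    def cvar(losses, alpha=0.95):
        """Conditional value at risk: mean loss in the worst (1 - alpha) tail."""
        var = np.quantile(losses, alpha)   # value at risk at level alpha
        return float(losses[losses &gt;= var].mean())

    rng = np.random.default_rng(1)
    # Hypothetical costs ($) of repurchasing power in the real-time market.
    losses = rng.gamma(shape=2.0, scale=500.0, size=10_000)
    print(f"VaR(95%)  = {np.quantile(losses, 0.95):,.0f}")
    print(f"CVaR(95%) = {cvar(losses, 0.95):,.0f}")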
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Equivariant symmetry breaking sets</title>
<link href="https://hdl.handle.net/1721.1/153901" rel="alternate"/>
<author>
<name>Xie, YuQing</name>
</author>
<id>https://hdl.handle.net/1721.1/153901</id>
<updated>2024-03-22T03:13:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Equivariant symmetry breaking sets
Xie, YuQing
Equivariant neural networks (ENNs) have been shown to be extremely useful in many applications involving some underlying symmetries. However, equivariant networks are unable to produce lower symmetry outputs given a high symmetry input. Spontaneous symmetry breaking occurs in many physical systems where we have a less symmetric stable state from an initial highly symmetric one. Hence, it is imperative that we understand how to systematically break symmetry for equivariant neural networks. In this work, we propose the first symmetry breaking framework that is fully equivariant. Our approach is general and applicable to equivariance under any group. To achieve this, we introduce the idea of symmetry breaking sets (SBS). Rather than redesign existing networks to output symmetrically degenerate sets, we design sets of symmetry breaking objects which we feed into our network based on the symmetry of our input. We show there is a natural way to define equivariance on these sets which gives an additional constraint. Minimizing the size of these sets equates to data efficiency. We show that bounding the size of these sets translates to the well studied group theory problem of finding complements of normal subgroups. We tabulate solutions to this problem for the point groups. Finally, we provide some examples of symmetry breaking to demonstrate how our approach works in practice.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Sharing and Traceability: Improving User Trust in Data Management within Open Banking and Beyond</title>
<link href="https://hdl.handle.net/1721.1/153900" rel="alternate"/>
<author>
<name>Magendanz, Quinn</name>
</author>
<id>https://hdl.handle.net/1721.1/153900</id>
<updated>2024-03-22T03:13:21Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Data Sharing and Traceability: Improving User Trust in Data Management within Open Banking and Beyond
Magendanz, Quinn
This paper traces declining trust in proper data handling over the past decades, reviews studies of User Trust, and explores existing frameworks that have been developed to secure, streamline, and make accessible the processes of receiving authenticated User consent, sharing User data, and expressing data usage and collection preferences. Together, these findings illustrate the customer need, market understanding, and optimal mode of integration that will demand and enable the development of the OTrace Traceability Protocol. This protocol allows a User to track the sharing and usage of their personal data after it has been provided to, or collected by, an initial Data Provider that has explicitly received User consent. For the purpose of monitoring and auditing, the Data Provider and Data Recipient submit records to a Traceability Server to record initial User consent for data sharing as well as ensuing sharing and usage of the User's data. This specification introduces new standards for recording data sharing and usage as Traceability Records into a consent framework which builds off elements of the OAuth 2.0, PAR, PKCE, JWT, JWS, and TB protocols as well as the FAPI and FDX standards for financial data sharing.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>MLVR: Regular Expression-Based Specification for Verified Model Checking of Hardware</title>
<link href="https://hdl.handle.net/1721.1/153899" rel="alternate"/>
<author>
<name>Kammer, Gabriel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153899</id>
<updated>2024-03-22T04:08:06Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">MLVR: Regular Expression-Based Specification for Verified Model Checking of Hardware
Kammer, Gabriel A.
Model checking is an approach to verification of finite-state systems which relies on iterating through all possible states and checking whether some condition holds at each state. One challenge with this approach is that in the majority of real-world systems, the number of states to traverse is too large to feasibly fully explore. In this thesis, we present MLVR (Multi-Layer Variable Regexp), a specification language designed for model checking against hardware system implementations. The syntax of MLVR is based on regular expressions, where we specify what traces of inputs and outputs from the system are acceptable. We offer support for variables to be remembered and later recalled, and we allow for treating the values of variables symbolically during model checking. This allows the state space of systems primarily dealing with variable input/output (for example, hardware buses) to be reduced enough that model checking is feasible for formal verification of the system. We provide a simplified language, SLVR (Single-Layer Variable Regexp), with some of the core features of MLVR and formal proofs of correctness for model checking with SLVR, implemented in the Coq proof assistant. The style and structure of the proofs about SLVR provide insight into how proofs of correctness of MLVR might be written, and they demonstrate solutions to some of the technical challenges raised in proving correctness of MLVR.
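The "remember a variable, recall it later" feature has a familiar analogue in ordinary regular expressions, shown below with a named backreference over a textual bus trace; this is an analogy only, not MLVR or SLVR syntax.

    import re

    # Toy traces: each event is "op:value;". The rule checked here is that a
    # read must return the value most recently written.
    trace_ok = "write:42;read:42;"
    trace_bad = "write:42;read:7;"

    # (?P&lt;v&gt;\d+) remembers the written value; (?P=v) recalls it at the read.
    spec = re.compile(r"write:(?P&lt;v&gt;\d+);read:(?P=v);")

    print(bool(spec.fullmatch(trace_ok)))   # True
    print(bool(spec.fullmatch(trace_bad)))  # False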
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detecting Multimodal Behaviors for Neurodegenerative Disease</title>
<link href="https://hdl.handle.net/1721.1/153898" rel="alternate"/>
<author>
<name>Berrones, Antonio</name>
</author>
<id>https://hdl.handle.net/1721.1/153898</id>
<updated>2024-03-22T03:36:18Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Detecting Multimodal Behaviors for Neurodegenerative Disease
Berrones, Antonio
Neurodegenerative diseases such as Parkinson’s and Alzheimer’s are incurable and affect millions of people worldwide. Early diagnosis is critical for improving quality of life for patients. Current methods rely on the use of tests administered and evaluated by clinicians. The digital Symbol Digit Test (dSDT) is a novel cognitive test that aims to distinguish between individuals with normal and impaired cognitive abilities. This thesis will develop a framework for processing collected participant eye-tracking and handwriting data and show its use in detecting specific multimodal learning behaviors. Furthermore, this thesis will explore recommendations for working with eye-tracking systems and outline future steps towards developing a multimodal classification model to automate early diagnosis of neurodegenerative disease.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Load balancing and memory optimizations for expert parallel training of large language models</title>
<link href="https://hdl.handle.net/1721.1/153897" rel="alternate"/>
<author>
<name>Wisdom, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/153897</id>
<updated>2024-03-22T03:32:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Load balancing and memory optimizations for expert parallel training of large language models
Wisdom, Daniel
Large language models (LLMs) are an effective way to solve many text-based machine learning tasks, but require huge amounts of computation to train and evaluate. Mixture of experts models have emerged as a way to reduce the amount of computation required for LLMs without compromising accuracy. It is necessary to distribute these large models across several devices, but this requires substantial communication between devices throughout training. Expert parallel is a promising approach to distributing the model across devices and communicating necessary information during training, especially for small batch sizes or models with large embedding sizes. Unfortunately, expert parallel creates an imbalanced workload across devices, causes errors with existing memory conservation strategies, and has poor overlapping of communication and computation. Some existing works solve the imbalanced workload by dropping excess tokens sent to experts above a capacity, but that may reduce accuracy.&#13;
&#13;
In my thesis I introduce ModuleFormer-PRM, an expert parallel training system that addresses these issues without dropping tokens. I explain a subtle error that occurs when trying to save memory and a strategy to prevent it. I analyze the distribution of workload among experts and show two approaches to better balance the workload across devices, leading to more stable memory use and faster runtime. I evaluate ModuleFormer-PRM using pretrained MoE models and show that my optimizations improve expert parallel throughput by 2.1×.
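A minimal sketch of the imbalance in question: route tokens to their top-1 expert and measure the load skew across experts. The random router below stands in for a learned gating network.

    import numpy as np

    NUM_EXPERTS = 8
    NUM_TOKENS = 4096

    rng = np.random.default_rng(0)
    # Stand-in for a learned router: skewed gate scores make some experts "hot".
    gate_logits = rng.normal(size=(NUM_TOKENS, NUM_EXPERTS)) + np.linspace(0.0, 1.5, NUM_EXPERTS)
    assignment = gate_logits.argmax(axis=1)   # top-1 expert per token

    loads = np.bincount(assignment, minlength=NUM_EXPERTS)
    print("tokens per expert:", loads.tolist())
    # A ratio well above 1.0 means the busiest device gates the step time.
    print(f"imbalance (max/mean): {loads.max() / (NUM_TOKENS / NUM_EXPERTS):.2f}")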
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ExoSpotter: Few Shot Relevance Feedback For Learning High Recall Exoplanet Search</title>
<link href="https://hdl.handle.net/1721.1/153896" rel="alternate"/>
<author>
<name>Živanović, Goran</name>
</author>
<id>https://hdl.handle.net/1721.1/153896</id>
<updated>2024-03-22T03:49:10Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">ExoSpotter: Few Shot Relevance Feedback For Learning High Recall Exoplanet Search
Živanović, Goran
Transit photometry is a widely used method for searching for exoplanets. For example, NASA’s Transiting Exoplanet Survey Satellite (TESS) utilizes this technique. However, identifying exoplanet candidates requires significant human effort to process light curves; a workflow with minimal human input is desirable. Unfortunately, very few labeled training data are available (i.e., light curves labeled as planet candidates), which makes automatic classification difficult. Here, we propose a new approach to identify planet candidates using relevance-feedback-accelerated few-shot learning. We generate many labeled synthetic light curves with and without transits by combining a simple physics-based transit injection model with a statistics-based generative model seeded with abundant non-transiting (“noise”) light curve data. After comparing multiple methods, we selected and trained a generic XGBoost classifier offline on only unfolded and diffused synthetic light curves. We adapted it online by feeding back a few observed and misclassified light curves. The result is an exoplanet classifier with the best currently known recall and precision.
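A minimal sketch of the offline-train, online-feedback loop described above, using the xgboost scikit-learn interface on placeholder features; the feature representation and dataset sizes are invented.

    import numpy as np
    from xgboost import XGBClassifier

    rng = np.random.default_rng(0)
    # Placeholder "light curve" features: transiting (label 1) vs. noise (label 0).
    X_synth = rng.normal(size=(2000, 32))
    y_synth = (X_synth[:, 0] + 0.5 * X_synth[:, 1] &gt; 0.8).astype(int)

    # Offline: train a generic classifier on synthetic data only.
    clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
    clf.fit(X_synth, y_synth)

    # Online: fold a few labeled, misclassified observed curves back in and refit.
    X_feedback = rng.normal(size=(5, 32))
    y_feedback = np.ones(5, dtype=int)
    clf.fit(np.vstack([X_synth, X_feedback]), np.concatenate([y_synth, y_feedback]))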
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Framework for LLM-based Lifelong Learning in Robot Manipulation</title>
<link href="https://hdl.handle.net/1721.1/153895" rel="alternate"/>
<author>
<name>Mao, Jerry W.</name>
</author>
<id>https://hdl.handle.net/1721.1/153895</id>
<updated>2024-03-22T03:51:00Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Framework for LLM-based Lifelong Learning in Robot Manipulation
Mao, Jerry W.
While robotic agents have become increasingly adept at low-level manipulation skills, they are increasingly being guided by large language model planners that decompose complex tasks into subgoals. Recent works indicate that these language models may also be effective skill learners. We develop HaLP 2.0, a modular and extensible framework for lifelong learning in human-assisted language planning, using GPT-4 to propose a curriculum of skills that is learned, used, and intelligently reused. Our system is designed for large-scale experiments, is equipped with a user-friendly interface, and is extensible to new skill learning frameworks. We demonstrate extensibility by comparing alternative implementations of our abstractions and improving overall performance by incorporating novel frameworks. Moreover, we conduct a focused study of GPT-4, using crowd-sourced scene and task datasets, finding that language models are capable agents of skill reuse and adaptation. We observe that while performance is dependent on language context, supplying optimized prompts can yield exceptional skill reuse behaviors. We envision that as manipulation primitives and large language models become more powerful, our system will be ready to synthesize their capabilities to create an autonomous system for lifelong learning that can one day be deployed in the real world.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards a neuro-symbolic approach to moral judgment</title>
<link href="https://hdl.handle.net/1721.1/153894" rel="alternate"/>
<author>
<name>Wing, Shannon P.</name>
</author>
<id>https://hdl.handle.net/1721.1/153894</id>
<updated>2024-03-22T03:53:51Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Towards a neuro-symbolic approach to moral judgment
Wing, Shannon P.
The goal of building a safe Artificial General Intelligence requires advancing beyond any single human being’s moral capacity. For the same reason that we desire democracy, a moral AGI will need to represent a wide array of perspectives accurately.&#13;
&#13;
While there has been a lot of work to push AI towards correctly answering unanimously agreed-upon moral questions, we take a different approach and ask: What do we do for the space where there is no single correct answer, but perhaps multiple? Where there are better and worse arguments? We investigate one complex moral question, where the empirical human data strays from unanimous agreement, evaluate ChatGPT’s success, and build towards a neuro-symbolic framework to improve upon this baseline. By investigating one problem in depth, we hope to uncover nuances, intricacies, and details that might be overlooked in a broader exploration. Our insights intend to spark curiosity, rather than provide answers.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Partitioning for Distributed Graph Learning using Multi-level Label Propagation</title>
<link href="https://hdl.handle.net/1721.1/153893" rel="alternate"/>
<author>
<name>Alkhafaji, Yaseen</name>
</author>
<id>https://hdl.handle.net/1721.1/153893</id>
<updated>2024-03-22T03:02:44Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Fast Partitioning for Distributed Graph Learning using Multi-level Label Propagation
Alkhafaji, Yaseen
Graph Neural Networks (GNNs) are a popular class of machine learning models that allow scientists to leverage machine learning techniques to perform inference on unstructured data. However, when graphs become too large, partitioning becomes necessary to allow for distributed computation. Standard graph partitioning methods for GNNs include random partitioning and the state-of-the-art METIS. Whereas METIS produces high-quality partitions, its preprocessing overheads make it impractical for extremely large graphs. Conversely, random partitioning is cheap to compute, but results in poor partition quality that causes GNN training to be bottlenecked by communication. In my thesis, I seek to prove that it is possible to reduce the data preprocessing overhead on small machines for large graph datasets used in ML while maintaining partition quality. In support of this goal, I design and implement a hierarchical label-propagation-based graph partitioning system known as PLaTE (Propagating Labels to Train Efficiently), partially based on the paper “How to Partition a Billion Node Graph” [18]. PLaTE runs 5.6x faster than METIS on the Open Graph Benchmark’s papers100M dataset, while consuming 4.9x less memory. PLaTE produces partitions that are as balanced as those of METIS, with comparable communication volumes under certain conditions. In real GNN training experiments, PLaTE has average epoch times comparable to METIS.
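A minimal single-level sketch of the label-propagation step at the heart of such partitioners: each node repeatedly adopts the most common label among its neighbors. PLaTE's multi-level hierarchy and balance constraints are omitted here.

    from collections import Counter

    def label_propagation(adj, num_iters=10):
        """adj: dict mapping each node to a list of its neighbors."""
        labels = {v: v for v in adj}   # start with one label per node
        for _ in range(num_iters):
            changed = False
            for v, neighbors in adj.items():
                if not neighbors:
                    continue
                top = Counter(labels[u] for u in neighbors).most_common(1)[0][0]
                if labels[v] != top:
                    labels[v], changed = top, True
            if not changed:
                break
        return labels

    # Two disjoint triangles collapse to two labels (two partitions).
    adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
    print(label_propagation(adj))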
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>URDF Studio: Tools for the visualization and verification of Universal Robot Description Format</title>
<link href="https://hdl.handle.net/1721.1/153891" rel="alternate"/>
<author>
<name>Nocito, Marco</name>
</author>
<id>https://hdl.handle.net/1721.1/153891</id>
<updated>2024-03-22T03:09:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">URDF Studio: Tools for the visualization and verification of Universal Robot Description Format
Nocito, Marco
A Unified Robot Description Format (URDF) file is an XML file specification used to model robotic systems. URDF files are difficult to modify and verify due to the complexity of the systems they model. We build a set of tools to aid in the modification and verification of these URDF files. This includes a web-based URDF visualizer as well as a Python URDF linter to check if a given URDF file follows formatting and content requirements. We also collect a dataset of representative URDFs.
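One representative check of the kind such a linter might perform, sketched below: every joint in a URDF must reference parent and child links that actually exist. This is a generic check written for illustration, not the tool's actual rule set.

    import xml.etree.ElementTree as ET

    def check_joint_links(urdf_text):
        """Return lint messages for joints referencing unknown links."""
        root = ET.fromstring(urdf_text)
        links = {l.get("name") for l in root.findall("link")}
        problems = []
        for joint in root.findall("joint"):
            for tag in ("parent", "child"):
                el = joint.find(tag)
                ref = el.get("link") if el is not None else None
                if ref not in links:
                    problems.append(f"joint {joint.get('name')!r}: bad {tag} {ref!r}")
        return problems

    urdf = """&lt;robot name="bot"&gt;
      &lt;link name="base"/&gt;
      &lt;joint name="j1" type="fixed"&gt;&lt;parent link="base"/&gt;&lt;child link="arm"/&gt;&lt;/joint&gt;
    &lt;/robot&gt;"""
    print(check_joint_links(urdf))  # flags the missing "arm" link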
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Controllable Transformation Matching Networks for Efficient RF Impedance Matching</title>
<link href="https://hdl.handle.net/1721.1/153890" rel="alternate"/>
<author>
<name>Rafa Islam, Khandoker N</name>
</author>
<id>https://hdl.handle.net/1721.1/153890</id>
<updated>2024-03-22T03:47:23Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Controllable Transformation Matching Networks for Efficient RF Impedance Matching
Rafa Islam, Khandoker N
Efficient and controlled delivery of radio-frequency (rf) power for semiconductor plasma processing typically relies upon tunable matching networks to transform the variable plasma load impedance to a fixed impedance suitable for most rf power amplifiers. Plasma applications require fast tuning speed with precise control from the matching networks while operating at a high frequency range. However, it is difficult to meet the requirements for many semiconductor plasma applications with conventional impedance matching solutions due to their limited response speeds. This slow speed comes from the presence of mechanical components in the matching network, since they can be tuned only mechanically. This work introduces a novel controllable transformation matching network (CTMN) intended to address the need for high-speed, tunable impedance matching.&#13;
&#13;
The design of the CTMN employs a two-port controllable switching network coupled with a high-Q passive network, enabling rapid voltage modulation and dynamic reactance tuning (dynamic frequency tuning) to swiftly accommodate both resistive and reactive load variations. Control strategies are introduced to maintain zero-voltage switching as needed to minimize switching losses. This approach is substantiated through simulations, which indicate the CTMN’s capability to achieve precise impedance matching with the potential for substantially faster response times (in the &#120583;s range) than traditional systems. It is anticipated that the proposed approach will enable ultra-fast, high-efficiency tunable impedance matching to address the needs of modern plasma systems.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparisons in End-to-End Pipeline Designs for Customized Document Information Extraction</title>
<link href="https://hdl.handle.net/1721.1/153889" rel="alternate"/>
<author>
<name>Kim, Seok Hyeon</name>
</author>
<id>https://hdl.handle.net/1721.1/153889</id>
<updated>2024-03-22T03:01:43Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Comparisons in End-to-End Pipeline Designs for Customized Document Information Extraction
Kim, Seok Hyeon
As businesses continue to adapt to the shift toward the digitalization of corporate tasks, one particular remaining financial and temporal bottleneck is the need for manual labor in interpreting digital documents and recording relevant information. Much work has been done, utilizing both machine learning techniques and traditional algorithmic approaches, to reduce the resources required for this task by developing automated solutions for extracting information from such documents. However, current commercially available solutions typically struggle either with generalization to unique document structures or with handling the range of potential details present within a document type. This thesis introduces and compares two distinct end-to-end pipeline architectures combining neural networks with algorithmic techniques to effectively extract custom key-value information, with one focusing on commercial invoices with consistent keys and the other on technical specification sheets with variable keys. With accuracy, generalizability, and modularity as priorities, their use cases, benefits, and limitations are explored alongside comparisons with existing commercial solutions.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Winning at Pokémon Random Battles Using Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/153888" rel="alternate"/>
<author>
<name>Wang, Jett</name>
</author>
<id>https://hdl.handle.net/1721.1/153888</id>
<updated>2024-03-22T03:37:06Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Winning at Pokémon Random Battles Using Reinforcement Learning
Wang, Jett
Pokémon battling is a challenging domain for reinforcement learning techniques, due to the massive state space, stochasticity, and partial observability. We demonstrate an agent which employs a Monte Carlo Tree Search informed by an actor-critic network trained using Proximal Policy Optimization with experience collected through self-play. The agent peaked at rank 8 (1693 Elo) on the official Pokémon Showdown gen4randombattles ladder, which is the best known performance by any non-human agent for this format. This strong showing lays the foundation for superhuman performance in Pokémon and other complex turn-based games of imperfect information, expanding the viability of methods which have historically been used in perfect-information games.
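For reference, the rule that typically couples this kind of tree search to the policy network is PUCT selection; the sketch below assumes that standard form, since the abstract does not spell out the agent's exact variant.

    import math

    def puct_select(children, c_puct=1.5):
        """children: dicts with prior p, visit count n, mean value q.
        Picks the child maximizing Q + c * p * sqrt(N_total) / (1 + n)."""
        total_visits = sum(ch["n"] for ch in children)
        def score(ch):
            return ch["q"] + c_puct * ch["p"] * math.sqrt(total_visits) / (1 + ch["n"])
        return max(range(len(children)), key=lambda i: score(children[i]))

    moves = [
        {"p": 0.50, "n": 10, "q": 0.10},   # well explored, mediocre value
        {"p": 0.30, "n": 1,  "q": 0.05},   # barely explored: big exploration bonus
        {"p": 0.20, "n": 0,  "q": 0.00},   # unexplored
    ]
    print(puct_select(moves))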
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Player Zero-Sum Markov Games with Networked Separable Interactions</title>
<link href="https://hdl.handle.net/1721.1/153885" rel="alternate"/>
<author>
<name>Park, Chanwoo</name>
</author>
<id>https://hdl.handle.net/1721.1/153885</id>
<updated>2024-03-22T03:07:36Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Multi-Player Zero-Sum Markov Games with Networked Separable Interactions
Park, Chanwoo
We study a new class of Markov games, (multi-player) zero-sum Markov Games with Networked separable interactions (zero-sum NMGs), to model the local interaction structure in non-cooperative multi-agent sequential decision-making. We define a zero-sum NMG as a model where the payoffs of the auxiliary games associated with each state are zero-sum and have some separable (i.e., polymatrix) structure across the neighbors over some interaction network. We first identify the necessary and sufficient conditions under which an MG can be presented as a zero-sum NMG, and show that the set of Markov coarse correlated equilibrium (CCE) collapses to the set of Markov Nash equilibrium (NE) in these games, in that the product of per-state marginalization of the former for all players yields the latter. Furthermore, we show that finding approximate Markov stationary CCE in infinite-horizon discounted zero-sum NMGs is PPAD-hard, unless the underlying network has a “star topology”. Then, we propose fictitious-play-type dynamics, the classical learning dynamics in normal-form games, for zero-sum NMGs, and establish convergence guarantees to Markov stationary NE under a star-shaped network structure. Finally, in light of the hardness result, we focus on computing a Markov non-stationary NE and provide finite-iteration guarantees for a series of value-iteration-based algorithms. We also provide numerical experiments to corroborate our theoretical results.
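In symbols, the separable structure described above can be sketched as follows; the notation is supplied here for illustration and is not quoted from the thesis.

    % Per-state payoffs decompose over the interaction network, with N(i)
    % the neighbors of player i, and are zero-sum in aggregate:
    R_i(s, a) = \sum_{j \in N(i)} r_{ij}(s, a_i, a_j),
    \qquad \sum_{i} R_i(s, a) = 0 \quad \text{for all } (s, a).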
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Why did the prediction change? Explaining changes in predictions as time progresses</title>
<link href="https://hdl.handle.net/1721.1/153884" rel="alternate"/>
<author>
<name>Wang, Wei-En Warren</name>
</author>
<id>https://hdl.handle.net/1721.1/153884</id>
<updated>2024-03-22T03:03:13Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Why did the prediction change? Explaining changes in predictions as time progresses
Wang, Wei-En Warren
Few works on machine learning (ML) explanations design explanations from the perspective of model deployment in the real-world. This work addresses the challenges of understanding ML models applied to event-based time-series data, concretizes two explanation scenarios, and proposes explanations based on changes in feature values, model predictions, and feature contributions for each deployment scenario. We study the prediction problem of turbine brake pad failures, where predictive time-series ML models were deployed in production. Our solution to help decision makers understand how the predictions are made include the development of a usable ML interface and explanations that are aware of the scenarios and contexts where the models are being used. We discuss the usage of ML explanations and the importance of the context under which the model is deployed. We showed our usable ML interface and the explanations with their corresponding scenarios built on top of the usable ML system, which consists of Pyreal, Sibyl-API, and Sibylapp.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Careful Design: Using multi-modal data and virtual reality to bridge the subjectivity gap in architectural space-making.</title>
<link href="https://hdl.handle.net/1721.1/153883" rel="alternate"/>
<author>
<name>Dojnow, Aleksy</name>
</author>
<id>https://hdl.handle.net/1721.1/153883</id>
<updated>2024-03-22T04:05:56Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Careful Design: Using multi-modal data and virtual reality to bridge the subjectivity gap in architectural space-making.
Dojnow, Aleksy
Architecture is a field that deals with the synthesis of many others. It is not just design and construction, but philosophy, art, technology, culture, user experience and all the intangible aspects of the human psyche. As such, architects, throughout their training and professional life, aim to build an intuitive sense of what makes any given space perform the way it is supposed to when experienced by the beholder. They support their decision-making with heuristics and rules of thumb that have been percolating since the beginning of human construction. This is usually a realm dictated by subjective experience and is, therefore, intrinsically imperfect in the way it reflects the architect’s desire and the user’s experience of the architecture. But does it have to be? &#13;
&#13;
Virtual Reality provides the unique affordance of rapidly testing and adapting virtual environments to the real-time biofeedback, eye-tracking, and self-reports of the beholder, something that brick-and-mortar architecture is unable to achieve at sufficient pace and scale. As a result, VR has the chance of lifting, if ever so slightly, the veil that separates objective reality from subjective experience. I want my thesis to attempt just that. I recognize that I may fail to do so entirely. Perhaps the gap between these two worlds is not meant to be bridged. But that should not be a reason not to try, as I believe the path I take may yield important and unexpected discoveries that, at the very least, may show where not to look and perhaps point in the direction we should try to go next.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>NLP City</title>
<link href="https://hdl.handle.net/1721.1/153882" rel="alternate"/>
<author>
<name>Nguyen, Thanh P. Q.</name>
</author>
<id>https://hdl.handle.net/1721.1/153882</id>
<updated>2024-03-22T03:10:03Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">NLP City
Nguyen, Thanh P. Q.
The rapid advancements in Artificial Intelligence (AI) have led to the development of complex and powerful models, resulting in opaque “black boxes” that hinder human understanding of their decision-making processes. This is especially true in the field of Natural Language Processing as large language models have become widely used and popularized in the form of chatbots and AI assistants. While there have been many attempts at explaining these models and concepts, most of them are directed at an audience already familiar with machine learning concepts. In this paper, I propose an approach to understanding existing concepts and models in NLP by simplifying them into intuitive narratives of towns and cities. By leveraging this more familiar context, the hope is to provide more engagement and information retention to non-technical audience members. The complete narrative can be found at nlp-city.vercel.app.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing Robust and Efficient Pseudo-transient Methods for Solving Neural Complementarity Problems in Julia</title>
<link href="https://hdl.handle.net/1721.1/153881" rel="alternate"/>
<author>
<name>Delelegn, Yonatan</name>
</author>
<id>https://hdl.handle.net/1721.1/153881</id>
<updated>2024-03-22T04:00:58Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Implementing Robust and Efficient Pseudo-transient Methods for Solving Neural Complementarity Problems in Julia
Delelegn, Yonatan
Traditional deep learning models typically consist of explicitly defined layers, such as fully connected and self-attention layers found in Transformers, which have been pivotal in recent advancements in computer vision and large language models. Selecting an appropriate architecture is critical for these models. However, even with optimal architecture, these models may fail to capture intricate relationships and dependencies within hidden states due to the inherent limitations of the chosen layers. Furthermore, in several scientific applications, particularly those simulating physical systems, there is a pressing need to integrate domain-specific knowledge into the modeling process, a task for which explicit neural networks may not be ideally suited.&#13;
&#13;
Recent studies, such as [2] and [4], have highlighted the potential of implicit layers in capturing more complex relationships and learning more stringent constraints than traditional neural networks. Beyond capturing intricate relationships, implicit layers offer the advantage of decoupling the solution process from the layer definition, thus facilitating faster training and the seamless integration of domain-specific knowledge. To enable implicit models to rival state-of-the-art performance, robust and efficient solvers are required for the forward pass. In this project, we focus on exploring stable and efficient solvers, specifically pseudo-transient methods, for resolving neural complementarity problems. We aim to derive the sensitivity analysis of these problems, implement it in Julia, and delve into the applications of differentiable complementarity problems in fields such as economics, game theory, and optimization.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Carrier Choice Models for Load Pricing in Digital Freight Platforms</title>
<link href="https://hdl.handle.net/1721.1/153880" rel="alternate"/>
<author>
<name>Li, Alexandra</name>
</author>
<id>https://hdl.handle.net/1721.1/153880</id>
<updated>2024-03-22T03:01:23Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Learning Carrier Choice Models for Load Pricing in Digital Freight Platforms
Li, Alexandra
With the expansion of digital commerce and growth of the economy, the freight transportation scene has adapted to reflect such changes. Digital freight platforms, acting as an intermediary between shippers and carriers, have gained traction to modernize the process and leverage technology to improve efficiency and increase the ease of use for all parties involved. Through their role in setting prices and presenting loads, these platforms can reduce the negative environmental impact of freight while simultaneously increasing the efficiency of carriers and satisfying the needs of shippers. The key challenge that these digital freight platforms face is understanding how carriers strategically select an action on the platform, which is difficult to capture despite having large amounts of data because naive estimation methods on historical data produce unrealistic results for different pricing methods.&#13;
&#13;
This thesis addresses this challenge by developing a simulation to evaluate the practicality of these estimates and iteratively revise the parameters based on constraints until they produce desirable results. In our research, we model the behavior through which carriers select a load to accept or reject with a 2-way latent class multinomial logit model. We tune the parameters of this model through a feedback loop where we perform a maximum likelihood estimate on the data to obtain model parameters, evaluate these parameters in the simulation, and use the results to perform a re-estimation to eventually obtain parameters that are both representative of the data and produce the expected results.&#13;
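A minimal sketch of the choice model named above: a latent-class multinomial logit mixes class-conditional softmax probabilities with latent class weights. The utilities and shares below are invented.

    import numpy as np

    def latent_class_mnl(utilities, class_weights):
        """utilities: (num_classes, num_alternatives) systematic utilities.
        Returns the mixture choice probabilities over alternatives."""
        exp_u = np.exp(utilities - utilities.max(axis=1, keepdims=True))
        class_probs = exp_u / exp_u.sum(axis=1, keepdims=True)  # per-class softmax
        return class_weights @ class_probs                      # mix over classes

    # Two hypothetical carrier classes scoring three candidate loads.
    utilities = np.array([[1.2, 0.3, -0.5],    # class 1: price-sensitive
                          [0.1, 0.9, 0.4]])    # class 2: lane-sensitive
    weights = np.array([0.6, 0.4])             # latent class shares
    print(latent_class_mnl(utilities, weights))  # probabilities sum to 1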
&#13;
We use this system to evaluate optimized pricing and load presentation methods. We experiment with bundling, or grouping a sequence of loads together, to reduce the overhead time carriers spend finding suitable loads and to produce routes with lower CO2 emissions. We solve a mixed-integer linear program that maximizes the total utility of bundles proposed by the platform to generate few and non-overlapping bundles. We develop a dynamic-programming-based pricing method to generate carrier- and time-specific prices for bundles. We evaluate these methods in our model and analyze the effects of such methods on carrier interactions and behavior. Although these methods do not yet show a substantial decrease in freight carbon emissions, we have laid the groundwork for modeling this complex system and hope that future work can reduce the negative environmental impact that the freight transportation sector leaves on this planet.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Event-Driven Distributed Task Orchestration System with Applications to Automated PCB Design</title>
<link href="https://hdl.handle.net/1721.1/153879" rel="alternate"/>
<author>
<name>Perez, Sergio A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153879</id>
<updated>2024-03-22T03:14:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Event-Driven Distributed Task Orchestration System with Applications to Automated PCB Design
Perez, Sergio A.
Printed circuit board (PCB) design is the process of taking a board schematic and design constraints and realizing a manufacturable design. Electronic Design Automation (EDA) software allows humans to manually design PCB’s by placing components and routing the electrical connections required. Allegro X AI by Cadence is a cloud-based tool that utilizes machine learning and optimization to automatically generate PCB designs.&#13;
&#13;
Microservice-based architectures have proven to be popular due to their flexibility and scalability. X AI’s current process for generating a printed circuit board design is monolithic with logically separate stages, making it difficult to support flexible configuration of the ordering of downstream stages or branching off the current design and attempting different versions of a stage by varying input parameters and constraints.&#13;
&#13;
In this thesis, we design a microservice-based architecture and orchestration system for automated PCB design. Our design structures the application as a directed acyclic graph (DAG) of microservices and achieves the following goals: decouples the stages of the design generation flow, supports flexible configuration and ordering of downstream stages, and brings the power of elastic compute from the cloud to the PCB design generation process.
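A minimal sketch of the orchestration idea: represent the stages as a DAG and dispatch each microservice once its upstream stages finish (Kahn's algorithm). The stage names are placeholders, not the product's actual pipeline.

    from collections import deque

    # Hypothetical stage graph: edges point from a stage to the stages it feeds.
    dag = {
        "schematic_import": ["placement"],
        "placement": ["routing"],
        "routing": ["verification"],
        "verification": [],
    }

    def dispatch_order(dag):
        """Kahn's algorithm: yield stages in a dependency-respecting order."""
        indegree = {v: 0 for v in dag}
        for downstream in dag.values():
            for v in downstream:
                indegree[v] += 1
        ready = deque(v for v, d in indegree.items() if d == 0)
        while ready:
            stage = ready.popleft()
            yield stage
            for v in dag[stage]:
                indegree[v] -= 1
                if indegree[v] == 0:
                    ready.append(v)

    print(list(dispatch_order(dag)))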
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ZeroWD: Supporting Zero-Waste Garment Design with Linked Edits</title>
<link href="https://hdl.handle.net/1721.1/153878" rel="alternate"/>
<author>
<name>Zhang, Ruowang</name>
</author>
<id>https://hdl.handle.net/1721.1/153878</id>
<updated>2024-03-22T04:12:57Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">ZeroWD: Supporting Zero-Waste Garment Design with Linked Edits
Zhang, Ruowang
In traditional garment manufacturing, the way fabrics are cut produces significant waste due to inefficiencies in the design and layout of the garment panels on fabric. Recently, fashion designers have begun to explore different ways to design and lay out garment panels more efficiently. An extreme example of this efficient design process is zero-waste fashion design, which aims to use all available fabric in the resulting garment. Currently, many zero-waste fashion designers manually cut out the 2D patterns and experiment with their 3D shape. With zero-waste design being inherently strictly constrained by the dimensions of the fabric, designers need to perform meticulous calculations for tasks such as resizing and restyling. In our work, we propose ZeroWD, a novel interactive design tool that assists zero-waste fashion design by bringing pattern layout and cutting earlier into the design process. With our tool, designers can design zero-waste garment panels and simulate the garment’s 3D shape with real-time feedback. By embedding zero-waste design constraints into the system, we enable designers to focus on creative design rather than tedious constraint solving. Our user study demonstrates that ZeroWD can help fashion designers create garments with minimal waste.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probing Steady-state and Post-transplant Blood System Dynamics with Computational Analysis and Lineage-tracing</title>
<link href="https://hdl.handle.net/1721.1/153874" rel="alternate"/>
<author>
<name>Kuoch, Michael K.</name>
</author>
<id>https://hdl.handle.net/1721.1/153874</id>
<updated>2024-03-22T03:32:37Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Probing Steady-state and Post-transplant Blood System Dynamics with Computational Analysis and Lineage-tracing
Kuoch, Michael K.
Bone marrow transplants are an important tool in modern medicine due to their ability to treat a wide range of diseases, spanning both cancerous and non-cancerous conditions. We aim to study the blood system dynamics using sequencing data from paired pre-transplant and post-transplant samples and look for potential expression profiles that may be biased toward successful bone marrow engraftment. We find that some genes have increased expression in post-transplant samples compared to pre-transplant samples. We also discuss using clonal lineage tracing to track cell clones throughout the transplant process and present some preliminary analyses.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling, Manufacturing, and Experimental Validation of an Electric Machine for Aircraft Propulsion</title>
<link href="https://hdl.handle.net/1721.1/153873" rel="alternate"/>
<author>
<name>Andersen, Henry</name>
</author>
<id>https://hdl.handle.net/1721.1/153873</id>
<updated>2024-03-22T04:02:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Modeling, Manufacturing, and Experimental Validation of an Electric Machine for Aircraft Propulsion
Andersen, Henry
The work presented in this thesis is part of an effort at MIT to develop a 1-MW electric machine which achieves the specific power necessary for hybrid-electric aviation: 13 kW/kg [1]. The models for torque and core loss used in the design of the 1-MW machine are revised and expanded based on experimental results obtained from a partially-manufactured prototype to guide the design of future high specific-power electric machinery.&#13;
&#13;
To calculate the torque produced by the machine, the air-gap field created by a segmented Halbach array rotor is derived from Maxwell’s Equations. The closed-form solution for the air-gap field matches Finite Element Analysis (FEA) to within 1% and experimental data from the manufactured prototype to within the tolerance of the experiment. A method for modeling a slotted stator as a smooth cylinder with a surface current is applied to the stator of the 1-MW machine, and the average torque and torque ripple are calculated using the Lorentz-Kelvin force density. The analytical torque calculation computes 100,000 times faster than 2D FEA (0.56 ms vs. 44 s), and matches FEA to within 1.2%, making it ideal for initial machine design.&#13;
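As a schematic of the underlying Lorentz-force calculation (generic symbols, not necessarily the thesis's notation): modeling the slotted stator as a smooth cylinder of radius R_s and active length \ell carrying surface current density K(\theta) in the radial air-gap field B_r(\theta) gives a torque of the form

    T = R_s^2 \, \ell \int_0^{2\pi} K(\theta)\, B_r(\theta)\, d\theta

which is a closed-form integral over known fields, and hence far cheaper to evaluate than a full FEA solve.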
&#13;
An experimental procedure is developed to measure the core loss and B-H curve of an iron lamination stack. This procedure is applied to various toroid samples and a stack of slotted stator laminations. A conventional lamination bonding process is found to raise core loss by 20% for 0.1-mm iron-cobalt laminations. An alternative stator-core manufacturing process, which results in no impact on core loss, is identified and experimentally verified. Based on the measured core loss of a stack of stator laminations, the 1-MW prototype is expected to remain within the thermal limits imposed by the winding insulation.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Kinetic Inductance Characterization of Thin 2H-NbSe₂ Superconductor Using Circuit Quantum Electrodynamics</title>
<link href="https://hdl.handle.net/1721.1/153871" rel="alternate"/>
<author>
<name>Zaman, Sameia</name>
</author>
<id>https://hdl.handle.net/1721.1/153871</id>
<updated>2024-03-22T03:46:30Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Kinetic Inductance Characterization of Thin 2H-NbSe₂ Superconductor Using Circuit Quantum Electrodynamics
Zaman, Sameia
We developed hybrid superconducting microwave resonators incorporating van der Waals (vdW) superconductors to explore the microwave response of superconducting 2D materials in the GHz regime. We first established a reliable technique to contact thin NbSe₂, entirely encapsulated in hexagonal boron nitride (hBN), with a coplanar Al resonator. Then we fabricated hybrid Al-NbSe₂ resonators and measured the kinetic inductance of thin NbSe₂ in the low-temperature and low-photon-number limits. In this thesis, we discuss the observed relation between the kinetic inductance and the thickness of the thin NbSe₂. Furthermore, we characterize the DC bias current and microwave power dependence of the kinetic inductance in the hybrid Al-NbSe₂ resonators. Our approach contributes to understanding both the DC and AC properties of superconducting 2D materials, with potential implications for their utilization in emerging technologies.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Land Material Geometry: Spline Construction with Invasive Species in a time of Water Crisis in the Colorado River Basin</title>
<link href="https://hdl.handle.net/1721.1/153870" rel="alternate"/>
<author>
<name>Marshall Jr, William D.</name>
</author>
<id>https://hdl.handle.net/1721.1/153870</id>
<updated>2024-03-22T03:43:25Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Land Material Geometry: Spline Construction with Invasive Species in a time of Water Crisis in the Colorado River Basin
Marshall Jr, William D.
This thesis speculates on architectural systems that act in a reciprocal and reparative relationship with the local environment and ecology rather than through extractive means. It suggests sourcing material from tamarisk, an invasive species in southwestern desert river systems that exacerbates strains on water availability, thus removing the plant while retaining its sequestered carbon as construction material. Active bending of this raw natural timber allows a low-tech means of approximating structural geometry for adobe construction.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Compositional Abstract Models Incrementally&#13;
for Efficient Bilevel Task and Motion Planning</title>
<link href="https://hdl.handle.net/1721.1/153869" rel="alternate"/>
<author>
<name>McClinton III, Willie B.</name>
</author>
<id>https://hdl.handle.net/1721.1/153869</id>
<updated>2024-03-22T03:58:14Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Learning Compositional Abstract Models Incrementally&#13;
for Efficient Bilevel Task and Motion Planning
McClinton III, Willie B.
In robotic domains featuring continuous state and action spaces, planning over long-horizon tasks is fundamentally hard, even when the transition model is deterministic and known. One way to alleviate this challenge is to perform bilevel planning with abstractions, where a high-level search for abstract plans is used to guide planning in the original transition space. In this thesis, we propose an algorithm for learning predicates from demonstrations, eliminating the need for manually specified state abstractions. Our key idea is to learn predicates by optimizing a surrogate objective that is tractable but faithful to our real efficient-planning objective. We use this surrogate objective in a hill-climbing search over predicate sets drawn from a grammar, which we call predicate invention. However, our research highlights another limitation in current symbolic operator learning techniques: they often fall short in robotics scenarios where the robot’s actions result in numerous inconsequential alterations to the abstract state. This limitation arises mainly because these techniques aim to precisely predict every observed change in that state, and as the execution horizon grows, so does the accumulated complexity of the predictions. We study this separately and introduce a method in which the operators are induced to predict selectively, focusing solely on changes crucial for abstract planning to meet specific subgoals, which we call our operator learning procedure. Our contributions include a predicate invention procedure based on a hill-climbing search over predicate sets, and a planning-driven operator learning objective, also based on hill-climbing search, that models only the changes necessary for abstract planning and preserves the compositionality of operators. We evaluate the learned predicates and operators across a few toy environments and dozens of tasks from the demanding BEHAVIOR-100 benchmark.
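A minimal sketch of the hill-climbing search over predicate sets (the candidate predicates and the surrogate scoring function below are hypothetical stand-ins for those drawn from the grammar):

    def hill_climb(candidates, score):
        """Greedily add the predicate that most improves the surrogate objective."""
        selected, best = set(), score(set())
        improved = True
        while improved:
            improved = False
            for p in sorted(candidates - selected):
                s = score(selected | {p})
                if s > best:
                    selected, best, improved = selected | {p}, s, True
        return selected

    # Toy usage: reward covering a target pair of predicates, penalize set size.
    target = {"Holding", "OnTable"}
    def score(preds):
        return len(preds.intersection(target)) - 0.1 * len(preds)

    print(hill_climb({"Holding", "OnTable", "NextTo", "Clear"}, score))
    # {'Holding', 'OnTable'}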
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nanofabrication of flexible thin-film bioelectronics for long-term stable neural signal recording</title>
<link href="https://hdl.handle.net/1721.1/153868" rel="alternate"/>
<author>
<name>Lee, Ariel J.</name>
</author>
<id>https://hdl.handle.net/1721.1/153868</id>
<updated>2024-03-22T03:09:08Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Nanofabrication of flexible thin-film bioelectronics for long-term stable neural signal recording
Lee, Ariel J.
Establishing a long-term stable and effective interface for brain is a significant milestone for all neural implants. Recent studies have demonstrated that tissue-level soft and flexible materials and devices can provide such stability for neural implants. Therefore, engineering suitable materials and developing fabrication methods for soft and flexible thin-film electric probes to further exploit their potential are essential to advancing the field. This thesis demonstrates the comprehensive methods required for developing electrical recording devices and for analyzing the acquired neural data. It presents the design, fabrication, and in vivo implantation of flexible thin-film electronic devices. The materials and fabrication processes are engineered to create structures that can more closely mimic the mechanical properties of brain tissue, in contrast to traditional stiff neural probes. The device designs in this work feature serpentine-shaped ribbons for stretchability and tetrode-like electrode configuration to enable the measurement of single-unit neural activities.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SigPro: Enabling Subject Matter Expert Guidance in Feature Engineering</title>
<link href="https://hdl.handle.net/1721.1/153867" rel="alternate"/>
<author>
<name>Xu, Guanpeng Andy</name>
</author>
<id>https://hdl.handle.net/1721.1/153867</id>
<updated>2024-03-22T03:08:24Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">SigPro: Enabling Subject Matter Expert Guidance in Feature Engineering
Xu, Guanpeng Andy
In this thesis, we detail developments to SigPro, a feature engineering library in Python guided by Subject Matter Experts (SMEs). SigPro includes a suite of data processing building blocks, or primitives, as well as an algorithm to combine primitives to form feature engineering pipelines. These pipelines are in turn used to construct features for machine learning.&#13;
&#13;
SMEs, through a low-code interface, have several ways to dictate the feature engineering process. First, subject matter experts can construct a feature engineering pipeline for signal data simply by specifying a sequence of data transformations and aggregations (building blocks); SigPro then automatically composes a primitive graph and thus a feature engineering pipeline. Second, subject matter experts can also specify parameters and hyperparameters for each building block through SigPro’s user-friendly API. These methods encourage SMEs to incorporate their domain knowledge through informative feature transformations and carefully chosen parameter values.&#13;
&#13;
When existing building blocks fall short, SigPro facilitates efficient development of new primitives. To this end, we streamline the process for the contribution of new primitives while ensuring their seamless integration into existing pipelines. These improvements ensure that SigPro provides an intuitive yet effective solution where subject matter experts can leverage their domain knowledge to generate relevant, explanatory features that can greatly improve the performance of downstream predictive modeling.
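A library-agnostic sketch of the pipeline idea, with transformations applied in sequence followed by aggregations that reduce the signal to features (the function names are illustrative stand-ins, not SigPro's actual API):

    import numpy as np

    # Hypothetical building blocks: one transformation, two aggregations.
    def fft_magnitude(signal):
        return np.abs(np.fft.rfft(signal))

    def mean_value(values):
        return float(np.mean(values))

    def std_value(values):
        return float(np.std(values))

    def run_pipeline(signal, transformations, aggregations):
        # Transformations run in sequence; each aggregation yields one feature.
        for t in transformations:
            signal = t(signal)
        return {a.__name__: a(signal) for a in aggregations}

    signal = np.sin(np.linspace(0.0, 10.0, 256))
    print(run_pipeline(signal, [fft_magnitude], [mean_value, std_value]))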
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bottom-Up Standardization For Data Preparation</title>
<link href="https://hdl.handle.net/1721.1/153866" rel="alternate"/>
<author>
<name>Lai, Eugenie Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/153866</id>
<updated>2024-03-22T03:59:57Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Bottom-Up Standardization For Data Preparation
Lai, Eugenie Y.
Data preparation is an essential step in every data-related effort, from scientific projects in academia to data-driven decision-making in industry. Typically, data preparation is not the novel or interesting piece of a project — it transforms raw data into a format that enables further innovative work. Because data preparation scripts are never intended to be interesting, are project-specific, and are written in general-purpose languages, they can be tedious to understand and check. As a result, data preparation scripts can easily become a breeding ground for poor engineering and statistical practices.&#13;
&#13;
Ideally, data preparation scripts are “admirably boring”: they should serve the project, but otherwise be as simple and as standard as possible. We propose a bottom-up script standardization framework that takes a user’s data preparation script and transforms it into a simpler, more standardized, more boring version of itself. Our framework treats the user’s input script not as an unchangeable definition of correctness, but as a semantic sketch of the user’s overall intent. We present an algorithmic framework and implement a prototype system. We evaluate our approach against state-of-the-art methods, including GPT-4, on six real-world datasets. Our approach improves script standardization by 39.5% without meaningfully changing the user’s intent, whereas GPT-4 achieves 2.9%.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A New Framework for Refraction-Based Image Verification</title>
<link href="https://hdl.handle.net/1721.1/153865" rel="alternate"/>
<author>
<name>Simhon, Sage</name>
</author>
<id>https://hdl.handle.net/1721.1/153865</id>
<updated>2024-03-22T03:07:03Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A New Framework for Refraction-Based Image Verification
Simhon, Sage
We propose a novel approach to image verification that aims to unify optics, computer vision, computer graphics, and deep learning for active image protection. Our approach builds upon previous work that places spherical refractive objects in the scene to collectively act as a signature of authenticity; however, we hypothesize that refraction models can be learned independent of the scene and for arbitrary refractive objects. We develop a framework for learning such refraction models, where each model can be considered a key to authenticate an image or video. In this way, complex refraction models inherent to the physics of arbitrarily shaped objects can be used to increase security without requiring a closed-form solution for their optical behavior. The approach involves scanning a laser over the scene and learning an image of its warping transformation by the refractive object. With a learned model, detecting and localizing manipulations in an image is accomplished by validating consistency between the primary, unverified image and a reconstruction based on the warped image in the object. This is demonstrated in simulation, using a photorealistic rendering engine to collect synthetic training data that captures real-world behavior. We present both qualitative and quantitative results demonstrating the capabilities of our system, including computational speedups and practical improvements compared to prior work, as well as an analysis across different resolutions, model settings, and limiting factors. We demonstrate that with a sufficient sampling resolution, we can detect and localize content additions, content removals, and texture changes. Our key contribution is a novel integration of physical laws with deep learning in the context of image forensics. Further, the generalization introduced by our deep learning approach allows us to enhance image verification security.
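A toy sketch of the consistency check (the images and the "learned" model below are placeholders; a real model would invert the object's warping):

    import numpy as np

    rng = np.random.default_rng(0)
    original = rng.random((32, 32))

    def learned_unwarp(warped):
        return warped                       # identity stand-in for the learned model

    refracted_view = original.copy()        # what the refractive object encodes
    unverified = original.copy()
    unverified[8:12, 8:12] = 1.0            # simulated content manipulation

    residual = np.abs(unverified - learned_unwarp(refracted_view))
    print((residual > 0.2).any())           # True: manipulation detected
    print(np.argwhere(residual > 0.2)[0])   # and localized to a pixel region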
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Love in the Fast Lane: Not-so-new Models for American Stewardship and Preservation</title>
<link href="https://hdl.handle.net/1721.1/153864" rel="alternate"/>
<author>
<name>Gideonse, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/153864</id>
<updated>2024-03-22T03:38:47Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Love in the Fast Lane: Not-so-new Models for American Stewardship and Preservation
Gideonse, Lauren
From 1914 to 1919, American Portland Cement funded the construction of eight mile-long stretches of paved road across the Midwest, known as seedling miles, as part of the campaign to garner support for the Lincoln Highway project. In internal memoranda, the company called these seedlings “object lessons.” The seedlings, by creating a physical encounter with the space between the status quo and what could be, manufactured a desire in drivers. The incredibly successful campaign shifted responsibility for the road system to the public domain and cemented the road as a site of civic investment. It also invented a mode of experience that facilitates noticing but is not in response to failure or crisis.&#13;
&#13;
This thesis begins with one hundred and fifty sites across the United States – domestic buildings that are particularly old for their context – documented through two road trips. The road trip as a collection mechanism sets the terms: the road and the house are considered together. They inform and contextualize each other. The road is both a critical contemporary network of resources and people, and the historical agent of rationalizing, mobilizing, and capitalizing on the American landscape. The historic home is not considered in a vacuum but always in time, in relationship to the landscape and through its frontage. By looking carefully at these sites through tailored analytical tools, this thesis identifies tendencies, both at the time of construction and in the behavior of the buildings since, that reflect an alternate set of values from those that shape building and preservation practices today.&#13;
&#13;
From these sites the thesis composes, and in the process re-evaluates, the history of a house and the road. The objects of this research form ulterior narratives – derivative and projective – that cast an ill-fated romance between forms of stewardship and systems of capital. The results, a collection of slow media, construct and reconstruct encounters with an altered landscape. Like the seedling miles from which the contemporary American highway system grew, this thesis utilizes the “object lesson” as a mechanism to prompt reconsideration. The thesis puts forward a new stretch of seedling road to manufacture a desire for not-so-new forms of stewardship and preservation that are both born-of and particular-to the American context.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High A-scan rate optical coherence tomography angiography for blood flow speed quantification in the retina</title>
<link href="https://hdl.handle.net/1721.1/153860" rel="alternate"/>
<author>
<name>Hwang, Yunchan</name>
</author>
<id>https://hdl.handle.net/1721.1/153860</id>
<updated>2024-03-22T03:01:02Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">High A-scan rate optical coherence tomography angiography for blood flow speed quantification in the retina
Hwang, Yunchan
Optical coherence tomography angiography (OCTA) offers non-invasive and depth-resolved imaging of the retinal vasculature. While OCTA is widely used to study retinal disease, it traditionally provides limited information about blood flow speed. This thesis introduces a second-generation variable interscan time analysis (VISTA) OCTA, designed to evaluate a quantitative surrogate marker for blood flow speed in the vasculature. At the capillary level, spatially compiled OCTA and a simple temporal autocorrelation model, ρ(τ) = exp(-ατ), are used to evaluate the temporal autocorrelation decay constant, α, as a marker for blood flow speed. A 600 kHz A-scan rate swept-source OCT prototype instrument provides short-interscan-time OCTA and fine A-scan spacing acquisition, while maintaining multi-mm² fields of view for human retinal imaging. The cardiac pulsatility in α is demonstrated, and its synchronization across retinal capillaries is quantified. The repeatability of α measurements is evaluated at multiple spatial levels. This new approach reveals varying α values across different retinal capillary plexuses in healthy eyes, and demonstrates spatial correspondence between high blood flow speeds and the centers of choriocapillaris lobules. VISTA OCTA images of eyes with diabetic retinopathy and age-related macular degeneration are also presented. By providing blood flow speed information, the second-generation VISTA aims to enhance and complement traditional structural vasculature imaging offered by OCTA. These advancements promise to enable clinical studies of blood flow speed alterations in retinal diseases, offering earlier markers for disease detection, progression, and response to treatment.
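As a small illustration of the fitting step (synthetic interscan times and correlation values, not the thesis's acquisition protocol), α can be recovered by least squares:

    import numpy as np
    from scipy.optimize import curve_fit

    def rho(tau, alpha):
        return np.exp(-alpha * tau)

    tau = np.array([1.5e-3, 3.0e-3, 4.5e-3])   # interscan times in seconds (assumed)
    measured = rho(tau, 400.0)                 # stand-in for OCTA-derived correlations
    alpha_hat, _ = curve_fit(rho, tau, measured, p0=[100.0])
    print(alpha_hat[0])                        # ~400: faster flow, faster decay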
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microarchitecture Categorization and Pre-RTL Analytical Modeling for Sparse Tensor Accelerators</title>
<link href="https://hdl.handle.net/1721.1/153859" rel="alternate"/>
<author>
<name>Feldman, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/153859</id>
<updated>2024-03-22T03:17:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Microarchitecture Categorization and Pre-RTL Analytical Modeling for Sparse Tensor Accelerators
Feldman, Andrew
Specialized microarchitectures for exploiting sparsity have been critical to the design of sparse tensor accelerators. Sparseloop introduced the Sparse Acceleration Feature (SAF) abstraction, which unifies prior work on sparse tensor accelerators into a taxonomy of sparsity optimizations.&#13;
&#13;
Sparseloop succeeds at analytical pre-RTL modeling of architecture-level metrics for sparse tensor accelerators, accurately capturing the beneficial impact of SAFs on overall design cost. However, Sparseloop lacks cost models for the microarchitectural primitives and design topologies required for implementing SAFs (referred to in this work as “SAF microarchitectures”).&#13;
&#13;
Analysis of prior works shows that SAF microarchitectures may or may not constitute a significant overhead, depending on the particular design; thus it is desirable to have pre-RTL models which help anticipate SAF microarchitecture overheads.&#13;
&#13;
Building on the Sparseloop SAF abstraction, this work attempts to synthesize a number of prior works into a concise, unified, and effective framework for doing research on SAF microarchitectures. This overall framework comprises (1) a conceptual framework which facilitates concise description and design-space exploration for SAF microarchitectures, (2) a software framework for compiling Sparseloop-style SAF descriptions into microarchitecture designs and analytical models, and (3) a component library including specific SAF microarchitecture subcomponent designs as well as RTL to support implementation.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preventing CSV Injection Attacks With A Browser Extension</title>
<link href="https://hdl.handle.net/1721.1/153858" rel="alternate"/>
<author>
<name>Dedhia, Ray</name>
</author>
<id>https://hdl.handle.net/1721.1/153858</id>
<updated>2024-03-22T03:13:55Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Preventing CSV Injection Attacks With A Browser Extension
Dedhia, Ray
CSV injection occurs when an attacker injects malicious code into a CSV file, and this code is executed when the file is opened in a spreadsheet program. This type of attack is possible because most spreadsheet programs have a set of built-in functions that run automatically when a CSV file is opened with the spreadsheet program. Given the widespread use of CSV files and of programs that interpret them, the risk posed by CSV injection attacks is significant.&#13;
&#13;
In this study, I present a browser extension designed to sanitize all downloaded CSV files by eliminating any harmful code while preserving the integrity of benign code. The extension does this by first finding all formulas within a CSV file and determining whether or not each one has the potential to contain malicious code. If the extension determines that a formula may be malicious, it edits the cell containing that formula so that spreadsheet programs will interpret the cell as text and will not execute it.
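A simplified sketch of the sanitization idea (this version conservatively flags every formula-trigger character, whereas the extension additionally distinguishes benign from malicious formulas):

    import csv, io

    # Cells beginning with these characters are treated as formulas by most
    # spreadsheet programs; prefixing a quote forces text interpretation.
    TRIGGERS = ("=", "+", "-", "@", "\t", "\r")

    def sanitize_cell(cell):
        return "'" + cell if cell.startswith(TRIGGERS) else cell

    def sanitize_csv(text):
        rows = csv.reader(io.StringIO(text))
        out = io.StringIO()
        csv.writer(out).writerows([sanitize_cell(c) for c in row] for row in rows)
        return out.getvalue()

    print(sanitize_csv("name,note\nalice,=1+2\n"))
    # name,note
    # alice,'=1+2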
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Memory in Reinforcement-Learned Agents for Smarter Lateral Movement</title>
<link href="https://hdl.handle.net/1721.1/153857" rel="alternate"/>
<author>
<name>Johnson Schofield, Catherine</name>
</author>
<id>https://hdl.handle.net/1721.1/153857</id>
<updated>2024-03-22T03:12:11Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Exploring Memory in Reinforcement-Learned Agents for Smarter Lateral Movement
Johnson Schofield, Catherine
Computer networks are the backbone of most organizations’ technology infrastructure. Yet they remain susceptible to many hidden vulnerabilities. One proactive approach to uncovering and mitigating threats is red teaming. Red teams imitate attackers to find and exploit vulnerabilities in a network. This practice removes uncertainty about which parts of a network attackers could compromise. A central component of red teaming is lateral movement, in which a red team operator moves through a network by traversing workspaces on that network. Each step in the lateral movement process requires careful decision-making given the information gleaned so far, the consequences of past actions, and knowledge about workspaces on the network. The process is complex and typically requires years of experience for a red team operator to master.&#13;
&#13;
Automating red teaming with machine learning, and specifically reinforcement learning (RL), could help secure a domain more efficiently and allow operators to focus on higher-level decisions. However, unlike humans, traditional RL agents forget details from past experiences. This is a problem because remaining stealthy requires remembering consequences of past actions. By adding a memory architecture to the agent, the agent can remember these consequences and make better action choices in the lateral movement environment.&#13;
&#13;
I propose several variations of Long Short-Term Memory (LSTM), transformers, and Hierarchical Chunk Attention Memory (HCAM), which help the agents better remember past events inside a memory-enhanced lateral movement simulation. I compare the performance of a control agent, an RL agent with a linear neural network, to the performance of memory agents, RL agents with architectures capable of determining dependencies on past events. I test the agents on a control environment that does not include a memory task, and on a memory environment that does.&#13;
&#13;
Agents with the memory architectures perform better than the control agent on the memory environment, at varying levels. I show that agents with an LSTM outperform the control agent on the memory environment by about 25%, matching the performance of the control agent on the control environment. While the HCAM and transformer agents do not perform as well as the LSTM agents, they still show the ability to slightly outperform the control agents on the memory environment. They also show potential for performing well in more generic memory tasks.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Cloud Database Performance: General-Purpose Compression and Workload-Driven Layout</title>
<link href="https://hdl.handle.net/1721.1/153856" rel="alternate"/>
<author>
<name>Piszczek, Miloslawa</name>
</author>
<id>https://hdl.handle.net/1721.1/153856</id>
<updated>2024-03-22T03:16:46Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Enhancing Cloud Database Performance: General-Purpose Compression and Workload-Driven Layout
Piszczek, Miloslawa
Cloud-based disaggregated database systems that divide data across a data layer and a storage layer connected by network calls are popular for analytical query loads. This thesis explores two topics critical to building performant systems of this type: space optimization and latency minimization.&#13;
&#13;
First, I propose ColumnConstruct, a general-purpose machine-learning compression method that uses a novel information-maximizing approach to building input features. ColumnConstruct is competitive with existing ML compression methods for categorical data, but is not able to perform lossless compression on arbitrary tabular data. This limitation, as well as the additional compression and decompression latency, makes it insufficient to improve query latency within a database management system. Next, I investigate whether workload-aware data layout combined with caching can improve query times without the need for ML-based compression or storage-layer computation pushdown. I show that for small cache sizes and homogeneous query sets, a workload-aware layout combined with existing compression methods can be more effective than computation pushdown, without reliance on particular features in the data storage layer.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fragments of Home: Domestic Businesswomen and Collective Motherhood</title>
<link href="https://hdl.handle.net/1721.1/153855" rel="alternate"/>
<author>
<name>Carriker, Bella Carmelita</name>
</author>
<id>https://hdl.handle.net/1721.1/153855</id>
<updated>2024-03-22T03:37:58Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Fragments of Home: Domestic Businesswomen and Collective Motherhood
Carriker, Bella Carmelita
One in three children in the United States lives in a single-parent household; yet the demographic most likely to experience eviction in the U.S. is low-income single mothers. This thesis proposes a framework for thinking about communal family structures, housing security, and intimate domestic space, through the lens of designing for single-mother households in New York City. The housing crisis in cities across the country specifically affects single mothers and children, yet these identities are rarely explicitly designed for: economically, systemically, and architecturally.&#13;
&#13;
Collections of oral histories, from single mothers in my life who have experienced housing insecurity, illustrate the fragments that make up the feeling of home, the ways that architectural detail can reflect motherhood, and the need to examine both domesticity and labor. These spatial fragments, in conjunction with research on existing zoning, planning, development, and affordable housing pathways, inform architectural possibilities for communal housing across three neighborhoods in New York City.&#13;
&#13;
In order to advocate for these kinds of architectural opportunities, and for planning initiatives that are community-specific and family-specific, we have to be able to imagine what these collective structures look like, and how architecture can facilitate a stable relationship between working and living for single-mother households.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning to Update: Using Reinforcement Learning to Discover Policies for List Update</title>
<link href="https://hdl.handle.net/1721.1/153854" rel="alternate"/>
<author>
<name>Quaye, Isabelle A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153854</id>
<updated>2024-03-22T03:38:24Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Learning to Update: Using Reinforcement Learning to Discover Policies for List Update
Quaye, Isabelle A.
The use of machine learning models in algorithm design is a rapidly growing field, often termed learning-augmented algorithms. A notable advancement in this field is the use of reinforcement learning for algorithm discovery. Developing algorithms in this manner offers certain advantages, novelty and adaptability being chief among them. In this thesis, we put reinforcement learning to the task of discovering an algorithm for the list update problem. The list update problem is a classic problem with applications in caching and databases. In the process of uncovering a new list update algorithm, we also prove a competitive ratio for the transposition heuristic, a well-known algorithm for the list update problem. Finally, we discuss key ideas and insights from the reinforcement learning agent that hint toward optimal behavior for the list update problem.
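For reference, a minimal sketch of two classic list update policies under the standard access-cost model (the request sequence is arbitrary):

    # Access cost is the 1-based position of the requested item; the policy
    # then reorders the list: move-to-front, or transpose with the predecessor.
    def serve(requests, items, policy):
        items, cost = list(items), 0
        for x in requests:
            i = items.index(x)
            cost += i + 1
            if policy == "move_to_front":
                items.insert(0, items.pop(i))
            elif policy == "transpose" and i > 0:
                items[i - 1], items[i] = items[i], items[i - 1]
        return cost

    reqs = list("cccba")
    print(serve(reqs, "abc", "move_to_front"), serve(reqs, "abc", "transpose"))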
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pensieve: Microarchitectural Modeling for Security Evaluation</title>
<link href="https://hdl.handle.net/1721.1/153853" rel="alternate"/>
<author>
<name>Yang, Yuheng</name>
</author>
<id>https://hdl.handle.net/1721.1/153853</id>
<updated>2024-03-22T03:22:01Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Pensieve: Microarchitectural Modeling for Security Evaluation
Yang, Yuheng
Traditional modeling approaches in computer architecture aim to obtain an accurate estimation of performance, area, and energy of a processor design. With the advent of speculative execution attacks and their security concerns, these traditional modeling techniques fall short when used for security evaluation of defenses against these attacks.&#13;
&#13;
This thesis presents Pensieve, a security evaluation framework targeting early-stage microarchitectural defenses against speculative execution attacks. At the core, it introduces a modeling discipline for systematically studying early-stage defenses. This discipline allows us to cover a space of designs that are functionally equivalent while precisely capturing timing variations due to resource contention and microarchitectural optimizations. We implement a model checking framework to automatically find vulnerabilities in designs. We use Pensieve to evaluate a series of state-of-the-art invisible speculation defense schemes, including Delay-on-Miss, InvisiSpec, and GhostMinion, against a formally defined security property, speculative non-interference. Pensieve finds Spectre-like attacks in all those defenses, including a new speculative interference attack variant that breaks GhostMinion, one of the latest defenses.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Progress in Parallel Algorithms</title>
<link href="https://hdl.handle.net/1721.1/153852" rel="alternate"/>
<author>
<name>Tontici, Damian</name>
</author>
<id>https://hdl.handle.net/1721.1/153852</id>
<updated>2024-03-22T03:44:41Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Progress in Parallel Algorithms
Tontici, Damian
Parallel computing offers the promise of increased performance over sequential computing, and parallel algorithms are one of its key components. Yet there has been no aggregated or generalized comparative analysis of parallel algorithms. In this thesis, we investigate the field as a whole. We aim to understand the trends in algorithmic progress, improvement patterns, and the importance and interactions of various commonly used metrics. We collect and analyze parallel algorithms solving the problems in our set. We look at four major themes: how parallel algorithms have progressed, including in relation to sequential algorithms and parallel hardware; how the work and span of algorithms influence performance; how problem size and available parallelism affect performance; and what researchers’ observable priorities look like. We find that more problems have had parallel improvements than sequential ones since the ’80s, that most parallel algorithms don’t improve algorithmic complexities, and much more. This research is important for understanding how the field of parallel algorithms has changed over time, and what it looks like now.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Souvenir for the Land of Pagodas</title>
<link href="https://hdl.handle.net/1721.1/153851" rel="alternate"/>
<author>
<name>Allen, Christopher H.</name>
</author>
<id>https://hdl.handle.net/1721.1/153851</id>
<updated>2024-03-22T03:43:57Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Souvenir for the Land of Pagodas
Allen, Christopher H.
In the capital cities of Myanmar (Burma)—present and previous—stand five government-sponsored pagodas, five gold-plated stars linking two architectural constellations. The first of these constellations is composed of the thousands of religious structures that punctuate the landscape of Myanmar, often called the “Land of Pagodas.” The second is composed of the monuments erected by the various regimes that have administered the country’s government since independence in 1948, each of which embodies its own formulation of national identity and history. Occupying this covalent position, these five pagodas are physical manifestations of an ongoing nationalist project of ethnic and religious homogenization that legitimizes itself through historicist narratives, militaristic violence, and the co-opting of religion in service of political power. They are artifacts of propaganda—tools for “propagating the faith”¹ of ethno-nationalism. &#13;
&#13;
As such, these buildings also embody many of the social and political forces that pressured the maternal side of my family to emigrate from the country in the early 1980s, in order to avoid prejudice and persecution as members of marginalized ethnic and religious groups. This thesis therefore operates from a diasporic distance, and is informed by a perspective which lacks the privilege of nostalgia.&#13;
&#13;
Taking the five government-sponsored pagodas as its site of departure, this thesis approaches them as narrative media, and comprises a series of investigations into challenging monumental architecture and repurposing its narrative capacities. If these architectural forms function as narrative tools of the state, how can they be claimed in order to tell alternate stories?&#13;
&#13;
This thesis approaches memory(s) as an inheritance, augmenting personal and ancestral narratives that have been excised from a history whose authority is predicated on their exclusion. It considers historiography as a process of multiplicity—even dissensus—and proposes the diasporic souvenir as a mechanism for disrupting narrative regimes of power. &#13;
&#13;
¹ “Propaganda” (n.), from New Latin prōpāganda, short for Congregātiō dē Prōpāgandā Fidē, “Congregation for Propagating the Faith.” Oxford Advanced Learner’s Dictionary, 10th ed. (Oxford: Oxford University Press, 2020), s.v. “Propaganda.”
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative and Discriminative Models in Phase Transition Prediction</title>
<link href="https://hdl.handle.net/1721.1/153849" rel="alternate"/>
<author>
<name>Zhang, Difei</name>
</author>
<id>https://hdl.handle.net/1721.1/153849</id>
<updated>2024-03-22T03:51:20Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Generative and Discriminative Models in Phase Transition Prediction
Zhang, Difei
Accurate prediction of critical temperatures in phase transitions is crucial for understanding physical systems. Generative and discriminative models offer promising yet distinct approaches. Depending on the level of knowledge of the system, the amount of accessible data, and the computational resources of the experiments, these methods exhibit different accuracy and efficiency. This study aims to comprehensively compare six methods for predicting critical temperatures in the Ising lattice. Leveraging Julia’s capabilities enables efficient parallel computation and benefits from its robust scientific machine learning ecosystem. The evaluation focuses on performance in terms of error rates, computation time, and required data. The goal is to guide researchers in selecting the optimal method within data and computational constraints for precise critical temperature estimation in complex physical systems.
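As a small illustration of the kind of simulation such a comparison rests on (written in Python rather than Julia, with toy parameters far below what a real critical-temperature estimate needs), a Metropolis sampler for the 2D Ising lattice:

    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 16, 2.27                          # lattice size; T near the known Tc (J = kB = 1)
    s = rng.choice([-1, 1], size=(N, N))

    for sweep in range(200):
        for _ in range(N * N):
            i, j = rng.integers(N), rng.integers(N)
            # Periodic-boundary neighbor sum and energy change of a spin flip.
            nn = s[(i + 1) % N, j] + s[i - 1, j] + s[i, (j + 1) % N] + s[i, j - 1]
            dE = 2 * s[i, j] * nn
            if np.exp(-dE / T) > rng.random():   # Metropolis acceptance rule
                s[i, j] = -s[i, j]

    print(abs(s.mean()))                     # magnetization, vanishing above Tc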
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Frictitious Matters</title>
<link href="https://hdl.handle.net/1721.1/153848" rel="alternate"/>
<author>
<name>Amstutz, Caroline</name>
</author>
<id>https://hdl.handle.net/1721.1/153848</id>
<updated>2024-03-22T04:05:26Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Frictitious Matters
Amstutz, Caroline
Wood arrives on site abstracted into rectangular studs; steel beams, once a mineral soup, are extrusions with patented silhouettes; and stone is severed from time, processed into thin shiny slabs. We’ve manipulated our terrestrial matter to conform to smooth expectations: building materials are homogeneous, standard, orthogonal, drawable, and specifiable. We live in the modern fantasy of “frictionlessness,” where material becomes product and smoothness lubricates the flow of capital. Today architects don’t craft; rather, we specify.&#13;
&#13;
Granite, unlike processed ‘plastic’ materials, resists the abstraction of typical architectural production. It is too hard, too heavy, and too heterogeneous for specification. I argue that granite’s high-friction properties – if carefully understood and deliberately worked with – pose new design potentials. Granite’s microstructure causes it to cleave, or split, almost orthogonally. Its surface of crystals self-interlocks, allowing for jamming. And its high mass and friction cause it to pile with a 45-degree angle of repose.&#13;
&#13;
Yet, we would sooner expend immense energy to downgrade granite from a 230-newton piece of stone to a 40-newton piece of concrete than embrace the design potentials of aplasticity. Abandoned for its “nuisance” properties, granite has been relegated to the realm of finish.&#13;
&#13;
Friction-intolerant and smoothness-obsessed, we are estranged from our materials. This thesis presents a methodology to reconsider architectural material culture through the embrace of aplastic material. Material properties are not incidental or inconvenient, but rather invitations for co-authorship. Working directly with Barre Gray™ granite through mock-ups, miniatures, and models, I offer a craft-optimized slowness, implanting the architect in streams of “waste,” rather than extraction, to co-design with a “difficult” material.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neural Operator Models as Applied to Fluid Flow Systems and Real Ocean Dynamics</title>
<link href="https://hdl.handle.net/1721.1/153847" rel="alternate"/>
<author>
<name>Rajagopal, Ellery M.</name>
</author>
<id>https://hdl.handle.net/1721.1/153847</id>
<updated>2024-03-22T03:49:00Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Neural Operator Models as Applied to Fluid Flow Systems and Real Ocean Dynamics
Rajagopal, Ellery M.
Data-driven, deep-learning modeling frameworks have been recently developed for forecasting time series data. Such machine learning models may be useful in multiple domains including the atmospheric and oceanic ones, and in general, the larger fluids community. The present work investigates the possible effectiveness of such deep neural operator models for reproducing and predicting classic fluid flows and simulations of realistic ocean dynamics. We first briefly evaluate the capabilities of such deep neural operator models when trained on a simulated two-dimensional fluid flow past a cylinder. We then investigate their application to forecasting ocean surface circulation in the Middle Atlantic Bight and Massachusetts Bay, learning from high-resolution data-assimilative simulations employed for real sea experiments. We confirm that trained deep neural operator models are capable of predicting idealized periodic eddy shedding. For realistic ocean surface flows and our preliminary study, they can predict several of the features and show some skill, providing potential for future research and applications.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Large Language Model Routing with Benchmark Datasets</title>
<link href="https://hdl.handle.net/1721.1/153846" rel="alternate"/>
<author>
<name>Ou, Anthony C.</name>
</author>
<id>https://hdl.handle.net/1721.1/153846</id>
<updated>2024-03-22T04:06:30Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Large Language Model Routing with Benchmark Datasets
Ou, Anthony C.
There is a rapidly growing number of open-source Large Language Models (LLMs) and benchmark datasets to compare them. While some models dominate these benchmarks, no single model typically achieves the best accuracy in all tasks and use cases. With a new dataset, it can be difficult to determine which LLM is best suited to the task. In this work, we address the challenges associated with selecting the best LLM out of a collection for a new task. To do so, benchmark datasets are repurposed to learn a “router” model for LLM selection, such that the router solves a collection of binary classification tasks. We demonstrate the utility and limitations of learning model routers from various benchmark datasets, where performance improves upon using any single model for all tasks.
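A schematic sketch of the routing idea (random placeholder features and labels, not real benchmark data): train one binary success classifier per model, then route each new task to the model with the highest predicted success probability:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))   # task features, e.g. embeddings (assumed)
    labels = {"model_a": rng.random(200) > 0.4,   # did the model answer correctly?
              "model_b": rng.random(200) > 0.6}

    routers = {m: LogisticRegression().fit(X, y) for m, y in labels.items()}

    def route(x):
        scores = {m: clf.predict_proba(x.reshape(1, -1))[0, 1]
                  for m, clf in routers.items()}
        return max(scores, key=scores.get)

    print(route(rng.normal(size=16)))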
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Personalized Treatment Response Prediction under Dynamic and Time-Varying Treatment Strategies for Sepsis Patients</title>
<link href="https://hdl.handle.net/1721.1/153844" rel="alternate"/>
<author>
<name>Su, Megan</name>
</author>
<id>https://hdl.handle.net/1721.1/153844</id>
<updated>2024-03-22T03:15:37Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Personalized Treatment Response Prediction under Dynamic and Time-Varying Treatment Strategies for Sepsis Patients
Su, Megan
Sepsis is a life-threatening medical emergency in which the body responds improperly to an infection; it is typically treated with intravenous fluids and vasopressors. However, administering the right balance is often difficult because adverse outcomes can be caused by both excessive and insufficient treatment. Many clinical trials have investigated the optimal regime for treating sepsis; however, these studies have been inconclusive and often take a long time to conduct. Thus, personalized treatment response prediction under dynamic, time-varying treatment strategies can be a very useful tool for clinicians when deciding what treatment strategy to administer to a patient.&#13;
&#13;
This thesis builds on G-Net, a deep sequential modeling framework for g-computation that has been evaluated on response prediction under dynamic and time-varying strategies at the population level. Utilizing real-world data collected from the intensive care unit (ICU), we evaluate the performance of various deep learning implementations of G-Net on individual-level response prediction and compare their performance on prediction under the observational treatment regime. We then apply G-Net to counterfactual prediction under alternative regimes of interest and show that G-Net is able to predict patient covariates and outcomes that are physiologically plausible and match clinical intuition. Our work showcases the potential of G-Net as a tool for personalized treatment response prediction to aid clinicians in determining optimal therapy for sepsis patients in the ICU.
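A toy sketch of the g-computation idea underlying this approach (the dynamics and regime below are simple stand-ins for G-Net's learned sequential models): simulate covariates forward under a counterfactual strategy and average over Monte Carlo rollouts:

    import numpy as np

    rng = np.random.default_rng(0)

    def transition(x, a):
        # Toy covariate dynamics: treatment a nudges the state upward.
        return 0.9 * x + 0.5 * a + 0.1 * rng.normal()

    def strategy(x):
        # Example dynamic regime: treat whenever the covariate falls below zero.
        return 0.0 if x >= 0 else 1.0

    def g_computation(x0, horizon=24, n_samples=500):
        finals = []
        for _ in range(n_samples):
            x = x0
            for _ in range(horizon):
                x = transition(x, strategy(x))
            finals.append(x)
        return float(np.mean(finals))

    print(g_computation(-1.0))   # expected outcome under the counterfactual regime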
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Archi-Culture or Agri-Tecture: The Garden in The Machine</title>
<link href="https://hdl.handle.net/1721.1/153843" rel="alternate"/>
<author>
<name>Brazier, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/153843</id>
<updated>2024-03-22T03:17:48Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Archi-Culture or Agri-Tecture: The Garden in The Machine
Brazier, Justin
Since the late 19th century, urban agriculture has served its respective context in more ways than just food production. The urban farm became the center of community, an essential democratic space where neighbors from all walks of life could share stories, recipes, farming practices, and resources with one another. Addressing a core aspect of people’s basic needs, urban farms and the sharing of agriculture are an essential act of self-reliance, self-preservation, and resistance. Planting their identities into the earth, this urban landscape has also become a reflection of culture, a passing on of tradition, and a connection to a homeland some immigrants could not get back to; through native culinary practices and the ability to grow foods of familiarity, people have been able to carry their history with them. In this sense, on a small scale, the community garden has become a central node of urban civic exchange. On the precipice of exponential growth, the necessity for urban grow space has never been more pressing. Moving beyond our typical urban agriculture typologies, the implementation of year-round growing can and has proven to expand the output of existing urban land already dedicated to closing the gap between where we grow our food and where we need it most. Interior urban growing spaces have recently been on the rise but have typically been implemented under the currently limited typologies of the standard ready-made greenhouse or the hyper-productive food lab. Both miss the essence of what made urban agriculture different from rural agriculture: the people. The ethos baked into the urban farm has not yet translated into the urban greenhouse, a largely generic structure that has given farmers and communities the opportunity to increase crop yield and expand operations beyond their normal seasons. It is a key innovation as food security within urban contexts becomes a more prevalent issue, but this expansion of production has come at the expense of the atmosphere that made the urban farm what it is: the legibility of authorship, of collaboration, of identity. The greenhouse kit is a generic solution, cheap, easy to construct, and pre-engineered, making it the obvious choice for the grassroots efforts that urban agricultural endeavors tend to be. But can we take a greenhouse kit that exists everywhere and develop a reconstruction so that it reacts to the constraints of its location? What can it hold to take on the dual identity of the garden? If we are going to move the greenhouse into the city, do we have to ask it to do more than just produce food? With access to infrastructure, can we push the community farm to its full potential, a community center?
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Old Is Now?</title>
<link href="https://hdl.handle.net/1721.1/153842" rel="alternate"/>
<author>
<name>Giorgis, Adriana</name>
</author>
<id>https://hdl.handle.net/1721.1/153842</id>
<updated>2024-03-22T03:04:14Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">How Old Is Now?
Giorgis, Adriana
L’Aquila is a city without new buildings. Founded in the early 13th century on a fault line, the city has been destroyed by earthquakes every three hundred years, and its buildings are repaired on the same cycle. In l’Aquila, acts of construction and maintenance are one and the same. Through the centuries, buildings in l’Aquila have been reinforced with punctual, visible acts of support. Tension ties, cornerstones, and thickened walls are the language of architecture, producing both aesthetic and spatial implications. In this city, to maintain is to remake, to build is to preserve, to care is to create. When the life and life expectancy of structures are effectively infinite, there can be no differentiation between repair and construction.&#13;
&#13;
This project dwells on l’Aquila’s architectural value systems. The absence of ‘new’ buildings in the city is made possible by a culture of collective acts of repair. In the long now, kindnesses reinforce, prop up, and adjust materials that have borne witness to historical events and familial genealogies. What might it mean for the discipline to center maintenance the way it has been centered in l’Aquila? What are the ways that the architect-maintainer conceives of originality? Of design? How, too, might they care for the ongoing present and future of l’Aquila?
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>3D Self-Localization of Drones using a Single Millimeter-Wave Anchor</title>
<link href="https://hdl.handle.net/1721.1/153836" rel="alternate"/>
<author>
<name>Lam, Maisy Lilian</name>
</author>
<id>https://hdl.handle.net/1721.1/153836</id>
<updated>2024-03-22T03:33:23Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">3D Self-Localization of Drones using a Single Millimeter-Wave Anchor
Lam, Maisy Lilian
We present the design, implementation, and evaluation of MiFly, a self-localization system for autonomous drones that works across indoor and outdoor environments, including low-visibility, dark, and GPS-denied settings.&#13;
&#13;
MiFly performs 6DoF self-localization by leveraging a single millimeter-wave (mmWave) anchor in its vicinity, even if that anchor is visually occluded. Millimeter-wave signals are used in radar and 5G systems and can operate in the dark and through occlusions. MiFly introduces a new mmWave anchor design and mounts lightweight, high-resolution mmWave radars on a drone. By jointly designing the localization algorithms and the novel low-power mmWave anchor hardware (including its polarization and modulation), the drone is capable of high-speed 3D localization. Furthermore, by intelligently fusing the location estimates from its mmWave radars and its IMUs, it can accurately and robustly track its 6DoF trajectory.&#13;
&#13;
We implemented and evaluated MiFly on a DJI drone. We demonstrate a median localization error of 7 cm and a 90th-percentile error below 15 cm, even when the anchor is fully (visually) occluded from the drone.
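As a toy illustration of the fusion idea (a 1D complementary filter with made-up noise levels and gain, much simpler than MiFly's estimator): dead-reckon with IMU acceleration for smooth short-term motion, then correct drift with absolute anchor-based fixes:

    import numpy as np

    rng = np.random.default_rng(0)
    dt, gain = 0.01, 0.05
    pos_est, vel_est, true_pos = 0.0, 1.0, 0.0

    for _ in range(1000):
        true_pos += 1.0 * dt                        # ground truth moves at 1 m/s
        accel = 0.2 * rng.normal()                  # noisy IMU acceleration
        vel_est += accel * dt
        pos_est += vel_est * dt                     # smooth, drifting prediction
        radar_fix = true_pos + 0.07 * rng.normal()  # anchor-based fix (~7 cm noise)
        pos_est += gain * (radar_fix - pos_est)     # blend in the absolute fix

    print(abs(pos_est - true_pos))                  # residual error in meters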
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Earth Mission Control: A Virtual Reality Platform for Bridging the Climate Science Communication Gap</title>
<link href="https://hdl.handle.net/1721.1/153834" rel="alternate"/>
<author>
<name>Cherner, Phillip</name>
</author>
<id>https://hdl.handle.net/1721.1/153834</id>
<updated>2024-03-22T04:01:34Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Earth Mission Control: A Virtual Reality Platform for Bridging the Climate Science Communication Gap
Cherner, Phillip
Data visualizations are incredibly powerful tools for engaging users with increasingly complex and unfamiliar information about the Earth’s changing climate, yet scientists often use only one tool or modality, such as two-dimensional figures and graphs, to communicate their ideas about climate data. With the rise of commercially available virtual reality (VR), we can leverage the affordances of immersive technology to integrate multiple modalities into a cohesive experience. In this thesis, I present the design and implementation of Earth Mission Control (EMC), an immersive multi-user VR data visualization platform designed to enable scientists and educators to more effectively communicate their data-driven stories of climate impacts to policymakers and community members, helping them deepen their understanding of their community and the climate impacts it faces. EMC combines existing visualization modalities, such as NASA’s Hyperwalls, spherical projections (e.g., NOAA’s Science on a Sphere), map tables, virtual environments, 360° video, and human-scale immersive experiences, into an engaging and highly interactive VR environment, leveraging each modality’s unique strengths. The design and creation of an AI-powered virtual assistant is also described as a way to add immersion, more natural interactions, and increased presence. Initial testing of the platform’s potential effectiveness in providing a deeper understanding of localized climate issues, available adaptation strategies, and personal actions is also discussed.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Changing the Course: Reimagining Switzerland’s Aging Nuclear Infrastructure</title>
<link href="https://hdl.handle.net/1721.1/153833" rel="alternate"/>
<author>
<name>Reinhard, Ellen Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/153833</id>
<updated>2024-03-22T03:35:32Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Changing the Course: Reimagining Switzerland’s Aging Nuclear Infrastructure
Reinhard, Ellen Marie
Countries worldwide have been experiencing a rise in the number of decommissioned nuclear power plants due to the infrastructure’s finite lifespan of 20 to 60 years. Consequently, nearly all of the 410 nuclear power plants operating globally today will soon reach the end of their operating lives, and an additional 263 have already ceased operations. Of those, only a few have been repurposed through programs aimed at reintegrating the isolated sites into their existing contexts. This thesis proposes to change that course by reimagining alternative ways of adaptively reusing the remaining infrastructural buildings to facilitate the process of reconnection.&#13;
&#13;
The thesis centers on Switzerland, home to some of the world’s oldest nuclear power plants. One of them, based in Mühleberg, is the only decommissioned nuclear power plant in Switzerland to date and is therefore a pioneer of this process. The lengthy and costly process of safe nuclear fuel removal and building demolition, spanning 15 years and $3.2Bn USD, runs until 2034. Following that, the remaining greenfield, currently surrounded by agricultural land, would be available for new purposes.&#13;
&#13;
The proposal imagines transforming the nuclear power plant in Mühleberg into an accessible pumped hydro storage system for energy storage. In addition, indoor hydroponics and outdoor agricultural land serve as extensions for the longstanding agricultural community. Beyond economic uses, recreational spaces are dispersed throughout the site for larger community engagement and participation.&#13;
&#13;
Zooming back out to the larger picture of aging nuclear energy infrastructure, this thesis applies the Mühleberg narrative to other affected sites globally. It also reflects on potential opportunities that arise when considering scalability.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Online Auctions with Multiple Items</title>
<link href="https://hdl.handle.net/1721.1/153829" rel="alternate"/>
<author>
<name>Zhang, Wei</name>
</author>
<id>https://hdl.handle.net/1721.1/153829</id>
<updated>2024-03-22T03:20:14Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Online Auctions with Multiple Items
Zhang, Wei
Motivated by a recent switch of online ad exchanges from second-price auctions to first-price auctions, this thesis studies computational problems related to how an advertiser can select bids to maximize her cumulative reward when participating in a sequence of single-item first-price auctions, or in several first-price auctions that take place in parallel. In particular, we study the problem of regret minimization in this setting, extending prior work for second-price auctions. We show that sub-linear regret cannot be achieved when the values are continuous and two or more single-item auctions take place per round. On the other hand, we show that if the values are discretized, the regret can be made to grow sub-linearly, and this can be attained in a computationally efficient manner using a best-response oracle. Finally, when there is a single first-price auction per round, we can attain tight regret bounds in two settings where additional information about the opponent bids is available in the form of hints.&#13;
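&#13;
As a hedged sketch of the discretization idea, the following Python snippet runs the standard Exp3 bandit algorithm over a finite bid grid against a uniformly random opponent. The opponent model, grid, and learning parameters are invented; this is not the thesis's best-response-oracle construction.&#13;
&#13;
import numpy as np

rng = np.random.default_rng(0)
value = 1.0
bids = np.linspace(0.0, 1.0, 11)                 # discretized bid grid
K, T, eta, gamma = bids.size, 20000, 0.01, 0.1
w = np.zeros(K)                                  # Exp3 log-weights

for _ in range(T):
    p = np.exp(w - w.max()); p /= p.sum()
    p = (1 - gamma) * p + gamma / K              # mix in uniform exploration
    i = rng.choice(K, p=p)
    opp = rng.uniform(0.0, 1.0)                  # highest opposing bid this round
    reward = (value - bids[i]) if bids[i] > opp else 0.0
    w[i] += eta * reward / p[i]                  # importance-weighted update

print("learned bid:", bids[np.argmax(w)])        # tends toward the 0.5 optimum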
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Emergent Gaits with Decentralized Phase Oscillators:&#13;
on the role of Observations, Rewards, and Feedback</title>
<link href="https://hdl.handle.net/1721.1/153828" rel="alternate"/>
<author>
<name>Zhang, Jenny L.</name>
</author>
<id>https://hdl.handle.net/1721.1/153828</id>
<updated>2024-03-22T03:03:36Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Learning Emergent Gaits with Decentralized Phase Oscillators:&#13;
on the role of Observations, Rewards, and Feedback
Zhang, Jenny L.
We present a minimal phase oscillator model for learning quadrupedal locomotion. Each of the four oscillators is coupled only to itself and its corresponding leg through local feedback of the ground reaction force, which we interpret as an observer feedback gain. The oscillator itself is interpreted as a latent contact state-estimator. Through a systematic ablation study, we show that the combination of phase observations, simple phase-based rewards, and the local feedback dynamics induces policies that exhibit emergent gait preferences, while using a reduced set of simple rewards, and without prescribing a specific gait.
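&#13;
A hedged toy of the decentralized structure in Python: each leg's phase advances independently, modulated only by that leg's own ground-reaction force. The dynamics, gains, and stance rule below are invented stand-ins, not the learned policy.&#13;
&#13;
import numpy as np

def step_phase(phi, grf, omega=4.0 * np.pi, k=2.0, dt=0.002):
    # Each leg's phase advances at omega, modulated by local
    # ground-reaction-force feedback (an observer-gain interpretation).
    return (phi + (omega - k * grf * np.sin(phi)) * dt) % (2.0 * np.pi)

phi = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])   # one phase per leg
for _ in range(5000):
    grf = np.signbit(np.sin(phi)).astype(float)          # stance when sin(phi) is negative
    phi = step_phase(phi, grf)
print("final leg phases:", phi)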
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Open-Set Object Based Data Association</title>
<link href="https://hdl.handle.net/1721.1/153827" rel="alternate"/>
<author>
<name>Magoun, Tim Y.</name>
</author>
<id>https://hdl.handle.net/1721.1/153827</id>
<updated>2024-03-22T04:05:46Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Open-Set Object Based Data Association
Magoun, Tim Y.
Representing the world using sparse objects allows for compact and semantically meaningful maps in simultaneous localization and mapping (SLAM). Traditionally, object detectors trained on a specific set of objects, such as the YCB objects, are used to provide input to the data association problem, which limits the scope of the system to environments it has been trained on. With advancements in foundation models, we can extend this representation to objects that are not known a priori and do not have a labeled category during training. This thesis presents a system that creates data associations between open-set objects using an RGB-D camera, and shows how it is used in a sparse object SLAM system. We show comparable trajectory performance to traditional SLAM systems while being more adaptable to out-of-distribution objects.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Railroad line capacity, scheduling, and dispatching models : state-of-the-art and possible extensions</title>
<link href="https://hdl.handle.net/1721.1/153807" rel="alternate"/>
<author>
<name>Little, Patrick.</name>
</author>
<id>https://hdl.handle.net/1721.1/153807</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">Railroad line capacity, scheduling, and dispatching models : state-of-the-art and possible extensions
Little, Patrick.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1982; Bibliography: leaves 104-105.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laminar head-on flame quenching in a spherical combustion bomb</title>
<link href="https://hdl.handle.net/1721.1/153806" rel="alternate"/>
<author>
<name>Sellnau, Mark Charles.</name>
</author>
<id>https://hdl.handle.net/1721.1/153806</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">Laminar head-on flame quenching in a spherical combustion bomb
Sellnau, Mark Charles.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1981; Includes bibliographical references.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and evaluation of an electrocutaneous dynamic phantom sensation</title>
<link href="https://hdl.handle.net/1721.1/153805" rel="alternate"/>
<author>
<name>Serocki, John Harvey.</name>
</author>
<id>https://hdl.handle.net/1721.1/153805</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1981-01-01T00:00:00Z</published>
<summary type="text">Development and evaluation of an electrocutaneous dynamic phantom sensation
Serocki, John Harvey.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1981; Includes bibliographical references.
</summary>
<dc:date>1981-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nuclear waste reprocessing and disposal for Iran : an assessment.</title>
<link href="https://hdl.handle.net/1721.1/153801" rel="alternate"/>
<author>
<name>Sinaki, Ali Mohammad.</name>
</author>
<id>https://hdl.handle.net/1721.1/153801</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Nuclear waste reprocessing and disposal for Iran : an assessment.
Sinaki, Ali Mohammad.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1977; Includes bibliographical references.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Weight and cost impact of large stand off distances on ships.</title>
<link href="https://hdl.handle.net/1721.1/153799" rel="alternate"/>
<author>
<name>Sims, Philip Johns.</name>
</author>
<id>https://hdl.handle.net/1721.1/153799</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Weight and cost impact of large stand off distances on ships.
Sims, Philip Johns.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1977; Bibliography: leaves 166-167.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automatic Baleen Whale Detection and 2D Localization Using a Network of Unsynchronized Passive Acoustic Sensors</title>
<link href="https://hdl.handle.net/1721.1/153797" rel="alternate"/>
<author>
<name>Goldwater, Mark Harry</name>
</author>
<id>https://hdl.handle.net/1721.1/153797</id>
<updated>2024-03-16T03:25:49Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Automatic Baleen Whale Detection and 2D Localization Using a Network of Unsynchronized Passive Acoustic Sensors
Goldwater, Mark Harry
Underwater acoustics is a powerful tool for learning about the ocean's soniferous marine life. However, most modern acoustic sensing systems consist of expensive arrays of time-synchronized recorders which require a crewed research vessel and significant expertise to deploy, operate, and recover. Recently, there has been a growing corpus of research related to algorithms for low-cost and accessible acoustic hardware. Deep learning methods have shown great promise when applied to underwater acoustics inverse problems. While many signal processing or physics-based algorithms exhibit long run times and require manual labor to extract signals of interest, tune parameters, and visually verify the results, an appropriately trained neural network can quickly process data with no human supervision. Both low-cost passive acoustic monitoring (PAM) sensing platforms and algorithms that can analyze massive amounts of raw data are critical to accessible and scalable approaches in ocean acoustic monitoring.&#13;
&#13;
This thesis presents a method for detection and 2D (latitude-longitude) localization of underwater acoustic sources without requiring synchronized sensors. The signals of interest here are the dispersive low-frequency impulsive gunshot vocalizations of North Pacific and North Atlantic right whales (NPRWs, NARWs). In shallow-water channels, the time-frequency representation of the received signal is strongly dependent on source-receiver range, making these impulses ideal candidates for range-based localization. The first step in the localization pipeline uses a temporal convolutional network (TCN) to simultaneously detect gunshot vocalizations and predict their ranges. Trained on spectrograms of synthetic data simulated in a variety of environments, the TCN is applied to PAM data from moorings in the Bering Sea. Gunshots are detected with high precision, and the range estimates are comparable to those estimated using traditional physics-based processing. Both methods use a minimal set of a priori environmental information including water column depth, sound speed, and density.&#13;
&#13;
Depending on the sensor layout, the TCN may need to scan large windows of data, so the number of unique acoustic sources is unknown. To automatically associate and localize range measurements, the proposed method seeks subsets of measurements across unique sensors that are internally consistent. For every considered measurement subset, locations are estimated with single constituent measurements left out and checked to be sufficiently close to the excluded measurement's set of potential locations. If a measurement subset is entirely consistent in this manner, the measurements are added as neighboring nodes in a graph-based representation, and strongly connected components are used to determine data associations and calculate the final source location estimates. Informed by the methods developed in this thesis, an array of low-cost TOSSIT moorings was deployed in Cape Cod Bay and used to collect experimental PAM data. The localization results are comparable to a similar physics-based inversion approach. Overall, this thesis aims to fill a gap in acoustic data processing methods where data from a low-cost network of unsynchronized acoustic sensors are fused to localize acoustic sources. The presented methods and data processing pipeline demonstrate the great potential of low-cost acoustic sensing systems.&#13;
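&#13;
As a hedged sketch of the final localization step, the snippet below solves range-only 2D localization by nonlinear least squares over an invented sensor layout; the TCN ranging and graph-based association described above are not reproduced.&#13;
&#13;
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
sensors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 5.0], [5.0, 6.0]])  # km, invented
source = np.array([2.2, 3.1])
ranges = np.linalg.norm(sensors - source, axis=1) + rng.normal(0, 0.05, 4)

# Residual: predicted sensor-to-source ranges minus the measured ones.
fit = least_squares(
    lambda p: np.linalg.norm(sensors - p, axis=1) - ranges, x0=[1.0, 1.0])
print("estimated source (km):", fit.x)   # close to (2.2, 3.1)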
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Social Dilemmas in Multi-Agent Reinforcement Learning with Formal Contracting</title>
<link href="https://hdl.handle.net/1721.1/153795" rel="alternate"/>
<author>
<name>Christoffersen, Phillip Johannes Kerr</name>
</author>
<id>https://hdl.handle.net/1721.1/153795</id>
<updated>2024-03-16T03:12:53Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Mitigating Social Dilemmas in Multi-Agent Reinforcement Learning with Formal Contracting
Christoffersen, Phillip Johannes Kerr
As society deploys more and more sophisticated artificial intelligence (AI) agents, it will be increasingly necessary for such agents, while pursuing their own objectives, to coexist in common environments in the physical or digital worlds. This may pose a challenge if the agents’ objectives conflict with each other; in the worst case, this can prevent any given agent from fulfilling its own objectives (e.g., self-driving cars in a traffic jam). Situations such as these are termed social dilemmas. &#13;
&#13;
In this thesis, it is demonstrated that providing RL agents with the software infrastructure to precommit to zero-sum incentive modifications &#13;
&#13;
1. Induces maximal social welfare in theory; and &#13;
2. When implemented with deep multi-agent reinforcement learning (MARL), also avoids social dilemmas in practice.&#13;
&#13;
Specifically, a novel algorithmic framework is proposed, termed formal contracting, which is formalized, studied game-theoretically, and investigated empirically. In formal contracting, before engaging in a given shared environment, agents are given the opportunity to negotiate a binding modification to all agents’ objective functions, in order to provide incentives for the optimal use of shared resources. Within this framework, at all subgame-perfect equilibria (SPE), agents will in fact maximize social welfare, that is, the sum of all agent objectives in the original environment. Moreover, studies in simple domains, such as the classic prisoner’s dilemma, and in more complex ones, such as dynamic simulations of pollution management, show that this algorithmic framework can be implemented in MARL and does indeed lead to outcomes with superior welfare in social dilemmas. This thesis concludes with discussions of related work, limitations of the approach, and future work, particularly involving scaling this methodology to larger problem instances containing more agents than studied here.&#13;
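&#13;
A hedged numeric illustration of the zero-sum contracting idea on the classic prisoner's dilemma: a transfer tau from a defector to a cooperator leaves total welfare unchanged but removes the incentive to defect. The game and transfer size are illustrative, not the thesis's MARL implementation.&#13;
&#13;
R, S, T, P = 3, 0, 5, 1   # classic prisoner's dilemma payoffs

def payoffs(a1, a2, tau):
    # a = 0 cooperate, a = 1 defect; tau moves from a defector to a cooperator.
    base = {(0, 0): (R, R), (0, 1): (S, T), (1, 0): (T, S), (1, 1): (P, P)}[(a1, a2)]
    shift = tau * (a2 - a1)          # zero-sum: one agent's gain is the other's loss
    return base[0] + shift, base[1] - shift

for tau in (0, 3):
    u_cc = payoffs(0, 0, tau)[0]     # payoff from mutual cooperation
    u_dc = payoffs(1, 0, tau)[0]     # payoff from unilateral defection
    print(f"tau={tau}: cooperating pays {u_cc}, defecting pays {u_dc}")
# With tau=3, defecting pays 2, below the cooperative 3, so (C, C) is stable.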
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Monte Carlo Methods for Motion Planning and Goal Inference</title>
<link href="https://hdl.handle.net/1721.1/153789" rel="alternate"/>
<author>
<name>Kondic, Jovana</name>
</author>
<id>https://hdl.handle.net/1721.1/153789</id>
<updated>2024-03-16T03:14:29Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Monte Carlo Methods for Motion Planning and Goal Inference
Kondic, Jovana
Human cognition exhibits remarkable abilities in reasoning about the plans of others. Even infants can swiftly generate effective predictions from minimal observations. This capability largely stems from our ability to employ specific assumptions about others’ decision-making, while considering potential alternative interpretations that align with reality. Such versatility is particularly crucial in navigation tasks, where multiple strategies exist for avoiding obstacles and reaching a target location. A sophisticated autonomous system should, therefore, be capable of: (1) acknowledging the inherent uncertainty in various obstacle avoidance strategies; and (2) predicting motion plans in a way that recognizes the different possibilities in a given goal-driven navigation scenario. To address these needs, we introduce a framework that captures the stochastic nature of motion planning and prediction through Monte Carlo sampling techniques. We ensure (1) by shifting the focus from pure trajectory optimization to generating a variety of near-optimal paths, and achieve (2) by developing a prediction method capable of capturing the inherent multimodality in the distribution over goal-driven trajectories. For the former, we utilize Markov Chain Monte Carlo (MCMC) methods to obtain trajectory samples that approximate the Boltzmann distribution, a common model for approximate rationality, which incorporates a cost function derived from trajectory optimization literature. For the latter, we develop a Bayesian model of the observed agent, and utilize Bayesian inference to reason about the underlying end goals of their movement. We propose a sequential Monte Carlo method that adapts the MCMC trajectory sampling to construct plausible hypotheses about the agent’s motion plan and then updates these hypotheses in real-time with new observations. In experiments conducted within continuous, obstacle-laden environments, we demonstrate our framework’s effectiveness for both diversity-aware motion planning and robust inference of latent goals from partial, noisy observations.
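&#13;
As a hedged minimal instance of Boltzmann trajectory sampling, the Python sketch below runs Metropolis-Hastings over waypoint paths with an invented cost (path length plus obstacle penalty) and temperature; the sequential Monte Carlo goal-inference layer is not shown.&#13;
&#13;
import numpy as np

rng = np.random.default_rng(1)
start, goal = np.array([0.0, 0.0]), np.array([5.0, 5.0])
obstacle, radius = np.array([2.5, 2.5]), 1.0

def cost(path):
    # Path length plus a penalty for waypoints inside the obstacle.
    length = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()
    gap = radius - np.linalg.norm(path - obstacle, axis=1)
    return length + 50.0 * np.clip(gap, 0.0, None).sum()

path = np.linspace(start, goal, 8)        # straight line through the obstacle
c, temp = cost(path), 0.5
for _ in range(5000):
    prop = path.copy()
    i = rng.integers(1, len(path) - 1)    # endpoints stay fixed
    prop[i] += rng.normal(0.0, 0.3, 2)
    c_new = cost(prop)
    if np.exp(min(0.0, (c - c_new) / temp)) > rng.random():
        path, c = prop, c_new             # Metropolis accept
print("final cost:", c)                   # a near-optimal detour, not the one optimum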
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Phase Retrieval: A Robust and Efficient Multidimensional Phase Retrieval Algorithm</title>
<link href="https://hdl.handle.net/1721.1/153788" rel="alternate"/>
<author>
<name>Brabec, Cole</name>
</author>
<id>https://hdl.handle.net/1721.1/153788</id>
<updated>2024-03-16T03:30:49Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Fast Phase Retrieval: A Robust and Efficient Multidimensional Phase Retrieval Algorithm
Brabec, Cole
We present the first phase retrieval algorithm with a set of deterministic recovery guarantees. We show that for a class of objects known as "Schwarz Objects", the algorithm is guaranteed to reconstruct the object given only the magnitudes of its discrete Fourier transform. We present numerical evidence that the algorithm additionally succeeds quite often for non-Schwarz objects. We also present a set of measurement matrices for which the algorithm is guaranteed to recover any object. We derive the algorithm by converting instances of the phase-retrieval problem to the Schwarz problem and refine the solution with local optimization. The result is an algorithm that is fast, universal, and robust to noise.&#13;
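&#13;
For context only, here is a minimal Python sketch of the classic error-reduction iteration that phase retrieval work builds on: alternate between imposing the measured Fourier magnitudes and the object-domain constraints. This is not the thesis's algorithm, and the object, support, and iteration count are invented.&#13;
&#13;
import numpy as np

rng = np.random.default_rng(3)
n, support = 64, 16
x = np.zeros(n)
x[:support] = rng.random(support)       # true nonnegative object, known support
mag = np.abs(np.fft.fft(x))             # measured Fourier magnitudes

est = rng.random(n)
for _ in range(2000):
    F = np.fft.fft(est)
    F = mag * np.exp(1j * np.angle(F))  # impose measured magnitudes
    est = np.fft.ifft(F).real
    est[support:] = 0.0                 # impose support constraint
    est = np.clip(est, 0.0, None)       # impose nonnegativity
# Error reduction can stagnate in local minima; random restarts help.
print("relative error:", np.linalg.norm(est - x) / np.linalg.norm(x))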
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Perturbation-invariant Speech Representation Learning by Online Clustering</title>
<link href="https://hdl.handle.net/1721.1/153784" rel="alternate"/>
<author>
<name>Chang, Heng-Jui</name>
</author>
<id>https://hdl.handle.net/1721.1/153784</id>
<updated>2024-03-16T04:08:33Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Perturbation-invariant Speech Representation Learning by Online Clustering
Chang, Heng-Jui
Despite success across various tasks, self-supervised speech models face significant challenges in enhancing content-related performance with unlabeled data, and they require substantial computational resources. Meanwhile, learning from clustered discrete units has been shown to facilitate accurate phonetic representations. Thus, this thesis investigates speaker- and noise-invariant speech representations. First, Speaker-invariant Clustering (Spin) is proposed to extract content representations through online clustering and speaker-invariant cross-view prediction. Second, Robust Spin (R-Spin) is devised to extend Spin to handle more distorted speech signals by leveraging acoustic pieces. Furthermore, this thesis includes a diverse set of evaluation and visualization techniques to quantitatively and qualitatively analyze the perturbation invariance of the proposed methods. This thesis offers approaches to producing perturbation-invariant speech representations and deeply investigates the characteristics of the learned representations, providing insights into these models and opening possibilities for future extensions.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Stabilizing Controllers for High-dimensional Unknown Systems and Networked Dynamical Systems</title>
<link href="https://hdl.handle.net/1721.1/153783" rel="alternate"/>
<author>
<name>Zhang, Songyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/153783</id>
<updated>2024-03-16T04:05:50Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Learning Stabilizing Controllers for High-dimensional Unknown Systems and Networked Dynamical Systems
Zhang, Songyuan
Designing stabilizing controllers is a fundamental challenge in autonomous systems, particularly for high-dimensional, nonlinear systems that cannot be accurately modeled using differential equations, where scalability and model transparency are limiting, and for large-scale networked dynamical systems, where scalability and generalizability are limiting. To address this challenge, we develop (1) a Lyapunov-based guided exploration framework to learn stabilizing controllers for high-dimensional unknown systems; and (2) a compositional neural certificate based on ISS (Input-to-State Stability) Lyapunov functions for finding decentralized stabilizing controllers in large-scale networked dynamical systems. Comprehensive experiments show that the proposed methods outperform prior work in terms of stability, especially in high-dimensional unknown systems and large-scale networked systems.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>On the Potential Impact of Curved Meshing for Higher-order Adaptive Mesh Simulations</title>
<link href="https://hdl.handle.net/1721.1/153782" rel="alternate"/>
<author>
<name>Womack, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/153782</id>
<updated>2024-03-16T03:32:53Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">On the Potential Impact of Curved Meshing for Higher-order Adaptive Mesh Simulations
Womack, Christopher
Higher-order, adaptive finite element methods have demonstrated the ability to significantly reduce the human and computational cost of accurately approximating the solution to partial differential equations (PDEs). In this thesis, we consider the potential advantages of incorporating higher-order element shapes, i.e. curved meshes, into an adaptive process through the use of a mesh-based, geometric mapping. While previous work has considered the generation of curved meshes to account for geometry curvature, less research has attempted to curve meshes to control error in an adaptive process. This work considers adaptive finite element methods for the advection-diffusion PDE in both Cartesian and polar coordinate systems, with the polar coordinate transformation serving to demonstrate the potential benefits of incorporating curvature into an adaptive meshing process. Results are presented for both uniform and adaptive refinement, considering first a volume output problem, followed by a boundary output problem; analytic solutions to these canonical problems are derived and presented as well. The results of this investigation demonstrate that, for each polynomial order, discretization, and output functional tested, solving the advection-diffusion equation in a polar coordinate system achieves significantly higher levels of accuracy in computing output quantities of interest. These results also showcase the potential improvements that are possible with an adaptive process that incorporates element curving to control error. Additionally, adjoint analysis performed in this work shows how the form of the primal output functional affects the adjoint PDE and boundary conditions.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multi-Agent Relative Pose Estimation with Ultra-Wideband Ranging</title>
<link href="https://hdl.handle.net/1721.1/153779" rel="alternate"/>
<author>
<name>Fishberg, Andrew</name>
</author>
<id>https://hdl.handle.net/1721.1/153779</id>
<updated>2024-03-16T03:01:08Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Multi-Agent Relative Pose Estimation with Ultra-Wideband Ranging
Fishberg, Andrew
Inter-agent relative localization is critical for any multi-robot system operating in the absence of external positioning infrastructure or prior environmental knowledge. Motivated by the applications of nuclear non-proliferation, radiological search, and radiological mapping, this thesis explores leveraging multiple ultra-wideband (UWB) ranging sensors to produce frequent inter-agent pose estimates with minimal communication overhead. This work is intended as a component of a larger multi-agent simultaneous localization and mapping (SLAM) system (also known as collaborative SLAM or CSLAM), where persistent UWB-based inter-agent pose estimates provide a valuable alternative source of inter-agent loop closures. By collecting and analyzing real data, we develop improved sensor models, which in turn inform our algorithm design process; thus, this work produces results competitive with or better than state-of-the-art approaches with significantly less overall communication. By comparison, prior work typically supplements noisy UWB range measurements with additional continuously transmitted data, such as odometry, leading to potential scaling issues with increased team size and/or decreased communication network capability.&#13;
&#13;
This thesis’s main technical contributions are as follows: (1) Exploration of current commercially available off-the-shelf (COTS) UWB devices for use in mobile robotics. By analyzing real data, commonly overlooked sensor quirks are identified and addressed through our improved sensor models. (2) Development and testing of a novel 2D relative pose estimation system based on trilateration, leveraging multiple UWB ranging sensors per agent. (3) Extension of said system to 3D environments. (4) A list of recommendations and continuations for future work.&#13;
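&#13;
A hedged geometric sketch of why multiple UWB ranges per agent constrain relative pose: with two antennas per agent, four antenna-to-antenna ranges over-determine the three pose unknowns (x, y, heading). The layout and noise-free ranges below are invented; this is not the thesis's estimator.&#13;
&#13;
import numpy as np
from scipy.optimize import least_squares

L = 0.5                                          # invented antenna baseline (m)
mine = np.array([[-L / 2, 0.0], [L / 2, 0.0]])   # my two antennas (world frame)

def theirs(x, y, th):
    # Neighbor's two antennas, given its 2D pose (x, y, heading th).
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return np.array([x, y]) + mine @ R.T

true_pose = (3.0, 2.0, 0.7)
meas = np.array([np.linalg.norm(a - b) for a in mine for b in theirs(*true_pose)])

def residual(p):
    pred = np.array([np.linalg.norm(a - b) for a in mine for b in theirs(*p)])
    return pred - meas

# Recovers the true pose from a nearby initial guess.
print(least_squares(residual, x0=[1.0, 1.0, 0.0]).x)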
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using source code to solve control problems</title>
<link href="https://hdl.handle.net/1721.1/153777" rel="alternate"/>
<author>
<name>Hernandez Cano, Leonardo</name>
</author>
<id>https://hdl.handle.net/1721.1/153777</id>
<updated>2024-03-16T04:01:08Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Using source code to solve control problems
Hernandez Cano, Leonardo
Planning for long-horizon tasks in environments with non-discrete state spaces and dynamics with discontinuities remains a core challenge in robotics. In this setting, fully automatic search methods do not yet scale to many real-world problems of interest; because of this, specialized planning algorithms (e.g., hierarchical planners) have been developed that leverage domain knowledge to organize the search for a successful plan. However, these specialized algorithms rely on representations tailored to specific problems and domains, which imposes an additional workload. Recent work has studied scalable techniques for finding concrete control inputs using only a given control specification in the form of a logical formula, which reduces the burden on the user.&#13;
&#13;
This thesis studies the application of program analysis techniques to the aforementioned planning problem, in conjunction with local formulae and hybrid search spaces in the style of hierarchical planners. Our observation is that the high-level structure of problem domains can often be coded into domain-specific simulators that model the high-level dynamics of the domain. This presents an opportunity to reuse that structure when describing the planning domain. We argue that this decreases the effort required to implement a planning system when a domain expert can relate domain knowledge to simulator source code. Thus, we design a planning system that can leverage simulator source code when describing a planning domain.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Information Retrieval with Dense and Sparse Representations</title>
<link href="https://hdl.handle.net/1721.1/153774" rel="alternate"/>
<author>
<name>Chuang, Yung-Sung</name>
</author>
<id>https://hdl.handle.net/1721.1/153774</id>
<updated>2024-03-16T03:33:33Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Information Retrieval with Dense and Sparse Representations
Chuang, Yung-Sung
Information retrieval, at the core of numerous applications such as search engines and open-domain question-answering systems, relies on effective textual representation and semantic matching. However, current approaches can lose nuanced lexical detail due to an information bottleneck in dense retrieval, or rely on exact lexical matching and thus overlook broader contextual relevance when using sparse retrieval. This thesis delves into improving both dense and sparse retrieval systems with advanced language models and training strategies. We first introduce DiffCSE, a difference-based contrastive learning framework for unsupervised sentence embedding and dense retrieval that can effectively capture minor differences in sentences, showcasing improved performance in semantic tasks and retrieval for open-domain question answering. We then address sparse retrieval's limitations by developing a query expansion and reranking procedure. Using pre-trained language models, we propose an expansion and reranking pipeline for better query expansion, achieving superior retrieval results both in-domain and out-of-domain while retaining sparse retrieval's computational efficiency. In summary, this thesis provides a comprehensive exploration of advancing information retrieval in the era of large language models.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Unified Framework for Characterization of Mode and Spike Routes to Rotating Stall</title>
<link href="https://hdl.handle.net/1721.1/153771" rel="alternate"/>
<author>
<name>Logrono, Marcos A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153771</id>
<updated>2024-03-16T03:17:10Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Unified Framework for Characterization of Mode and Spike Routes to Rotating Stall
Logrono, Marcos A.
In this thesis, we characterize modal and spike-type rotating stall inception for an isolated rotor using a low-order, non-linear actuator disk model. The actuator disk representation is capable of capturing stall inception behavior given an axisymmetric total-to-static pressure rise characteristic. A parametric study of the effect of the derivative of the total-to-static pressure rise with respect to flow coefficient has been carried out to (i) define the links between the computed behavior of circumferentially propagating flow disturbances and those of established linearized analyses and (ii) describe both modes and spikes as different regimes of the same dynamical framework.&#13;
&#13;
The results of the parametric study show three distinct regimes for the non-dimensional compressor characteristics examined. For total-to-static pressure rise characteristic slopes below 0.2, exponentially growing sinusoidal disturbances lead to the onset of rotating stall with growth time scales on the order of ten rotor revolutions. This behavior is characteristic of what is known as modal inception, or modes. For pressure rise slopes above 0.4, disturbances with no sinusoidal structure and with magnitudes on the order of the mean axial flow were observed before the onset of rotating stall. The growth time scales of these disturbances were on the order of a rotor revolution. This behavior is characteristic of spikes. For pressure rise slopes between 0.2 and 0.4, both behaviors were observed. These results suggest a continuous transition between modal and spike inception, contrary to their common description as two distinct phenomena.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Practical Diagnostic Tools for Deep Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/153769" rel="alternate"/>
<author>
<name>Casper, Stephen</name>
</author>
<id>https://hdl.handle.net/1721.1/153769</id>
<updated>2024-03-16T03:21:10Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Practical Diagnostic Tools for Deep Neural Networks
Casper, Stephen
The most common way to evaluate AI systems is by analyzing their performance on a test set. However, test sets can fail to identify some problems (such as out-of-distribution failures) and can actively reinforce others (such as dataset biases). Identifying problems like these requires techniques that are not simply based on passing a dataset through a black-box model. In practice, this challenge lies at the confluence of two fields: interpreting and attacking deep neural networks. Both help to improve oversight of AI. However, existing techniques are often not competitive for practical debugging in real-world applications. This thesis is dedicated to identifying and addressing gaps between research and practice.&#13;
&#13;
I focus on evaluating diagnostic tools based on how useful they are for identifying problems with networks under realistic assumptions. Specifically, this thesis introduces a benchmark for these tools based on their usefulness for identifying trojans: specific bugs that are deliberately implanted into networks. I present the following thesis: &#13;
&#13;
1. Trojan discovery is a practical benchmarking task for diagnostic tools that can be applied to both dataset-based and dataset-free techniques. &#13;
2. State-of-the-art feature attribution methods often perform poorly relative to an edge detector at discovering trojans even under permissive conditions with access to data containing trojan triggers. &#13;
3. Feature synthesis methods, particularly ones that leverage the latent representations of models, can be more effectively used for diagnostics in dataset-free contexts.&#13;
&#13;
Chapter 1 adopts an engineer’s perspective on techniques for studying AI systems. It overviews motivations for building a versatile toolbox of model-diagnostic tools. These hinge on their unique ability to help humans understand models without being limited to some readily accessible dataset.&#13;
&#13;
Chapter 2 overviews literature on interpretable AI, adversarial attacks, feature attribution, feature synthesis methods, and evaluation methods for these tools. It also reviews connections between research on interpretability tools, adversarial examples, continual learning, modularity, network compression, and biological brains.&#13;
&#13;
Chapter 3 presents a benchmark for diagnostic tools that is based on helping humans discover trojans. This can be done either (a) under permissive assumptions by allowing access to data that include the trojan triggers or (b) under stringent assumptions where no such access is available.&#13;
&#13;
Chapter 4 demonstrates the difficulty of this benchmark with a preliminary evaluation of 16 state-of-the-art feature attribution tools. This reveals two of their shortcomings. First, because they can only explain model decisions on specific examples, these tools are not equipped to help diagnose bugs without data that trigger them. Second, even under idealized conditions where examples containing a trojan trigger are available, most feature attribution methods consistently fail to identify them better than an edge detector.&#13;
&#13;
Chapter 5 focuses on dataset-free feature synthesis methods. It introduces two novel techniques for studying networks with feature-level adversarial attacks. Both use model latents to produce interpretable adversarial attacks. Compared to other state-of-the-art feature-synthesis tools, these techniques are the most useful for trojan discovery. However, there remains room for improvement on this benchmark: no technique helps humans identify trojans in more than 50% of 8-option multiple-choice questions.&#13;
&#13;
Finally, Chapter 6 analyzes gaps between research and practical applications. It argues that a lack of clear and consistent criteria for assessing the real-world competitiveness of techniques has hampered progress. I conclude by discussing directions for future work, emphasizing benchmarking, interdisciplinarity, and building a dynamic AI interpretability toolbox.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data-driven Analysis of Clinical Trials</title>
<link href="https://hdl.handle.net/1721.1/153768" rel="alternate"/>
<author>
<name>Cho, Joonhyuk</name>
</author>
<id>https://hdl.handle.net/1721.1/153768</id>
<updated>2024-03-16T03:06:00Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Data-driven Analysis of Clinical Trials
Cho, Joonhyuk
The research combines two studies in the field of clinical trials. The first evaluates the amyotrophic lateral sclerosis (ALS) drug AMX0035 using Bayesian decision analysis (BDA), balancing FDA safety standards with patient needs. This method provides a quantitative way to consider both the patient’s perspective and the disease’s impact. The second study uses machine learning models to predict how long clinical trials will take. By analyzing a large dataset, it identifies factors that affect trial duration, helping to streamline the trial process and potentially reduce costs. Together, these studies offer new ways to evaluate and manage clinical trials, combining patient-focused evaluation with efficient trial design.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wireless Scheduling for Monitoring Remote Correlated Sources</title>
<link href="https://hdl.handle.net/1721.1/153767" rel="alternate"/>
<author>
<name>Ramakanth, Rudrapatna Vallabh</name>
</author>
<id>https://hdl.handle.net/1721.1/153767</id>
<updated>2024-03-16T03:16:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Wireless Scheduling for Monitoring Remote Correlated Sources
Ramakanth, Rudrapatna Vallabh
We study the design of scheduling policies to minimize monitoring error for a collection of correlated sources, where only one source can be observed at any given time. We model correlated sources as a discrete-time Wiener process, and later as a Linear Time-Invariant process, where the increments are multivariate normal random variables, with a general covariance matrix that captures the correlation structure between the sources. Under a Kalman filter-based optimal estimation framework, we show that the performance of all scheduling policies oblivious to instantaneous error can be lower and upper bounded by the weighted sum of Age of Information (AoI) across the sources for appropriately chosen weights. We use this insight to design scheduling policies that are only a constant factor away from optimality and make the rather surprising observation that AoI-based scheduling that ignores correlation is sufficient to obtain performance guarantees. We also derive scaling results that show that the optimal error scales roughly as the square of the dimensionality of the system, even in the presence of correlation. We extend these findings to processes with looser constraints. Finally, we provide simulation results to verify our claims.
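&#13;
As a hedged toy of the AoI-based scheduling the bounds suggest, the snippet below runs a max-weight rule over invented per-source weights, abstracting away the Kalman-filter error dynamics.&#13;
&#13;
import numpy as np

N, T = 4, 100_000
w = np.array([4.0, 2.0, 1.0, 1.0])   # stand-ins for covariance-derived weights
age = np.ones(N)                     # Age of Information per source
total = 0.0
for _ in range(T):
    i = np.argmax(w * age)           # max-weight source gets the channel
    age += 1.0                       # all ages grow by one slot
    age[i] = 1.0                     # the observed source resets
    total += (w * age).sum()
print("average weighted AoI:", total / T)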
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI-assisted reaction impurity prediction and inverse structure elucidation</title>
<link href="https://hdl.handle.net/1721.1/153765" rel="alternate"/>
<author>
<name>Mohapatra, Somesh</name>
</author>
<id>https://hdl.handle.net/1721.1/153765</id>
<updated>2024-03-16T03:48:21Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">AI-assisted reaction impurity prediction and inverse structure elucidation
Mohapatra, Somesh
Identification and control of impurities play a critical role in chemical process development for drug substance synthesis. Most chemical reactions result in a number of by-products and side-products, along with the intended major product. While chemists can predict many of the main process impurities, it remains a challenge to enumerate the possible minor impurities and even more of a challenge to track and propagate impurities derived from raw materials or from step to step. Further, in the absence of a systematic means for listing out possible low-level impurities and performing impurity propagation, inverse structure elucidation – that is, identifying unknown impurities post hoc from analytical data, such as mass spectrometry data – presents a significant challenge.&#13;
&#13;
In this work, impurity prediction was established by developing an AI-based reaction predictor that takes as input the main reactants, reagents, and solvents, and the impurities in these materials. Further, the predictor was run iteratively to track impurity propagation in multi-step reactions. For inverse structure elucidation, a chemistry-informed language model was developed to translate mass spectrometry data into potential molecular structures, which can then be checked for matches against the predicted chemical reaction products. The impurity prediction tool was applied to the synthesis of common small-molecule drugs (paracetamol and ibuprofen), and the inverse structure elucidation tool was used for the identification of chemical structures from publicly available electrospray ionization mass spectrometry data. The models were applied to proprietary Amgen programs, both small-molecule drugs and biologics, with significant results noted in both projects.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Determinants of Voluntary Carbon Emissions Targets</title>
<link href="https://hdl.handle.net/1721.1/153742" rel="alternate"/>
<author>
<name>Downing, Charles</name>
</author>
<id>https://hdl.handle.net/1721.1/153742</id>
<updated>2024-03-14T03:00:47Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">The Determinants of Voluntary Carbon Emissions Targets
Downing, Charles
This study seeks to examine whether and how firms near important emissions thresholds change their behavior to meet these targets. Emissions targets are commonly measured in two ways: absolute emissions levels and emissions intensity (absolute levels normalized by sales). To meet absolute benchmarks, firms can only reduce their actual emissions. However, to meet intensity-based benchmarks, firms can either lower their emissions or raise revenue to meet their goal. This study will characterize the differences between firms that choose these two measurements, and investigate whether and when firms shift their emissions or reporting behavior to meet their emissions targets. Furthermore, this study will characterize the capital market consequences of meeting or missing emissions targets, consider potential market-based benchmarks in addition to targets set by the firms, and test cross-sectionally when firms have stronger incentives or ability to react to these targets.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Elsewhere in New York City: Seeking Opportunities for Office Conversion</title>
<link href="https://hdl.handle.net/1721.1/153741" rel="alternate"/>
<author>
<name>Hong, Nayeon</name>
</author>
<id>https://hdl.handle.net/1721.1/153741</id>
<updated>2024-03-14T03:06:07Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Elsewhere in New York City: Seeking Opportunities for Office Conversion
Hong, Nayeon
Office-to-residential conversions have emerged as the most promising solution to New York City’s housing crisis. Once unprecedented in the country’s largest office market, this trend evolved in response to the impact of the pandemic on offices, further reinforced by the surplus of underutilized office space in major districts like Manhattan. However, this phenomenon is not exclusive to Manhattan; it extends across the entire city. Boroughs outside Manhattan, such as Brooklyn, may offer untapped potential for such conversions, benefiting from more favorable conditions like lower property costs, varied zoning regulations, and diverse community needs. Broadening the scope of these conversion projects to include other boroughs could lead to a more equitable distribution of housing resources, address the city-wide housing shortage more effectively, and stimulate balanced economic growth and community development across New York City’s diverse landscape. This thesis delves into the opportunities and challenges of office-to-residential conversions and conducts a comparative case study of two properties, comparable in physical condition, one in Manhattan and the other in Brooklyn. This study aims to explore how geographic differences within New York City might impact the feasibility of conversion.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Solar Roof Monetization in US Industrial Real Estate</title>
<link href="https://hdl.handle.net/1721.1/153740" rel="alternate"/>
<author>
<name>Xu, Ben</name>
</author>
<id>https://hdl.handle.net/1721.1/153740</id>
<updated>2024-03-14T03:18:10Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Solar Roof Monetization in US Industrial Real Estate
Xu, Ben
The transition towards clean energy in the US has placed the industrial real estate sector at the forefront of solar energy adoption, leveraging extensive, underused roof area for solar power generation. This thesis scrutinizes the process of solar roof monetization, assessing the interplay between market dynamics, policy frameworks, and the financial implications of various solar roof business models within the industrial real estate sector.&#13;
&#13;
Through a mixed-methods approach, including structured interviews with industry stakeholders and an extensive review of public databases and industry research reports, the research delineates the nuanced dynamics of the industrial solar market, marked by state-dependent variability and diverse regulatory environments, as well as the business models for deployment. The study critically assesses the two predominant business models, self-ownership and roof leasing, exploring their operating structures and their implications for real estate owners.&#13;
&#13;
Utilizing a model grounded in real-world industrial underwriting, the thesis extends to a detailed financial analysis of the two solar roof business models, integrating federal- and state-level policy incentives such as tax credits, accelerated depreciation, and renewable energy certificates. A critical examination of operating metrics (production efficiency, capital expenditures, financing costs, and revenue projections) also reveals their pivotal impact on investment returns.&#13;
&#13;
The thesis concludes with practical implications for industry stakeholders, providing a comprehensive guide to executing solar roof projects that not only align with corporate sustainability targets but also enhance financial and property values. This paper serves as a roadmap for industrial real estate owners seeking to capitalize on the transition to a cleaner energy grid while reinforcing their market position in an evolving landscape shaped by environmental imperatives and economic opportunities.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Feasibility Study of a Tension-Leg Platform for Hydro-Powered Turbines System and Metocean Data Analysis for Floating Wind Turbine Design</title>
<link href="https://hdl.handle.net/1721.1/153735" rel="alternate"/>
<author>
<name>Alus, Avri</name>
</author>
<id>https://hdl.handle.net/1721.1/153735</id>
<updated>2024-03-14T04:04:45Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Feasibility Study of a Tension-Leg Platform for Hydro-Powered Turbines System and Metocean Data Analysis for Floating Wind Turbine Design
Alus, Avri
Marine and wind energy stand as promising frontiers for clean and sustainable power generation. The first chapter of this study explores the feasibility of implementing a Tension Leg Platform (TLP) for a hydropower turbine system with an overall rated power of 1500 kW.  &#13;
&#13;
The TLP semi-submersible concept for harnessing ocean energy is an innovative approach, which allows the employment of turbines in deep waters near the water surface. The TLP's structural and tendon parameters are examined through simplified static and dynamic analyses, ensuring its stability under extreme conditions. Furthermore, a power yield analysis is demonstrated, utilizing hindcast datasets of the Gulf Stream, to meticulously pinpoint the most suitable site. This selection process takes into careful consideration factors such as current velocities, water depth, and proximity to the shoreline.&#13;
&#13;
In the second chapter, we embark on a thorough preliminary analysis of metocean data, focusing on a potential site for wind turbine deployment. This analysis relies heavily on statistical examination, employing historical buoy data as well as high-resolution hindcasts for rigorous data validation. The findings illuminate the frequent occurrence of adverse weather at the site, marked by the prevalence of high and severe sea states, intermittently punctuated by storms.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Estimation of Stochastic Parameters: A GLS Approach</title>
<link href="https://hdl.handle.net/1721.1/153734" rel="alternate"/>
<author>
<name>Huo, Da</name>
</author>
<id>https://hdl.handle.net/1721.1/153734</id>
<updated>2024-03-14T03:06:42Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Efficient Estimation of Stochastic Parameters: A GLS Approach
Huo, Da
This thesis presents a novel rolling GLS-based model to improve the precision of time-varying parameter estimates in dynamic linear models. Through rigorous simulations, the rolling GLS model exhibits enhanced accuracy in scenarios with smaller sample sizes and maintains its efficacy when the normality assumption is relaxed, distinguishing it from traditional models like Kalman Filters. Furthermore, the thesis expands on the model to tackle more complex stochastic structures and validates its effectiveness through practical applications to real-world financial data, like inflation risk premium estimations. The research culminates in offering a robust tool for financial econometrics, enhancing the reliability of financial analyses and predictions.
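&#13;
A hedged sketch of the rolling GLS mechanic in Python: within each window, beta is estimated as (X' S^-1 X)^-1 X' S^-1 y under an assumed AR(1) error covariance, and the window slides to track a drifting slope. The model, covariance, and data are invented, not the thesis's specification.&#13;
&#13;
import numpy as np

def gls(X, y, S):
    # GLS estimate: (X' S^-1 X)^-1 X' S^-1 y.
    Si = np.linalg.inv(S)
    return np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

def rolling_gls(X, y, window, rho=0.5):
    k = np.arange(window)
    S = rho ** np.abs(k[:, None] - k[None, :])   # assumed AR(1) error covariance
    return np.array([gls(X[t:t + window], y[t:t + window], S)
                     for t in range(len(y) - window + 1)])

rng = np.random.default_rng(0)
T = 300
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta = np.column_stack([np.zeros(T), np.linspace(1.0, 2.0, T)])  # drifting slope
y = (X * beta).sum(axis=1) + rng.normal(0.0, 0.1, T)
print(rolling_gls(X, y, window=60)[[0, -1]])   # slope estimate drifts from ~1 to ~2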
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Development of a Mobile Motion Capture&#13;
Suite for Advancing Technology Adoption</title>
<link href="https://hdl.handle.net/1721.1/153732" rel="alternate"/>
<author>
<name>Abdo, Hadeel</name>
</author>
<id>https://hdl.handle.net/1721.1/153732</id>
<updated>2024-03-14T03:06:49Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Design and Development of a Mobile Motion Capture&#13;
Suite for Advancing Technology Adoption
Abdo, Hadeel
Motion Capture (MoCap) technology has revolutionized several industries, including filmmaking, manufacturing, sports, and healthcare. Yet, the high cost and complexity of existing precise MoCap systems can make them inaccessible to many people. In addressing this accessibility problem, the Lab-in-a-Box (LabX) project was initiated within MIT’s Center for Clinical and Translational Research (CCTR) to develop a portable, accurate, user-friendly, and inclusive MoCap system to be used in healthcare applications and beyond.&#13;
&#13;
This thesis explores the initial stages of developing the LabX system, including extensive market research and user interviews, user-centric hardware design, software development, and camera integration and sensor fusion. Decisions such as Raspberry Pi camera selection and ROS2 utilization for system integration are made to ensure optimal performance. Structural tests are conducted to ensure durability and adaptability to diverse environmental conditions and natural vibrations. This stage of the LabX project lays the foundation for creating accessible markerless tracking and less-invasive radar motion capture systems in the future. The current design of LabX enables quick customization, creating a robust foundation for broader applications in physical therapy education, in-home remote sensing, and other use cases.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Lab to Life: Bridging Gaps in Motion Capture to&#13;
Increase Public Usability through Integrated Hardware&#13;
and Software Solutions</title>
<link href="https://hdl.handle.net/1721.1/153731" rel="alternate"/>
<author>
<name>Lonni, Pierre</name>
</author>
<id>https://hdl.handle.net/1721.1/153731</id>
<updated>2024-03-14T03:49:27Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">From Lab to Life: Bridging Gaps in Motion Capture to&#13;
Increase Public Usability through Integrated Hardware&#13;
and Software Solutions
Lonni, Pierre
This Master’s thesis delves into the initial stages of the Lab-In-A-Box (LabX) project, an initiative within MIT’s Center for Clinical and Translational Research (CCTR). LabX is dedicated to simplifying the incorporation of Motion Capture (MoCap) technology into home environments. The project’s primary aim is to create portable and accurate MoCap systems, utilizing less intrusive technology (such as RADAR signals instead of traditional IR or visible light) for capturing the motion of individuals in their everyday lives. This approach seeks to revolutionize MoCap’s applicability, making it more accessible and user-friendly for public use.&#13;
&#13;
The central focus of this research is the development of a portable and stable sensor rig, which is crucial to LabX’s mission. Designed for precise data capture, the rig emphasizes ease of deployment and versatility, ensuring that it can be effectively used in various settings outside of specialized laboratories.&#13;
&#13;
In addressing the challenges presented by traditional MoCap systems, the thesis details the hardware development process, focusing on the creation of the project’s sensor rig and incorporating sensor fusion technology. This enhancement allows simultaneous data capture at different locations, emphasizing stability and portability for versatile application in various public settings.&#13;
&#13;
The thesis extends its focus to LabX’s overarching goal of enhancing MoCap’s public accessibility through integrated hardware and software solutions. A holistic approach is emphasized, encompassing sensor fusion and machine learning components. This integration aims to bridge gaps in traditional setups and render MoCap technology more inclusive and widely applicable.&#13;
&#13;
This research significantly contributes to advancing user-friendly MoCap technology, signifying a transition from controlled laboratory environments to real-world applications. The incorporation of hardware, sensor fusion, and machine learning solutions in LabX establishes a foundation for future advancements, ultimately enriching public interaction with motion capture and seamlessly integrating it into everyday life.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development of Process Control Framework Incorporating Deep Reinforcement Learning for Desktop Fiber Extrusion Device via PLC Implementation</title>
<link href="https://hdl.handle.net/1721.1/153730" rel="alternate"/>
<author>
<name>Zhang, Yutong</name>
</author>
<id>https://hdl.handle.net/1721.1/153730</id>
<updated>2024-03-14T03:02:24Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Development of Process Control Framework Incorporating Deep Reinforcement Learning for Desktop Fiber Extrusion Device via PLC Implementation
Zhang, Yutong
Optical fiber has revolutionized communication, and the market has experienced rapid growth in the last ten years. Its structure allows it to transmit information at high speeds with minimal loss over long distances. Fiber extrusion, a common manufacturing method in the industry, requires controlling the fiber diameter during its formation. In this thesis, a control framework for a desktop fiber extrusion device is developed, incorporating Deep Reinforcement Learning. By improving the mechanical design of the desktop fiber extrusion device and implementing PID controllers on the Allen-Bradley PLC, the coefficient of variation in the fiber extrusion process is reduced to 0.1. A communication path is established based on Open Platform Communications Unified Architecture (OPC UA), enabling external devices to access the data in the PLC. Using a Deep Reinforcement Learning model on a separate PC, the process is controlled to a coefficient of variation of 0.13, with the potential to reduce the response time.
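For illustration, a minimal sketch of two computations named above, a discrete PID step and the coefficient-of-variation metric; the gains and interface are assumptions, not the thesis implementation (which runs on the Allen-Bradley PLC).

import statistics

def pid_step(error, state, kp=1.0, ki=0.1, kd=0.05, dt=0.1):
    # state carries (integral, previous_error) between calls.
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

def coefficient_of_variation(diameters):
    # CV = standard deviation / mean; the thesis reports about 0.1 under PID
    # control and 0.13 under the Deep Reinforcement Learning controller.
    return statistics.stdev(diameters) / statistics.mean(diameters)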
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Microscale Analysis of Millimeter-wave Induced Vitrified Basalt for Use in Enhanced Geothermal Energy Systems</title>
<link href="https://hdl.handle.net/1721.1/153727" rel="alternate"/>
<author>
<name>Meltzer, Eve</name>
</author>
<id>https://hdl.handle.net/1721.1/153727</id>
<updated>2024-03-14T03:53:58Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Microscale Analysis of Millimeter-wave Induced Vitrified Basalt for Use in Enhanced Geothermal Energy Systems
Meltzer, Eve
Extraction of the energy available from geothermal heating in the Earth could provide substantial long-term contributions to energy needs. However, there are major technical limitations with the current geothermal drilling process. A new technology in the field of EGS that uses a millimeter (MM) wave gyrotron, allowing for quicker, more efficient drilling, could be a potential solution to these limitations. The MM-wave drilling process, a technique developed by Dr. Paul Woskov of the MIT Plasma Lab, has two significant advantages over traditional drilling: 1. The well hole advances through melting of the rock, which is faster than mechanical drilling. 2. The molten rock then solidifies, creating a vitrified wall support without the need for extra casings. This drilling and casing process can potentially save money, time, and material. The study presented in this thesis aims to understand the strength and microscale mechanical and chemical properties of the vitrified material and to characterize what happens to the rock, specifically basalt, pre- and post-melting, using a series of experimental and analytical tools. These include Scanning Electron Microscopy (SEM), Energy Dispersive Spectroscopy (EDS), nano-indentation, Raman Spectroscopy, and optical imagery.

The results presented in this thesis show the creation of a non-crystalline amorphous solid that has relatively high strength values with slight evidence of micro-cracking. There are significant elemental differences between the basalt matrix, transition zone matrix, and solidified melt, in addition to changes in the molecular phases. The partial melting of basalt minerals throughout the transition zone was also recorded. Ultimately, due to micro-cracking and the variability in the transition zone's chemical make-up, there may be significant risks to using this material as a well-bore casing as it is now. However, these results open up the possibility of future research in the field of environmental sustainability for alternative uses of this new vitrified material.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Applying System Dynamics to Simulate and Forecast Rental Real Estate Market</title>
<link href="https://hdl.handle.net/1721.1/153726" rel="alternate"/>
<author>
<name>Chauhan, Rohit Singh</name>
</author>
<id>https://hdl.handle.net/1721.1/153726</id>
<updated>2024-03-14T03:02:04Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Applying System Dynamics to Simulate and Forecast Rental Real Estate Market
Chauhan, Rohit Singh
This research explores the use of system dynamics modeling to simulate and forecast a sub-market within the real estate industry. In doing so, it examines the feasibility and potential of a system dynamics-based tool that could reliably forecast future trends and inform decision-making for businesses in a sub-market. It is based on the original system dynamics model for real estate markets developed by John Sterman (Sterman, Case Study: Boom and Bust in Real Estate Markets, 2000) and on subsequent applications of this methodology in a real estate context. It expands on this existing literature by recognizing and incorporating concepts central to the real estate industry, such as rental rates, affordability, absorption, inflation, cap rates, and rental prices, as key inputs for predicting market movements.

As a test bed, multifamily rental housing in the South Boston region is identified for application. The study predicts short-term movement for multifamily assets in this sub-market and compares the result with forecasts from other major sources. It also highlights the limitations of this approach, such as the smoothing effect of generated data and its limited ability to capture seasonality in the market. The study further explores potential avenues for enhancing the functionality and accuracy of forecasts by endogenizing additional factors, thus establishing a foundation for subsequent research.
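A minimal stock-and-flow sketch in the spirit of the approach described above; every parameter value is an illustrative assumption, not a calibrated input from the thesis.

def simulate(years=10, dt=0.25):
    stock = 100_000.0   # occupied multifamily units
    vacancy = 5_000.0   # vacant units
    rent = 2_500.0      # monthly rent, USD
    history = []
    for _ in range(int(years / dt)):
        vacancy_rate = vacancy / (stock + vacancy)
        # Rents adjust toward a target driven by the vacancy rate.
        target_rent = 2_500.0 * (1.0 + (0.05 - vacancy_rate) * 2.0)
        rent += (target_rent - rent) * dt / 2.0   # two-year adjustment time
        # Construction responds to rent levels; absorption fills vacant units.
        construction = max(0.0, (rent / 2_500.0 - 1.0) * 4_000.0)
        absorption = min(vacancy, 2_000.0 * dt)
        vacancy += construction * dt - absorption
        stock += absorption
        history.append((rent, vacancy_rate))
    return history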
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Navigating the Storms of Distressed Ventures: South Korean Investments in US Office Real Estate</title>
<link href="https://hdl.handle.net/1721.1/153724" rel="alternate"/>
<author>
<name>Lee, David Sang Hyup</name>
</author>
<id>https://hdl.handle.net/1721.1/153724</id>
<updated>2024-03-14T03:00:38Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Navigating the Storms of Distressed Ventures: South Korean Investments in US Office Real Estate
Lee, David Sang Hyup
During the early to mid-2010s, Korean investors flooded into the US office real estate market, enticed by the promise of higher returns in an era of low interest rates. At this time, the Korean base interest rate exceeded the Fed funds rate, minimizing losses from currency hedging. The allure of investment was further magnified by the "herding effect" – a phenomenon driven by headlines of Korean institutions achieving success in the US office market. Fear of missing out (FOMO) and pressure from executives propelled a wave of Korean investments into the same sector. Today, Korean investors face distress in this market. The aftermath of COVID-19 has led to a significant decline in demand for office space, with employees reluctant to return to physical offices. Furthermore, the distress extends beyond demand dynamics; it encompasses financial turmoil caused by the Federal Reserve's rapid interest rate hikes. These hikes have created a double-edged sword, adversely impacting both equity investors struggling to meet loan obligations and lenders unable to recoup their loans.  This thesis explores potential solutions through real-life case studies, drawing from the author's experience working at a number of real estate private equity firms. The path to resolution, though, is fraught with challenges, including but not limited to: information asymmetry, moral hazards, a lack of experience in US office market distress, complex investment committee approval procedures, and the entanglement of numerous investors in single deals. This thesis sheds light on these complexities while offering insights into navigating the distressed landscape of US office real estate investments for Korean investors.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unveiling the Dynamics of Inflation in Housing Rent</title>
<link href="https://hdl.handle.net/1721.1/153723" rel="alternate"/>
<author>
<name>Flores Jimenez, Julio E.</name>
</author>
<id>https://hdl.handle.net/1721.1/153723</id>
<updated>2024-03-14T03:04:05Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Unveiling the Dynamics of Inflation in Housing Rent
Flores Jimenez, Julio E.
Inflation is one of today’s biggest short-term global economic challenges. Housing costs, a persistent and irreplaceable component of inflation whose price increases have strongly eroded the purchasing power of American households, have more than doubled in the past 20 years. Housing cost rises have outpaced inflation for the rest of the products typically consumed by individuals, and low-income earners have been highly burdened by the situation. However, this has not always been the tendency, and this paper will explain how the recent rise in rents can be mainly attributed to a higher demand for housing, as opposed to higher construction and operating costs due to inflation spillovers into real estate related products. This will be demonstrated through both qualitative and quantitative analyses of the housing market and its price dynamics in the United States. The first section of this document, The Upheaval of Housing Costs, explains how rising house prices have spilled over into rising residential rents, and how this has been highly influenced by long periods of expansionary monetary policy and the implementation of Quantitative Easing, along with rising income inequality and the failure of the market to swiftly adapt its residential products to the changing dynamics in demand. This chapter offers a well-rounded explanation of the demand determinants of housing, as well as historical context to better understand why rents have outpaced inflation for other products since the 1980s. The second section, Rents, House Prices, and Inflation, presents a quantitative analysis of how house prices and inflation for non-rent products impact residential rents. This analysis was carried out with an Error Correction Model to capture both the short-term and long-term dynamics of these variables, given that changes in house prices and inflation do not fully impact rents immediately. This model was run for the United States and replicated for Boston, Chicago, Dallas, Detroit, Houston, Los Angeles, Miami, New York, Philadelphia, and San Francisco. Results of this analysis show that since 1978, demand-pull inflation has dominated rent growth in the United States and in most of the studied cities. This analysis is followed by an Appendix showcasing the detailed outputs for every model, as well as graphs to visually support the quantitative analysis and provide comprehensive evidence of the dynamics of these variables in those cities.
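For readers unfamiliar with the method, a minimal two-step (Engle-Granger style) Error Correction Model sketch follows; the series names are placeholders, not the thesis data.

import pandas as pd
import statsmodels.api as sm

def fit_ecm(rents, house_prices, cpi_nonshelter):
    # Long-run relation: rent level on house prices and non-rent inflation.
    X = sm.add_constant(pd.concat([house_prices, cpi_nonshelter], axis=1))
    long_run = sm.OLS(rents, X).fit()
    ect = long_run.resid.shift(1)   # lagged error-correction term
    # Short-run relation: first differences plus the lagged ECT.
    dY = rents.diff()
    dX = pd.concat([house_prices.diff(), cpi_nonshelter.diff(), ect], axis=1)
    short_run = sm.OLS(dY, sm.add_constant(dX), missing="drop").fit()
    return long_run, short_run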
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Performance of Real Estate Investment Strategy Across Multiple Cycles: A Comparison of Core and Non-Core Strategies Based on A New Dataset and Industry Interviews</title>
<link href="https://hdl.handle.net/1721.1/153722" rel="alternate"/>
<author>
<name>Ding, Yizhuo (Wilson)</name>
</author>
<id>https://hdl.handle.net/1721.1/153722</id>
<updated>2024-03-14T03:38:20Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">The Performance of Real Estate Investment Strategy Across Multiple Cycles: A Comparison of Core and Non-Core Strategies Based on A New Dataset and Industry Interviews
Ding, Yizhuo (Wilson)
In the wake of the COVID-19 pandemic, the real estate capital markets have been thrust into a realm of heightened uncertainty, primarily due to fluctuating federal funds rates and rapidly changing economic conditions. This thesis delves into the intricate dynamics of Core and Non-Core private equity real estate strategies in response to these turbulent times. The research aims to dissect and understand the performance and strategic adjustments in real estate investment amidst changing capital market cycles, particularly in the post-pandemic landscape. Using a new source of data from the MSCI Property Index and the NCREIF Research database, the study analyzes historic performance trends across strategies since 2000, identifying a strong correlation between market fundamentals and private real estate returns. The analysis highlights the superior performance of Development strategies in the Sunbelt and Southwest regions, contrasted with the decline of Rehabilitation/Repositioning strategies in West Coast markets, reflecting a shift in office sector demand. The thesis also explores market expectations and strategic responses during the high-interest-rate environment and secular market changes of the fourth quarter of 2023. Qualitative insights from 21 industry professionals point to a transition from falling values in 2023 to value recovery in 2024. The interviews also signal short-term opportunities for Core/Core-plus strategies in the forthcoming lower-rate environment as inflation eases. The thesis underscores the importance of aligning investment strategies with thematic investment trends, as evidenced by the success of development strategies in certain regions. It posits that while investment style affects return and volatility, the overarching drivers of long-term returns across strategies are thematic trends and the broader market environment, including access to capital, leverage opportunities, and secular shifts. The study advocates for a holistic approach to investment decisions, considering thematic trends and market dynamics beyond just the immediate return and volatility differences.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Housing in Massachusetts</title>
<link href="https://hdl.handle.net/1721.1/153717" rel="alternate"/>
<author>
<name>Nader, Andy</name>
</author>
<id>https://hdl.handle.net/1721.1/153717</id>
<updated>2024-03-14T03:39:44Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Housing in Massachusetts
Nader, Andy
Massachusetts is experiencing a housing crisis, with the cost of housing increasing more rapidly than in any comparable coastal state over the past 40 years. This growth in the cost of housing has far outpaced the growth in household income. This thesis explores state economics, the housing market in Massachusetts, and one piece of recent legislation, the MBTA Communities Act, designed to directly address the housing crisis. Over the past forty years, cities and towns in Massachusetts have developed zoning codes that restrict the ability to add new housing to the existing stock. With such strong local control over land use, I argue that intervention is needed from the state to provide zoning relief and institute as-of-right high-density zoning. I use the town of Milton as a case study to illustrate the adoption of the new legislation and theorize on the impact of unlocking new housing.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Real Estate Redevelopment Framework: Quantitative Analysis of Adaptive Reuse Strategies</title>
<link href="https://hdl.handle.net/1721.1/153716" rel="alternate"/>
<author>
<name>Kittisorayut, Khanachai (Earn)</name>
</author>
<id>https://hdl.handle.net/1721.1/153716</id>
<updated>2024-03-14T03:28:40Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Real Estate Redevelopment Framework: Quantitative Analysis of Adaptive Reuse Strategies
Kittisorayut, Khanachai (Earn)
As urban landscapes continue to evolve, real estate developers face opportunities and challenges in redeveloping underutilized properties while maximizing their return on investment. This thesis explores the concept of adaptive reuse as a socially, environmentally, and economically viable strategy for real estate redevelopment. It provides a systematic and quantitative approach to identifying potential buildings, prioritizing areas for improvement, and assessing the financial feasibility of adaptive reuse projects.

The study begins by exploring the fundamental concepts of adaptive reuse, encompassing the cultural, urban, and environmental benefits that mutually contribute to economic value creation. A series of quantitative analyses then dissects the value drivers of adaptive reuse strategies. These analyses form a strategic toolkit, categorizing various strategies by investment phase from acquisition to disposition.

Using Center Plaza in downtown Boston as a real-world case study, the thesis employs the Discounted Cash Flow (DCF) method to determine key financial metrics such as Net Present Value (NPV), Internal Rate of Return (IRR), Return on Cost (ROC), and Multiple on Invested Capital (MOIC). These metrics compare financial returns across different redevelopment scenarios: no improvement, adaptive reuse, and new construction. Further, the study employs volatility and cost-benefit analyses to gauge the impact on NPV and identify conditions under which redevelopment is viable. The findings suggest that adaptive reuse can outperform complete redevelopment when conditions are favorable, requiring a minimum yield-on-cost for improvement averaging around 6.8%.

In conclusion, the thesis provides a comprehensive framework for enhancing value and evaluating potential buildings for real estate redevelopment. It serves as a resource for real estate professionals, property owners, policymakers, and preservationists, advocating for the conservation and revitalization of our dynamic urban landscapes.
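As a rough, self-contained illustration of the four metrics listed above, with invented cash flows rather than figures from the case study:

def npv(rate, cashflows):
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.9, hi=1.0, iterations=200):
    # Bisection on NPV; assumes NPV changes sign exactly once on [lo, hi].
    for _ in range(iterations):
        mid = 0.5 * (lo + hi)
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

flows = [-100.0, -20.0, 12.0, 12.0, 160.0]   # acquisition, improvement, income, sale
invested = 120.0
profit = sum(cf for cf in flows if cf > 0)
print("NPV at 8%:", round(npv(0.08, flows), 2))
print("IRR:", round(irr(flows), 4))
print("MOIC:", round(profit / invested, 2))   # multiple on invested capital
print("ROC:", round(12.0 / invested, 3))      # stabilized income over total cost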
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Examination of Airbnb Demand and Supply in India</title>
<link href="https://hdl.handle.net/1721.1/153715" rel="alternate"/>
<author>
<name>Chotangada, Gautham Somana</name>
</author>
<id>https://hdl.handle.net/1721.1/153715</id>
<updated>2024-03-14T04:09:14Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Examination of Airbnb Demand and Supply in India
Chotangada, Gautham Somana
This thesis finds that Airbnb occupancy in India is low and examines the reasons for it. In particular, it focuses on the curious case of year-round occupancies above 85% for RAHO properties, a hospitality company with accommodations listed on Airbnb in Coorg, South India. When comparing RAHO accommodation occupancies with the average Indian Airbnb occupancy of 36% and the average branded hotel chain occupancy of 66%, some questions become apparent. Is RAHO’s high occupancy systemic or idiosyncratic? What could be the reason underpinning the occupancy rate differences between Airbnb and branded hotel chains? This is a particularly relevant topic given the changes in the Indian economy. India is a rapidly developing country with an average year-on-year real GDP growth of 5.75% from 2013 to 2023, and GDP per capita has grown by 57% over the same period. This economic development and increased disposable income have produced a larger, more powerful middle-income group that travels more often. As a result, the number of domestic traveler visits doubled from 2013 to 2019. This increasing demand can be more easily met if accommodation supply comes from individual homeowners through online travel agencies (OTAs). The findings aim to inform strategies for improving the supply of suitable accommodations for this target group, particularly in non-urban vacation destinations in India. This thesis hopes to provide a valuable resource for entrepreneurs in the space to build sustainable businesses by highlighting the primary reasons for higher occupancies and suggesting approaches to achieve them.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Choice Modeling and Assortment Optimization on the Transformer Model</title>
<link href="https://hdl.handle.net/1721.1/153714" rel="alternate"/>
<author>
<name>Jiang, Qingxuan</name>
</author>
<id>https://hdl.handle.net/1721.1/153714</id>
<updated>2024-03-14T03:05:01Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Choice Modeling and Assortment Optimization on the Transformer Model
Jiang, Qingxuan
The problem of modeling customer choices and finding assortments with maximal revenue has been widely studied in revenue management. Random utility models (RUMs) are typically used to model choice. These models implicitly enforce a rational decision-making process whereby a customer is endowed with utilities for each product in the assortment and picks the product that maximizes her utility. This work explores a general class of choice models in which the customer’s decision-making process is not constrained in this fashion.

To allow for departures from rational choice (and RUMs), we posit that the customer’s indirect utility associated with a product is a function of the assortment offered to her. Motivated by the success of transformer models in deep learning, we investigate the case where this utility function is defined through a trained transformer network. This leads to a new class of neural network-based discrete choice models, which we call transformer choice models. The universal approximation property of the transformer network ensures that our model can approximate any discrete choice model, and thus it can capture irrationalities in choice behavior.

We perform computational experiments with real data to verify the generalization performance of our transformer choice model on unseen assortments. To ensure that our model does not overfit the training data, we use dropout as the regularization method during training. We compare our model to both traditional choice models (the multinomial logit model and its synergistic variant that considers cross-product interaction) and machine learning-based choice models (the decision forest choice model and the feedforward neural network choice model) on two datasets: a large grocery panel dataset and an online hotel search dataset. We show that, on both datasets, the transformer choice model generalizes well to unseen assortments with proper regularization. Moreover, on the more complex online hotel search dataset, the transformer choice model outperforms all other models in terms of out-of-sample error.

We finally consider the assortment optimization problem on transformer choice models. While the general assortment optimization problem is complex and intractable, we empirically evaluate and compare several heuristic algorithms, including random search, quadratic approximation, and local search. Our experiments on transformer choice models with real prices show that a simple local search heuristic finds the global optimum for the assortment optimization problem in three-fourths of the data categories, while achieving a good approximation on the rest. This shows that, in practice, local search can be a reasonable heuristic for assortment optimization on transformer choice models.
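A minimal sketch of such a local search, written against a generic choice-model interface; the function names are assumptions, not the thesis code.

def expected_revenue(assortment, prices, choose_probs):
    # choose_probs(assortment) returns a dict mapping each offered product to
    # its purchase probability, e.g. from a trained transformer choice model.
    probs = choose_probs(assortment)
    return sum(prices[p] * probs[p] for p in assortment)

def local_search(products, prices, choose_probs):
    current = set(products)   # start from the full assortment
    best = expected_revenue(current, prices, choose_probs)
    improved = True
    while improved:
        improved = False
        for p in products:                # toggle one product at a time
            candidate = current ^ {p}     # symmetric difference adds/removes p
            if candidate:
                rev = expected_revenue(candidate, prices, choose_probs)
                if rev > best:
                    current, best, improved = candidate, rev, True
    return current, best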
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sensitivity of Precipitation to Land-use Changes in a Regional Climate Model of West Africa</title>
<link href="https://hdl.handle.net/1721.1/153713" rel="alternate"/>
<author>
<name>Ryser, Patric</name>
</author>
<id>https://hdl.handle.net/1721.1/153713</id>
<updated>2024-03-14T03:14:15Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Sensitivity of Precipitation to Land-use Changes in a Regional Climate Model of West Africa
Ryser, Patric
Limited water resources, climate change, and food security needs in West Africa present a special set of challenges in the years to come as the population grows. An optimized irrigation scheme for agriculture can change the regional climate by increasing rainfall in specific areas. By altering the background large-scale circulations, it could increase water availability for agriculture and lead to more precipitation overall in areas with water scarcity. Both observational and model studies have examined irrigation impacts around the world, including in West Africa. However, the intermediate mechanisms, such as the specific roles of the atmospheric structures of the Planetary Boundary Layer (PBL) and Lifting Condensation Level (LCL), or how background wind patterns are affected under certain land-use changes, have not been thoroughly explored. This thesis analyzes the atmospheric changes due to land-use and land-cover changes (LULCC) by examining the PBL, the LCL, surface wind, surface pressure, and other atmospheric variables to quantify the underlying physical mechanisms that shape rainfall. We use the MIT Regional Climate Model (MRCM) to test different LULCC scenarios. In the irrigation experiment, the LCL is more sensitive and drops more than the PBL, especially in the north, yet rainfall increases only south of the irrigation area. There also exists a transitional zone, north of which there is less rainfall. Desertification increases both the PBL and LCL heights, but the increase in LCL is greater. This pushes the cloud base higher than the PBL, preventing cloud formation and rainfall. However, the simulated rainfall changes do not mirror this development. At a certain latitude, there is again a transitional zone, north of which the rainfall decreases and south of which the rainfall increases intermittently. Given the patterns of the precipitation changes, we believe that different mechanisms are at work in the desertification and irrigation experiments. This study hypothesizes a blocking mechanism that prevents the monsoon from travelling northward, due to a high surface-pressure anomaly observed north of the irrigated zone under the irrigation scenario.

The changes in atmospheric structure analyzed in this thesis, specifically the PBL and LCL, surface pressure, and wind patterns, provide another dimension for understanding the effects of irrigation and desertification on rainfall, enabling more optimal irrigation strategies. The analysis also provides insights into the locations where natural vegetation or croplands may benefit from additional rainfall, which could facilitate soil carbon sequestration, a nature-based solution for combating climate change.
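To make the LCL concept concrete, here is a back-of-the-envelope sketch using Espy's approximation of roughly 125 m of LCL height per degree Celsius of dewpoint depression; the values are illustrative and are not MRCM output.

def lcl_height_m(temp_c, dewpoint_c):
    # Espy's approximation: LCL height grows with the dewpoint depression.
    return 125.0 * (temp_c - dewpoint_c)

# Irrigation moistens the boundary layer, narrowing the dewpoint depression
# and lowering the LCL, consistent with the drop reported above.
print(lcl_height_m(35.0, 15.0))   # dry case: 2500 m
print(lcl_height_m(32.0, 22.0))   # irrigated case: 1250 m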
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Top Retailers in the United States:  A Changing Landscape of Space Demand in a Post-COVID Era</title>
<link href="https://hdl.handle.net/1721.1/153711" rel="alternate"/>
<author>
<name>Sun, Yueqi</name>
</author>
<id>https://hdl.handle.net/1721.1/153711</id>
<updated>2024-03-14T04:02:15Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Top Retailers in the United States:  A Changing Landscape of Space Demand in a Post-COVID Era
Sun, Yueqi
The retail sector in the United States has been undergoing a paradigm shift, predominantly driven by digitalization and further accelerated by the disruptive forces of the pandemic. This study examines the dynamic space needs of top retailers in the U.S. within the context of the post-COVID era. The study employs both qualitative and quantitative analyses, including macro research and pairwise comparisons of key financial and physical space metrics of 83 listed U.S. retail companies from 2017 to 2022. The research reveals a significant shift of revenues toward e-commerce and a substantial correlation between revenues in different channels and the space needs of physical stores and distribution facilities. By delving into the data and models, the thesis provides potential applications and insights into how stakeholders within the industry could leverage the findings to plan and adapt in an evolving retail landscape.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Negotiating ROI (Return on Investment) for ROI (Return on Impact)  A Pre-Feasibility Study of Socio-Eco Resort Development in Eastern Indonesia</title>
<link href="https://hdl.handle.net/1721.1/153710" rel="alternate"/>
<author>
<name>Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/153710</id>
<updated>2024-03-14T03:23:45Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Negotiating ROI (Return on Investment) for ROI (Return on Impact)  A Pre-Feasibility Study of Socio-Eco Resort Development in Eastern Indonesia
Christopher
The thesis delves into the intriguing possibility of developing a nature-based resort in the pristine Raja Ampat region of Eastern Indonesia that simultaneously maximizes Return on Investment (ROI) and Return on Impact (ROI*). With the tourism industry in Raja Ampat growing at an impressive rate of 310% in just five years before the pandemic hit, the potential for a successful socio-eco resort is undeniable. However, the study recognizes the need to consider the growing demand for environmentally sustainable travel and the desire of travelers to positively impact the local economy. The research aims to determine the best partnership structure and agreement for the general partner (GP), limited partner (LP), and hotel management to achieve the desired alignment between ROI and ROI*. This requires analyzing the level of sacrifice necessary for impact and how to measure the impact on various stakeholders, including investors, community leaders, local communities, hotel management firms, and potential customers. Additionally, the thesis explores the metrics to use when measuring impact for each stakeholder and ultimately aims to align the interests of all parties involved. The study recognizes the critical need to create a socio-eco nature-based resort that not only delivers financial returns but also generates social and environmental benefits. The research provides a unique perspective on the importance of local economic growth in a less developed area in Indonesia. Ultimately, the thesis aims to identify a partnership structure that ensures the success of the proposed resort while creating a positive impact on the local economy and environment.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>To attract or to oscillate: Validating dynamics with behavior</title>
<link href="https://hdl.handle.net/1721.1/153709" rel="alternate"/>
<author>
<name>Murray, Keith T.</name>
</author>
<id>https://hdl.handle.net/1721.1/153709</id>
<updated>2024-03-14T03:35:11Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">To attract or to oscillate: Validating dynamics with behavior
Murray, Keith T.
In recent years, the “computation-through-dynamics” framework has gained traction within the neuroscience community as a means of describing how neurological processes implement behavioral computations. The framework argues that computations in neural systems are best explained through dynamical systems in which behaviorally-relevant variables are represented and manipulated via dynamical phenomena. While a variety of previous works have demonstrated the framework's productivity, there are a number of challenges surrounding its efficacy. In this thesis, we identify and address two challenges concerning the existence of multiple dynamical systems which perform the same computation.

We show that a continuous-time recurrent neural network (CT-RNN) can implement two distinct dynamical systems, termed the “attractive mechanism” and the “oscillatory mechanism”, to compute a novel modular arithmetic task inspired by the card game SET. The attractive mechanism computes modular arithmetic through traversing a lattice of fixed-point attractors. The oscillatory mechanism computes modular arithmetic through phase-shifts on a limit cycle. The existence of these two dynamical mechanisms raises two challenges for the “computation-through-dynamics” framework:
1. How can computationally similar, yet dynamically distinct systems be experimentally identified?
2. What criteria determine the implementation of one dynamical system versus another?

We address these questions by advocating for the use of behavioral phenomena. Through two experiments, we show how our dynamical mechanisms produce distinct psychometric curves when classifying ambiguous stimuli and generalize to unseen stimuli at different rates when trained on partial datasets. We further argue how these behavioral phenomena can serve as ecological criteria in determining the implementation of a mechanism. These results underscore the utility of behavior in the “computation-through-dynamics” framework.

We conclude this thesis by formulating levels of abstraction for the “computation-through-dynamics” framework, termed “levels of neural computation”. Levels of abstraction were critically important in establishing the efficacy of digital computation; therefore, we speculate that the “levels of neural computation” will further advance the efficacy of the framework. These levels argue for interpreting dynamical systems as implementations of more abstract “geometric representations and manipulations” that effectively serve as neural algorithms.
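For concreteness, a minimal Euler-discretized CT-RNN of the class the thesis trains; the sizes, random weights, and input encoding are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, tau, dt = 64, 1.0, 0.1
W = rng.normal(0, 1.0 / np.sqrt(n), (n, n))   # recurrent weights
W_in = rng.normal(0, 1.0, (n, 3))             # input weights (3 stimulus features)

def step(h, x):
    # tau * dh/dt = -h + W tanh(h) + W_in x
    dh = (-h + W @ np.tanh(h) + W_in @ x) / tau
    return h + dt * dh

h = np.zeros(n)
for _ in range(50):   # present one stimulus for 5 time units
    h = step(h, np.array([1.0, 0.0, 0.0]))
# Whether trained dynamics settle onto a lattice of fixed points (attractive
# mechanism) or phase-shift along a limit cycle (oscillatory mechanism) is a
# property of the trained weights, per the thesis.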
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neural implicit representations for engineering design</title>
<link href="https://hdl.handle.net/1721.1/153704" rel="alternate"/>
<author>
<name>Rebbagondla, Jaya Manideep</name>
</author>
<id>https://hdl.handle.net/1721.1/153704</id>
<updated>2024-03-14T03:02:36Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Neural implicit representations for engineering design
Rebbagondla, Jaya Manideep
A good design geometry parameterization is essential for mechanical design engineers to quickly modify design features without remodeling everything from scratch. However, with the advent of better manufacturing methods, design geometries are becoming more and more complicated. Design parameterization is even more important in such cases, as remodeling a complex design consumes significant time. Furthermore, such a parameterization can also aid the creative ideation of design engineers and decision processes at the management level.

However, traditional design representation methods (B-rep, meshes, etc.) have difficulty representing designs with diverse topologies using the same limited number of parameters. Implicit neural representations are gaining popularity in 3D geometry representation because of their ability to represent a diverse set of designs in a fixed-length latent vector space. The goal of this thesis is therefore to determine the best implicit neural architecture for building a latent space of design geometries that are diverse in their topologies, and to demonstrate methods by which the learned latent space can be explored.

The effectiveness of this parameterization method is demonstrated by analyzing the reconstruction quality of the learned designs and the regularization quality of the latent space for an eight-design dataset. The superiority of these results is demonstrated both qualitatively and quantitatively. Several latent space exploration tools are then proposed to analyze the resultant latent space. Unique design geometry results are demonstrated for methods such as latent space interpolation, principal component analysis, and latent vector scaling. While random sampling of the latent space is shown to yield low-quality results because of the sparsity of the latent space, random sampling of the principal components of the latent space is shown to yield meaningful design geometries. Furthermore, a user interface for design space exploration is proposed wherein the user can explore the parameter space by simply tuning the proportions of each of the dataset geometries. The possibility of training surrogate models that map the latent space to metrics such as maximum von Mises stress is also analyzed using a dataset of 25 designs. Finally, the required characteristics of design parameterization are revisited to demonstrate that the proposed method satisfies them.
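A minimal sketch of the latent-space interpolation mentioned above; the helper name and decoder interface are assumptions, not the thesis code.

import numpy as np

def interpolate_codes(z_a, z_b, steps=5):
    # Linearly blend the latent code of design A into that of design B.
    return [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

# Decoding each blended code with the trained implicit network (and meshing
# the result, e.g. via marching cubes) yields the intermediate geometries.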
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Does Investors’ Belief on Other Investors’ Information Acquisition Affect Trading and Price?</title>
<link href="https://hdl.handle.net/1721.1/153702" rel="alternate"/>
<author>
<name>Wang, Yuting (Economist)</name>
</author>
<id>https://hdl.handle.net/1721.1/153702</id>
<updated>2026-02-03T16:43:05Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Does Investors’ Belief on Other Investors’ Information Acquisition Affect Trading and Price?
Wang, Yuting (Economist)
I study how investors’ belief on other investors’ information acquisition about an asset affects trading and price, holding constant investors’ actual information acquisition. I hypothesize that the predictions depend on the trading strategy investors adopt, which is essentially determined by the nature of the asset and the level of investor sophistication. In a world where investors are able to form high-quality independent estimates of the fundamental asset value, they extract other investors’ signals from the price change and end up trading more aggressively on their private signals when they believe there have been more information acquirers. In contrast, in a world where investors cannot form high-quality independent estimates of the asset value, they tend to adopt a heuristic strategy and trade less aggressively on their private signals when they believe there have been more information acquirers. Using comprehensive private meetings data in China from 2007 to 2017 and a mandate by the Shenzhen Stock Exchange in 2012 that requires firms to disclose the dates and participants of private meetings within two trading days, I find that investors on average trade less aggressively when they believe there have been more information acquirers, consistent with the heuristic world. The results are concentrated in firms with high information uncertainty, e.g., firms with high market-to-book and volatility, which approximate a world where investors are less likely to have a high-quality fundamental anchor, supporting my theoretical mechanisms.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Elucidation of Battery Electrolyte Coordination Sphere Thermodynamics via Calorimetric and Potentiometric Titrations</title>
<link href="https://hdl.handle.net/1721.1/153698" rel="alternate"/>
<author>
<name>Skiba, Dhyllan A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153698</id>
<updated>2024-03-14T04:06:45Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Elucidation of Battery Electrolyte Coordination Sphere Thermodynamics via Calorimetric and Potentiometric Titrations
Skiba, Dhyllan A.
Rechargeable metal-anode batteries are a promising post-Li-ion battery development. However, the high reactivity of metallic anodes with the electrolyte results in the formation of a solid-electrolyte interphase (SEI). Electrolyte design is a key handle for controlling the SEI composition in metal-anode batteries, but our understanding of the electrolyte—specifically the cation’s first coordination sphere—is limited. In this thesis, the study of ion solvation and complexation is brought into the context of battery electrolytes. Relevant data from the literature are summarized and supplemented with enthalpy of solution (ΔsolH) and enthalpy of transfer (ΔtrH) measurements for the Li-battery-relevant salts LiPF6 and LiTFSI in a set of polar aprotic solvents. The observed trends are rationalized by consideration of solvent and anion properties, particularly solvent donicity and anion size. To achieve a finer picture of the Li+ coordination sphere, isothermal titration calorimetry (ITC) and potentiometric titrations (PT) were employed with a set of exemplar electrolytes to probe the thermodynamic evolution of the Li+ coordination complex as a weak solvent is displaced by a stronger solvent in the first coordination sphere. Raman spectroscopy is used to confirm that solvent displacement occurs as expected, and the effect of the anion on ITC measurements is investigated. A statistical binding model is developed and fit to the experimental titration data to extract an average change in Gibbs free energy (ΔG), enthalpy (ΔH), and entropy (ΔS) of solvent displacement. Preferential solvation tendencies are quantified for EC:DMC and EC:PC electrolytes using this methodology and compared with preferences observed by other workers. This thesis provides a framework for future studies on the thermodynamics of more complex battery electrolyte coordination environments and their connection with SEI composition.
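As a toy illustration of fitting a binding model to titration heats and back-calculating ΔG, ΔH, and TΔS; the data are synthetic, and the simplified 1:1 isotherm below is not the thesis's statistical model.

import numpy as np
from scipy.optimize import curve_fit

R = 8.314   # J/(mol K)

def heat_per_injection(ratio, K, dH):
    # Fraction of cation sites converted at each titrant ratio, times dH (kJ/mol).
    theta = K * ratio / (1.0 + K * ratio)
    return dH * theta

ratios = np.linspace(0.1, 3.0, 20)
data = heat_per_injection(ratios, 50.0, -12.0)
data = data + np.random.default_rng(1).normal(0, 0.2, ratios.size)

(K_fit, dH_fit), _ = curve_fit(heat_per_injection, ratios, data, p0=[10.0, -5.0])
dG = -R * 298.15 * np.log(K_fit) / 1000.0   # kJ/mol
TdS = dH_fit - dG                           # kJ/mol, from dG = dH - T*dS
print(K_fit, dH_fit, dG, TdS)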
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forecasting Equity Volatility Dynamics with Markov-Switching EGARCH Models</title>
<link href="https://hdl.handle.net/1721.1/153697" rel="alternate"/>
<author>
<name>Dennis-Sharma, Tyson</name>
</author>
<id>https://hdl.handle.net/1721.1/153697</id>
<updated>2024-03-14T03:35:55Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Forecasting Equity Volatility Dynamics with Markov-Switching EGARCH Models
Dennis-Sharma, Tyson
Understanding and anticipating stock market volatility enables better portfolio management. We forecast US equity volatility with a Markov-Switching EGARCH model with one high and one low volatility regime. We show that this model contains similar information about future volatility as the VIX Index. It also outperforms single-regime GARCH and EGARCH models. Moreover, the model’s 1-day ahead regime predictions are economically significant: market volatility and kurtosis, equity risk premia, and stock-bond relations shift when the model forecasts a regime change.
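A minimal regime-switching sketch in the same spirit: a two-regime switching-variance model fitted with statsmodels on synthetic returns. The thesis's model additionally couples the regimes to an EGARCH volatility recursion, which this sketch omits.

import numpy as np
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

rng = np.random.default_rng(0)
# Synthetic returns: a calm stretch followed by a volatile one.
returns = np.concatenate([rng.normal(0, 0.5, 500), rng.normal(0, 2.0, 250)])

model = MarkovRegression(returns, k_regimes=2, trend="c", switching_variance=True)
fit = model.fit()
# Smoothed regime probabilities for the last five observations (rows are
# observations, columns are regimes; which regime is "high-vol" is arbitrary).
print(fit.smoothed_marginal_probabilities[-5:, 1])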
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning and Data-Driven Analysis of Thermal Runaway Characteristics in Lithium-Ion Batteries</title>
<link href="https://hdl.handle.net/1721.1/153695" rel="alternate"/>
<author>
<name>Petersen, Julia</name>
</author>
<id>https://hdl.handle.net/1721.1/153695</id>
<updated>2024-03-14T03:04:44Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Machine Learning and Data-Driven Analysis of Thermal Runaway Characteristics in Lithium-Ion Batteries
Petersen, Julia
This study explores thermal runaway in lithium-ion batteries, particularly examining NCM (Nickel Cobalt Manganese) and NCA (Nickel Cobalt Aluminum) chemistries. Utilizing data analysis and machine learning on approximately 400 data points, it gives insights into thermal runaway dynamics, focusing on characteristic parameters such as onset temperature of self-heating (T1), onset temperature of thermal runaway (T2), maximum temperature during thermal runaway (T3) and mass loss. The investigation revealed that NCA cells are more prone to thermal runaway, exhibiting lower initial self-heating temperatures compared to NCM cells. A notable preliminary finding is the potential link between nickel content in battery chemistries and thermal runaway initiation temperatures. Higher nickel compositions, like in NCM811 and various NCA cells, tend to display lower initial self-heating temperatures, possibly indicating faster progression toward thermal runaway. The limited research on how nickel content specifically influences the onset of self-heating during thermal runaway in battery cells underscores the need for new investigations into the cathode’s role and the factors beyond SEI layer decomposition. Addressing this gap, particularly focusing on the impact of nickel content on the critical onset temperature of exothermic heating that initiates thermal runaway, is essential to deepen our understanding of thermal dynamics and improve battery safety and stability.
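A sketch of the kind of data-driven analysis described: an off-the-shelf regressor predicting an onset temperature from cell descriptors. The columns and values below are invented placeholders, not the roughly 400-point dataset.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.DataFrame({
    "nickel_fraction": [0.50, 0.60, 0.80, 0.80, 0.90, 0.33, 0.60, 0.80],
    "capacity_ah":     [2.5, 3.0, 3.2, 4.0, 3.5, 2.0, 2.6, 3.1],
    "T2_celsius":      [215, 205, 185, 180, 170, 230, 200, 182],
})
X, y = df[["nickel_fraction", "capacity_ah"]], df["T2_celsius"]
model = RandomForestRegressor(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=4).mean())   # rough generalization check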
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Compact Non-Volatile Photonic Switching Based on Optical Phase Change Material and Graphene Heater</title>
<link href="https://hdl.handle.net/1721.1/153693" rel="alternate"/>
<author>
<name>Dao, Khoi Phuong</name>
</author>
<id>https://hdl.handle.net/1721.1/153693</id>
<updated>2024-03-14T03:10:16Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Modeling Compact Non-Volatile Photonic Switching Based on Optical Phase Change Material and Graphene Heater
Dao, Khoi Phuong
On-chip photonic switches are building blocks for programmable photonic integrated circuits (PICs), and the integration of phase change materials (PCMs) enables promising designs that are compact, non-volatile, and efficient. However, conventional PCMs such as Ge₂Sb₂Te₅ (GST) introduce significant optical absorption loss, leading to elevated insertion losses in devices. Current approaches compensate for this loss through weak evanescent light-PCM interactions, resulting in larger-footprint devices. A compact non-volatile 2 × 2 switch design is introduced, leveraging optical concentration in slot waveguide modes to significantly enhance the interaction of light with the PCM, thereby realizing a compact, efficient photonic switch. The crystalline-amorphous phase transitions are driven by an integrated single-layer graphene heater, providing high electro-thermal efficiency, low absorption loss, and rapid switching speed. Computational simulations demonstrate reversible phase transitions of Sb₂Se₃, facilitating two working states with crosstalk (CT) down to -24 dB at 1550 nm wavelength and a 0.3 dB insertion loss (IL) bandwidth of more than 55 nm. The proposed photonic switch architecture can constitute a cornerstone for next-generation high-performance reconfigurable photonic circuits.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Soft at the Joints</title>
<link href="https://hdl.handle.net/1721.1/153692" rel="alternate"/>
<author>
<name>Williams, Susan</name>
</author>
<id>https://hdl.handle.net/1721.1/153692</id>
<updated>2024-03-14T04:06:13Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Soft at the Joints
Williams, Susan
A building can be understood entirely through its joints. They can explain gravitational forces, interlacing moments of material application, and environmental conditions. Yet this portion of the design is often relegated to the end of the design process, as a finishing touch. The walls, floors, and roof are meticulously considered, while the spaces between are left blank in order to accommodate the imperfections and unsolved complexities that occur when the idealism of design meets the reality of assembly.

In 1851, Gottfried Semper proclaimed, “The beginning of building coincides with the beginning of textiles.” Over the past hundred and fifty years this statement has moved in and out of relevance as manufacturing, digital tools, design trends, and the roles of designer and builder have changed. Today, architecture’s relationship with textiles is somewhat estranged. Like the joint, textiles appear at the completion of a project’s development, confined to fulfilling an aesthetic role. Yet textiles are materials with unique properties that allow for both high strength and flexibility at the same time. Unlike in architecture, in textiles the interlacing of fabric is the starting point of both design and construction.

This thesis re-envisions methods of architectural design through the logics of textiles: by applying principles of aggregation, establishing a dependent relationship between material and structure, and designing through making at a one-to-one scale. As a result, this project acts as a catalyst for playful tectonic systems, eliminating the boundary between where the joint begins and where it ends.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Speeding up Housing Supply in Hong Kong through Land Readjustment</title>
<link href="https://hdl.handle.net/1721.1/153690" rel="alternate"/>
<author>
<name>Li, Mingyao</name>
</author>
<id>https://hdl.handle.net/1721.1/153690</id>
<updated>2024-03-14T03:43:26Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Speeding up Housing Supply in Hong Kong through Land Readjustment
Li, Mingyao
For over a decade, Hong Kong's housing has been ranked the least affordable globally. “Pricy and cramped” living conditions have increasingly become a pressing social issue concerning the public at large. Explanations of this housing issue are multi-faceted, among which the most fundamental cause is the insufficient supply of developable land. In response to this shortage, the Hong Kong government has passed a controversial bill to develop a large-scale reclamation project costing more than US$50 billion. Nevertheless, a massive amount of land in the rural New Territories remains idle or underutilized due to convoluted history and ownership. The housing crisis may be eased more effectively if solutions can be formulated to make these lands developable.

This thesis focuses on understanding the context, characteristics, and limiting factors affecting the development potential of these rural lands. Correspondingly, a land management mechanism, Land Readjustment, is introduced as a feasible tool to overcome major obstacles.

Chapter I, Hong Kong: Calling for a Solution to the Land Supply Problem, introduces current land and housing supply issues and elaborates on how different land supply mechanisms have failed to create sufficient land for housing development. The root cause is then explained on a theoretical level: bilateral monopoly and the constituency effect are the main predicaments paralyzing the Hong Kong land supply system. A practical solution will require breaking the gridlock inherent in current power dynamics.

Chapter II, Land Readjustment: A Possible Solution, brings forth Land Readjustment as a potential tool to address the land supply problem. As Land Readjustment is a relatively unfamiliar concept in the U.S., a brief introduction explaining its rationale is presented. Embedded in its characteristics are the benefits it can realize and the objectives it can achieve, which are valuable because they align with the major obstacles the government faces in developing rural land in Hong Kong. As Land Readjustment does not directly lead to housing affordability, a separate discussion is dedicated to different ways to create affordable housing within the framework of Land Readjustment.

Chapter III, Applying Land Readjustment in Hong Kong, focuses on drawing a tighter connection between the problem and the solution. The first evaluation is whether Hong Kong can meet all the pre-conditions to qualify for implementation of Land Readjustment. Second, ex-post performance evaluation frameworks are adapted to an ex-ante assessment of whether a satisfactory outcome could be achieved through Land Readjustment. Third, through international case studies, more practical mechanisms are incorporated to generate a bespoke proposal addressing the unique conditions of Hong Kong.

To summarize, applying Land Readjustment to speed up the housing supply in Hong Kong is a feasible proposal. It can not only promote private participation to expedite land development with equitable sharing of costs and benefits, but also help untangle the long-lasting impasse among the Rural Committee, private developers, and the government against the backdrop of criticisms of real estate hegemony. Most importantly, the development potential of the rural New Territories can be unleashed, and Hong Kong youth may see a glimmer of hope for owning their first home sooner and with better quality.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Profit Real Estate: Financial Strategies for Mission and Impact</title>
<link href="https://hdl.handle.net/1721.1/153687" rel="alternate"/>
<author>
<name>Cha, Yoon</name>
</author>
<id>https://hdl.handle.net/1721.1/153687</id>
<updated>2024-03-14T03:31:53Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Non-Profit Real Estate: Financial Strategies for Mission and Impact
Cha, Yoon
This thesis examines how land-endowed nonprofits can optimize their assets to better serve their mission and unlock value for the communities they serve. After exploring various real estate strategies and partnership structures among nonprofit, for-profit, private, and public entities related to nonprofit land use, the thesis applies its lessons to a detailed case study of the Cambridge Young Women’s Christian Association to inform short- and long-term real estate policies that complement and maximize the organization’s mission impact.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probing the Role of Amines in Aqueous Electrochemical Reduction of Captured-state CO₂</title>
<link href="https://hdl.handle.net/1721.1/153684" rel="alternate"/>
<author>
<name>Bernhardt, Elizabeth M.</name>
</author>
<id>https://hdl.handle.net/1721.1/153684</id>
<updated>2024-03-14T03:40:07Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Probing the Role of Amines in Aqueous Electrochemical Reduction of Captured-state CO₂
Bernhardt, Elizabeth M.
Integrating CO₂ capture and CO₂ conversion into a single reactor presents an opportunity to power the combined process with renewable electricity. The integration of these two historically separate technologies establishes a large system parameter space, which introduces many handles for system optimization but also presents many challenges. As the capture medium takes on the dual role of sorbent and electrolyte, a complex landscape of potential reaction pathways emerges. Before these integrated systems can be engineered to perform at industrial scales, we must better understand the speciation and characteristics of the capture medium, as well as its impact on transport and interfacial properties. The integration of CO₂ capture and conversion has primarily been investigated in aqueous, amine-based solutions to draw on the maturity of amine chemistry in CO₂ capture. However, when subjected to reducing currents, the aqueous solvent provides a pathway for parasitic hydrogen evolution. Additionally, amines become ion pairs upon uptake of CO₂, allowing them to act as both reactant and supporting electrolyte. We approach the complexity of these systems by investigating the influence of amine choice on electrochemical performance. Primarily, we explore how amine physicochemical properties, namely steric hindrance and pKₐ, impact speciation, product selectivity, and cell performance. We chose a subset of primary alkylamines with varied steric hindrance and pKₐ and evaluated each on the basis of Faradaic efficiency, partial current density to reduced products, and the dynamics of product formation on Ag-based electrocatalysts. Through these measurements, we elucidate trends in the competition between hydrogen evolution and carbon monoxide formation as a function of amine pKₐ and steric hindrance, in order to inform the choice of sorbent-electrolyte for industrially integrated amine-based CO₂ capture and conversion.
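As a worked sketch of the Faradaic efficiency metric named above (the values are invented for the example):

F = 96485.0   # Faraday constant, C/mol

def faradaic_efficiency(moles_product, n_electrons, total_charge_c):
    # Fraction of the passed charge that formed the given product.
    return moles_product * n_electrons * F / total_charge_c

# CO from CO2 reduction is a 2-electron product: 40 umol of CO after 10 C
# of charge gives a Faradaic efficiency of about 0.77 (77 percent).
print(faradaic_efficiency(40e-6, 2, 10.0))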
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bayesian Inference and Experimental Design of Combustion Kinetic Models</title>
<link href="https://hdl.handle.net/1721.1/153682" rel="alternate"/>
<author>
<name>Chen, Huaibo</name>
</author>
<id>https://hdl.handle.net/1721.1/153682</id>
<updated>2024-03-14T03:36:05Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Bayesian Inference and Experimental Design of Combustion Kinetic Models
Chen, Huaibo
In combustion kinetic model calibration, researchers usually use experimental data to reduce the uncertainty of kinetic parameters, and Bayesian inference is the most common approach for this inverse calibration. This thesis explores two interconnected aspects of Bayesian approaches in the context of combustion kinetic models: how to utilize high-resolution species profiles in Bayesian inference and how to identify the most informative experimental conditions at which to collect data. In the first part, we investigated the impact of the effective independent-data number and target selection on Bayesian inference of kinetic parameters using species time-histories obtained from shock tube experiments. Neural networks serve as response surfaces. Maximum a posteriori estimation and Markov chain Monte Carlo sampling are employed to determine optimal parameters as well as their uncertainty. Three optimization strategies are employed: utilizing the entire species time-history curve with effective independent-data numbers of 1 (C-1) and 160 (C-160), and using only the last point of each curve (LastP). All three strategies yielded improved models that fit the experimental data better. Comparing C-1 with C-160 reveals that increasing the number of targets improves prediction accuracy but may lead to overtuning. Comparing C-1 with LastP, LastP exhibits comparable or slightly better agreement with measurements, suggesting that focusing on critical points is effective for point estimation. However, C-1 shows different posterior uncertainty from LastP in both parameters and predictions, despite their similarity in point estimation.&#13;
&#13;
Experimental data obtained at different conditions (e.g., pressure, temperature, equivalence ratio) are not equally informative when used to calibrate kinetic parameters. Experimental design therefore becomes an important topic in combustion kinetics, where the most informative condition can be identified algorithmically. In the second part, we propose an efficient Bayesian experimental design algorithm that integrates Laplace approximation-based experimental design with gradient-based design optimization, employing sophisticated neural network response surfaces to map kinetic parameters to target predictions over a wide range of thermodynamic conditions. The algorithm demonstrates efficiency and robustness against local maxima. Additionally, to meet various needs in kinetic experiments, we develop various experimental design targets based on the posterior covariance matrix, including model-oriented, parameter-oriented, target-oriented, and parallel experimental design. The proposed method, utilizing a full posterior covariance matrix without fixing any parameter of insensitive reactions, achieves significant acceleration compared to previous methods, demonstrating effectiveness in reducing parameter and target uncertainty as well as designing multiple experiments simultaneously.
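To make the calibration machinery concrete, here is a minimal random-walk Metropolis sketch in Python; the quadratic surrogate stands in for the neural-network response surface, and every name and value is a hypothetical placeholder rather than anything taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(theta):
    # placeholder for the neural-network response surface
    return 2.0 * theta + 0.5

def log_post(theta, y_obs, sigma=0.1):
    # Gaussian likelihood around the surrogate plus a N(0, 1) prior
    resid = (y_obs - surrogate(theta)) / sigma
    return -0.5 * resid**2 - 0.5 * theta**2

theta, y_obs, chain = 0.0, 1.3, []
lp = log_post(theta, y_obs)
for _ in range(5000):
    prop = theta + 0.2 * rng.standard_normal()   # random-walk proposal
    lp_prop = log_post(prop, y_obs)
    if lp_prop - lp > np.log(rng.random()):      # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)
print("posterior mean:", np.mean(chain[1000:]))  # discard burn-in samples
```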
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Machine Learning Approach to Improve Diameter Control in Desktop Fiber Extrusion Processes</title>
<link href="https://hdl.handle.net/1721.1/153677" rel="alternate"/>
<author>
<name>Patrick, Keeghan J.</name>
</author>
<id>https://hdl.handle.net/1721.1/153677</id>
<updated>2024-03-14T03:01:01Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">A Machine Learning Approach to Improve Diameter Control in Desktop Fiber Extrusion Processes
Patrick, Keeghan J.
A machine learning approach to controlling the diameter of a desktop fiber extrusion process with a PLC is developed and evaluated against the performance of PID control. The deep reinforcement learning model learns to control the output diameter of the process for a given target without any knowledge of the system dynamics, after being trained on hours of data recorded from an open-loop control process. After training, the model can receive sensory information from the PLC, calculate an action based on the desired target, and send that action back to the PLC to execute.
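A minimal sketch of that sense-act loop, assuming a hypothetical PLC interface and a hand-written stand-in for the learned policy (neither is the thesis implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

class StubPLC:
    """Hypothetical PLC interface; real I/O would go over an industrial protocol."""
    def read_diameter_mm(self):
        return 1.75 + 0.05 * rng.standard_normal()
    def set_spool_speed(self, rpm):
        print(f"PLC command: spool speed = {rpm:.1f} rpm")

def policy(diameter_mm, target_mm, base_rpm=30.0, gain=200.0):
    # stand-in for the trained RL policy: pull faster when the fiber runs thick
    return base_rpm + gain * (diameter_mm - target_mm)

plc, target = StubPLC(), 1.75
for _ in range(3):
    d = plc.read_diameter_mm()               # sensory information from the PLC
    plc.set_spool_speed(policy(d, target))   # action sent back to the PLC
```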
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How actors and groups in the family business system influence innovation in the family business: an analytical framework</title>
<link href="https://hdl.handle.net/1721.1/153672" rel="alternate"/>
<author>
<name>Vanparys, Thierry F.</name>
</author>
<id>https://hdl.handle.net/1721.1/153672</id>
<updated>2024-03-14T04:02:09Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">How actors and groups in the family business system influence innovation in the family business: an analytical framework
Vanparys, Thierry F.
Innovation is understood to be vital to the prosperity and survival of family businesses, and there is great value for practitioners, advisors, researchers, and academics in understanding, in a clear and practical way, how innovation occurs in family businesses. I provide a framework that aids in shedding light on how and by whom innovation may be enacted, promoted, and supported in the family business system.&#13;
&#13;
The family business literature offers clear and practical models explaining that the family business must be understood in the context of the family business system, which includes the business organization, the owners of the business, and the family that has ownership control of the business. Frameworks also explain how this system may be affected by how a family in business changes over time. These are demonstrated by the “Three-Circle Model” and the “Three-Dimension Developmental Model” of the family business, respectively. The literature on innovation is extensive, although, as a body, much of it is confusing and unfortunately impractical for consistent application across the family business system. Recognising this, I focus the discussion on two taxonomies drawn from the literature, chosen for their accessibility and applicability and the crispness with which they allow us to talk about innovation. I then focus on one taxonomy and connect it back to the actors and groups in the family business system to establish the analytical framework. I believe the latter, with its practical, actionable orientation, to be a valuable addition to the literature.
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Robust Inventory Induction under Demand Uncertainty</title>
<link href="https://hdl.handle.net/1721.1/153669" rel="alternate"/>
<author>
<name>Robin, Arnaud</name>
</author>
<id>https://hdl.handle.net/1721.1/153669</id>
<updated>2024-03-14T03:10:33Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">Robust Inventory Induction under Demand Uncertainty
Robin, Arnaud
E-commerce retailers need to meet growing demand and rising customer expectations while efficiently managing operating costs across global supply chains. This thesis addresses the tactical problem of inventory induction under demand uncertainty, which involves determining where to position incoming inventory to serve future customer demand. We formulate the problem via two-stage adaptive robust optimization with right-hand-side uncertainty. First-stage variables characterize initial induction and positioning, and second-stage variables capture subsequent rebalancing and order fulfillment. Demand is modeled via an uncertainty set based on an aggregate forecast (at the nation-wide and monthly level) to protect against spatiotemporal deviations (at the local and daily level). We develop a Benders decomposition algorithm, iterating between a lower-bounding master problem and an upper-bounding subproblem. We accelerate the Quadratically Constrained Quadratic Program (QCQP) subproblem with primal heuristics and dual-bounding strategies, including a novel simplicial relaxation. We also propose a cut-learning strategy that warm-starts the Benders decomposition scheme from offline instances. We conduct extensive computational experiments, leveraging an experimental setup built on real-world data and developed in collaboration with a major e-commerce provider. From a computational standpoint, results show the benefits of the acceleration strategies for the subproblem and the master problem, which, together, outperform state-of-the-art benchmarks in terms of optimality gaps, solution quality, and computational times. From a practical standpoint, results suggest that the adaptive robust solution can provide significant benefits over the deterministic benchmark on average, reducing operating costs by up to 5-10% and improving delivery speeds by up to 1%.
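The iteration can be sketched generically; the skeleton below is a schematic of a Benders loop under stated assumptions (placeholder solve_master and solve_subproblem callables), not the thesis's actual formulation or acceleration scheme.

```python
def benders_loop(solve_master, solve_subproblem, tol=1e-4, max_iter=100):
    """Generic Benders skeleton: lower-bounding master, upper-bounding subproblem.

    solve_master(cuts)   returns (first_stage_solution, lower_bound)
    solve_subproblem(x)  returns (upper_bound, new_cut)
    """
    cuts, best_ub = [], float("inf")
    for it in range(max_iter):
        x, lb = solve_master(cuts)       # master with all cuts found so far
        ub, cut = solve_subproblem(x)    # worst-case recourse cost at x
        best_ub = min(best_ub, ub)
        if tol >= best_ub - lb:          # optimality gap closed
            return x, lb, best_ub, it
        cuts.append(cut)                 # tighten the master and repeat
    return x, lb, best_ub, max_iter
```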
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>T2 Characterization of Oil-In-Water Emulsions for NMR Sensor Applications</title>
<link href="https://hdl.handle.net/1721.1/153667" rel="alternate"/>
<author>
<name>Zammit, Alexa S.</name>
</author>
<id>https://hdl.handle.net/1721.1/153667</id>
<updated>2024-03-14T03:32:27Z</updated>
<published>2024-02-01T00:00:00Z</published>
<summary type="text">T2 Characterization of Oil-In-Water Emulsions for NMR Sensor Applications
Zammit, Alexa S.
Fluid status assessment is an essential aspect of healthcare with implications in chronic conditions such as renal disease and congestive heart failure. Current fluid status determination techniques lack quantitative methods and standards. Our research explores a point-of-care approach through a portable single-sided magnetic resonance (MR) sensor. We are developing a more accurate and clinically relevant hydration metric by measuring localized skeletal muscle. Phantoms are used as stand-ins for a human subject to calibrate and ensure system functionality. The microstructure of an emulsion also mimics the multiple compartments of tissue, such as the intra- and extracellular volumes of muscle and adipose tissue. We aim to use oil-in-water emulsions as phantoms to ensure device reproducibility and to determine how much the scale of the microstructure affects relaxation behavior. A quantitative understanding of the length scales appropriate for muscle and adipose tissue will help determine the reliability of our hydration measurement.
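One plausible way to separate two such compartments is a biexponential fit to the echo-train decay; the sketch below uses synthetic data with illustrative T2 values, not measurements from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, t2_1, a2, t2_2):
    # two-compartment T2 decay, e.g., oil and water in an emulsion phantom
    return a1 * np.exp(-t / t2_1) + a2 * np.exp(-t / t2_2)

t = np.linspace(0.001, 0.5, 100)  # echo times, s
rng = np.random.default_rng(1)
signal = biexp(t, 0.6, 0.04, 0.4, 0.20) + 0.005 * rng.standard_normal(t.size)

popt, _ = curve_fit(biexp, t, signal, p0=(0.5, 0.05, 0.5, 0.15))
print("fitted T2 values (s):", popt[1], popt[3])
```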
</summary>
<dc:date>2024-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Commercial production of carbon free chromium or ferrochrome by leaching from the ore and electrolysis</title>
<link href="https://hdl.handle.net/1721.1/153595" rel="alternate"/>
<author>
<name>Crafts, Walter.</name>
</author>
<id>https://hdl.handle.net/1721.1/153595</id>
<updated>2025-10-30T15:50:06Z</updated>
<published>1926-01-01T00:00:00Z</published>
<summary type="text">Commercial production of carbon free chromium or ferrochrome by leaching from the ore and electrolysis
Crafts, Walter.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy, 1926; Includes bibliographical references (leaf 30).
</summary>
<dc:date>1926-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Air conditioning of railway passenger cars</title>
<link href="https://hdl.handle.net/1721.1/153593" rel="alternate"/>
<author>
<name>Steenkamp, W. L.</name>
</author>
<id>https://hdl.handle.net/1721.1/153593</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1939-01-01T00:00:00Z</published>
<summary type="text">Air conditioning of railway passenger cars
Steenkamp, W. L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1939; Includes bibliographical references (leaves 184-188).
</summary>
<dc:date>1939-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trapping and discharge of megavolt electrons in solid dielectrics</title>
<link href="https://hdl.handle.net/1721.1/153592" rel="alternate"/>
<author>
<name>Chang, William Wai.</name>
</author>
<id>https://hdl.handle.net/1721.1/153592</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">Trapping and discharge of megavolt electrons in solid dielectrics
Chang, William Wai.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1963; Includes bibliographical references (leaf 58).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A strategic analysis for a Chilean steel castings firm</title>
<link href="https://hdl.handle.net/1721.1/153590" rel="alternate"/>
<author>
<name>Armas, Juan Pablo.</name>
</author>
<id>https://hdl.handle.net/1721.1/153590</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">A strategic analysis for a Chilean steel castings firm
Armas, Juan Pablo.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1992; Includes bibliographical references (leaf 98).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The transfer of managerial skills through United States enterprises in the developing nations.</title>
<link href="https://hdl.handle.net/1721.1/153587" rel="alternate"/>
<author>
<name>Khalifa, Ahmes Mohamed Said.</name>
</author>
<id>https://hdl.handle.net/1721.1/153587</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1968-01-01T00:00:00Z</published>
<summary type="text">The transfer of managerial skills through United States enterprises in the developing nations.
Khalifa, Ahmes Mohamed Said.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1968; Bibliography: leaves 97-98.
</summary>
<dc:date>1968-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Outsourcing railway engineering</title>
<link href="https://hdl.handle.net/1721.1/153585" rel="alternate"/>
<author>
<name>Heavin, Jerry W.
            (Jerry Wayne)</name>
</author>
<id>https://hdl.handle.net/1721.1/153585</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1988-01-01T00:00:00Z</published>
<summary type="text">Outsourcing railway engineering
Heavin, Jerry W.
            (Jerry Wayne)
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1988; Bibliography: leaves 162-163.
</summary>
<dc:date>1988-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PRMT5 Inhibitors in Merkel Cell Carcinoma</title>
<link href="https://hdl.handle.net/1721.1/153471" rel="alternate"/>
<author>
<name>Higgins, Kathleen Whitmore</name>
</author>
<id>https://hdl.handle.net/1721.1/153471</id>
<updated>2024-02-09T03:08:01Z</updated>
<published>2021-09-01T00:00:00Z</published>
<summary type="text">PRMT5 Inhibitors in Merkel Cell Carcinoma
Higgins, Kathleen Whitmore
Merkel Cell Carcinoma (MCC) is a rare neuroendocrine skin cancer. Treatment options are limited, and they are largely based on MCC’s similarity to other cancers, rather than original research. Many of these treatments have low efficacy and significant side effects, and the overall prognosis remains bleak. In this thesis, I propose a new therapeutic strategy for MCC based on chemical inhibition of protein arginine methyltransferase 5 (PRMT5). PRMT5 inhibitors are already being tested in a variety of other solid and liquid tumors to good effect. Our data suggest that PRMT5 inhibitors may be effective in treating a specific subtype of MCC defined by a viral driver and wildtype p53. Treatment inhibits growth in vitro and results in large changes in alternative splicing and more subtle changes in oxidative metabolism. Furthermore, we observe differential alternative splicing of the p53-regulator MDM4, suggesting a possible mechanism for the drug’s greater efficacy in p53-wildtype cell lines.
</summary>
<dc:date>2021-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Creating a New Malaria Vaccine Design that uses a Blood Stage P. falciparum Chassis for Non-Blood Stage Antigen Presentation</title>
<link href="https://hdl.handle.net/1721.1/153463" rel="alternate"/>
<author>
<name>Parker, Shelbi Nicole</name>
</author>
<id>https://hdl.handle.net/1721.1/153463</id>
<updated>2024-02-09T03:47:55Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Creating a New Malaria Vaccine Design that uses a Blood Stage P. falciparum Chassis for Non-Blood Stage Antigen Presentation
Parker, Shelbi Nicole
Malaria is a global disease that affects millions annually, and the complex life cycle of the Plasmodium species that cause malaria results in increasing drug resistance and poor vaccine efficacy. Current vaccine designs focus on a single stage in the parasite life cycle, and antibody responses are inefficient in offering protection, leading to “malaria rebound” as the lack of immune response to multiple stages of the life cycle results in case numbers returning to their pre-intervention levels. In this work, we utilize a blood stage parasite to present infection and transmission stage antigens. Plasmids using the conditional translation repressor system TetR-DOZI were created, and transgenic parasites that express the scaffold protein eTRAMP4 fused to either CSP or P25 were generated. We assessed the transgenic parasites for growth defects, proper fusion length, and localization to the parasitophorous vacuolar membrane. We also removed parasites from host red blood cells and examined two purification methods in the pipeline of developing a pure, intact culture of transgenic parasites. The methods and results of this work set the stage for a new malaria vaccine design that has the potential to fill the gap left by current vaccine technologies.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Trailer-on-flat-car service in the perspective of competition for freight traffic</title>
<link href="https://hdl.handle.net/1721.1/153441" rel="alternate"/>
<author>
<name>Davis, John Christy.</name>
</author>
<id>https://hdl.handle.net/1721.1/153441</id>
<updated>2026-02-06T05:16:59Z</updated>
<published>1956-01-01T00:00:00Z</published>
<summary type="text">Trailer-on-flat-car service in the perspective of competition for freight traffic
Davis, John Christy.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1956; Bibliography: leaves 111-116.
</summary>
<dc:date>1956-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Driving the Future of Long-Haul Trucking: Realizing the Potential of Battery Electric Vehicles through an Analysis of Financial and Environmental Impacts</title>
<link href="https://hdl.handle.net/1721.1/153407" rel="alternate"/>
<author>
<name>Chehrazi, Natalie</name>
</author>
<id>https://hdl.handle.net/1721.1/153407</id>
<updated>2024-01-25T03:39:05Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Driving the Future of Long-Haul Trucking: Realizing the Potential of Battery Electric Vehicles through an Analysis of Financial and Environmental Impacts
Chehrazi, Natalie
This thesis examines the transition to battery electric vehicles (BEVs) for long-haul trucking, using system dynamics modeling, financial impact modeling, and environmental impact modeling, and looks across a broad range of possible future scenarios that could impact the viability of BEV use in long-haul trucks. System dynamics modeling, with causal loops, is used to identify key factors influencing adoption rates. Results show that battery capabilities, the total cost of ownership, and feedback loops are critical considerations in increasing BEV adoption. Environmental impact analysis demonstrates that transitioning to BEVs can lead to significant and immediate reductions in emissions. If the transition occurs now, with current development, there would be an immediate 37% reduction in GHG emissions and an 85% reduction in all direct emissions from air pollutants, not including SO₂ emissions. If the medium or aggressive development scenarios outlined in this paper occur, there would be a 60% reduction in GHG emissions and a 90% reduction in all direct emissions from air pollutants, not including SO₂ emissions.&#13;
&#13;
These reductions could be vital in addressing emissions in this sector and helping curb climate change. Payload impact analysis demonstrates that the additional battery weight in a BEV long-haul truck would not be an issue for 93% of long-haul trucks. Financial impact analysis indicates that if charging capabilities increase to 500kW or above, BEVs are a better investment across all economic scenarios over the years of ownership, driven by lower operating costs. If no further development in charging capability occurs, the economic benefits of transitioning are subject to market conditions. Regardless of charging station capability development, if the price of diesel fuel remains above US$3.65 per gallon, BEVs are the preferred investment. Additionally, comprehensive net present value (NPV) analysis is used to demonstrate whether BEV long-haul trucks are a good investment for both the trucking industry and partner companies depending on various economic and development speed scenarios.&#13;
&#13;
In current economic scenarios with no further development, BEV long-haul trucks are a good investment for both the trucking industry and partner companies, with net financial gains of $59K and a payback period of 5 years, or $77K and a payback period of 4 years, respectively (a simplified sketch of this NPV and payback arithmetic appears at the end of this summary). It is also significant to note that these calculations use end-consumer transportation electricity prices and do not include subsidies or incentives. By sourcing energy differently and utilizing renewable energy sources, companies can substantially decrease operating costs, making the transition to BEVs even more financially viable than presented. With subsidies and incentives in place, the case for BEV long-haul trucks is further strengthened. The thesis also includes a specific analysis of the Tesla semi-truck with a fuel economy of 19.8 MPGe. This analysis revealed that, regardless of charger development, the Tesla semi-truck would be a better investment than an ICE long-haul truck for both the trucking industry and partner companies.&#13;
&#13;
Additionally, the analysis in this thesis suggests that there are significant benefits to increasing charging capabilities to 500kW, which would reduce charging downtime from 4 hours to approximately 2 to 2.5 hours per full charge. Even with the significant downtime, such an increase in charging capabilities would make the BEV long-haul truck the better investment in all feasible projected economic scenarios. The thesis concludes that the case for BEV long-haul trucks is clear, and there is significant potential to accelerate and capitalize on the transition to BEVs in the long-haul trucking industry.
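As referenced above, here is a minimal sketch of the NPV and payback arithmetic behind such comparisons; the cash flows are invented and do not reproduce the thesis's inputs.

```python
def npv(rate, cashflows):
    # discounted sum of a cash-flow stream, year 0 first
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_years(cashflows):
    # first year in which cumulative cash flow turns non-negative
    total = 0.0
    for year, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return year
    return None

# toy case: extra BEV purchase cost up front, then annual operating savings
flows = [-80_000] + [20_000] * 6
print("NPV at 7%:", round(npv(0.07, flows)))      # positive: BEV preferred
print("payback (years):", payback_years(flows))   # 4 in this toy case
```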
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The development of an automatic curve-following mechanism</title>
<link href="https://hdl.handle.net/1721.1/153376" rel="alternate"/>
<author>
<name>Traver, Harold A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153376</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1933-01-01T00:00:00Z</published>
<summary type="text">The development of an automatic curve-following mechanism
Traver, Harold A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1933
</summary>
<dc:date>1933-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Calibration of standard stars for planetary reflectivity studies.</title>
<link href="https://hdl.handle.net/1721.1/153373" rel="alternate"/>
<author>
<name>Elias, Jonathan H.</name>
</author>
<id>https://hdl.handle.net/1721.1/153373</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Calibration of standard stars for planetary reflectivity studies.
Elias, Jonathan H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Earth and Planetary Science, 1972; Bibliography: leaves 89-93.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Vibrational characteristics of building frames</title>
<link href="https://hdl.handle.net/1721.1/153372" rel="alternate"/>
<author>
<name>Haba, Mohamed.</name>
</author>
<author>
<name>Dloomy, Naim Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/153372</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1946-01-01T00:00:00Z</published>
<summary type="text">Vibrational characteristics of building frames
Haba, Mohamed.; Dloomy, Naim Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil and Sanitary Engineering, 1946; Bibliography: leaf 74.
</summary>
<dc:date>1946-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of the scattered-wave cluster method to zinc sulfide.</title>
<link href="https://hdl.handle.net/1721.1/153371" rel="alternate"/>
<author>
<name>Kim, Hwasoo Park.</name>
</author>
<id>https://hdl.handle.net/1721.1/153371</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Application of the scattered-wave cluster method to zinc sulfide.
Kim, Hwasoo Park.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Studies on the biosynthesis and structure of the acidic brain protein.</title>
<link href="https://hdl.handle.net/1721.1/153369" rel="alternate"/>
<author>
<name>King, William Francis.</name>
</author>
<id>https://hdl.handle.net/1721.1/153369</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Studies on the biosynthesis and structure of the acidic brain protein.
King, William Francis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Biology, 1972; Bibliography: leaves 32-33.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods of quantitative analysis for clinical phonoangiography.</title>
<link href="https://hdl.handle.net/1721.1/153367" rel="alternate"/>
<author>
<name>Klitzner, Thomas Samuel.</name>
</author>
<id>https://hdl.handle.net/1721.1/153367</id>
<updated>2025-10-30T15:50:02Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Methods of quantitative analysis for clinical phonoangiography.
Klitzner, Thomas Samuel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1972; Bibliography: leaves 66-67.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Actor machine architecture.</title>
<link href="https://hdl.handle.net/1721.1/153364" rel="alternate"/>
<author>
<name>Steiger, Richard John.</name>
</author>
<id>https://hdl.handle.net/1721.1/153364</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1974-01-01T00:00:00Z</published>
<summary type="text">Actor machine architecture.
Steiger, Richard John.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1974; Bibliography: leaves 183-184.
</summary>
<dc:date>1974-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A control system for isolated muscle experiments.</title>
<link href="https://hdl.handle.net/1721.1/153363" rel="alternate"/>
<author>
<name>Kleinbaum, Jerry Israel.</name>
</author>
<id>https://hdl.handle.net/1721.1/153363</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">A control system for isolated muscle experiments.
Kleinbaum, Jerry Israel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Bibliography: leaf 93.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-noise broadband I. F. amplifiers for radiometric receivers.</title>
<link href="https://hdl.handle.net/1721.1/153362" rel="alternate"/>
<author>
<name>Kjartansson, Vilhjalmur Thor.</name>
</author>
<id>https://hdl.handle.net/1721.1/153362</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Low-noise broadband I. F. amplifiers for radiometric receivers.
Kjartansson, Vilhjalmur Thor.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A simulated forecast of the joint frequency distribution of any four-magazine campaign using readership survey data.</title>
<link href="https://hdl.handle.net/1721.1/153361" rel="alternate"/>
<author>
<name>Klapfish, Maurice S.</name>
</author>
<id>https://hdl.handle.net/1721.1/153361</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">A simulated forecast of the joint frequency distribution of any four-magazine campaign using readership survey data.
Klapfish, Maurice S.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Maximizing communications in an R and D laboratory through computerized relative allocation of facilities.</title>
<link href="https://hdl.handle.net/1721.1/153360" rel="alternate"/>
<author>
<name>Klurfeld, Laurence Franklin.</name>
</author>
<id>https://hdl.handle.net/1721.1/153360</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Maximizing communications in an R and D laboratory through computerized relative allocation of facilities.
Klurfeld, Laurence Franklin.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1972; Bibliography: leaf 44.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The lease - purchase phenomenon in the capital goods market.</title>
<link href="https://hdl.handle.net/1721.1/153359" rel="alternate"/>
<author>
<name>Kirby, Marvin Goodloe.</name>
</author>
<id>https://hdl.handle.net/1721.1/153359</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">The lease - purchase phenomenon in the capital goods market.
Kirby, Marvin Goodloe.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1972; Bibliography: leaves 110-111.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mathematical model for screening storm water control alternatives.</title>
<link href="https://hdl.handle.net/1721.1/153358" rel="alternate"/>
<author>
<name>Kirshen, Paul H.</name>
</author>
<id>https://hdl.handle.net/1721.1/153358</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Mathematical model for screening storm water control alternatives.
Kirshen, Paul H.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1972; Bibliography: leaves 120-123.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating queries concurrently in a shared database system</title>
<link href="https://hdl.handle.net/1721.1/153354" rel="alternate"/>
<author>
<name>Danberg, Seymour A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153354</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1978-01-01T00:00:00Z</published>
<summary type="text">Evaluating queries concurrently in a shared database system
Danberg, Seymour A.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1978; Bibliography: leaves 87-89.
</summary>
<dc:date>1978-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis of technology trajectories for industrial applications of the indirect dimensional acquisition industry</title>
<link href="https://hdl.handle.net/1721.1/153353" rel="alternate"/>
<author>
<name>Indest, William L.
            (William Logan),
            1963-</name>
</author>
<id>https://hdl.handle.net/1721.1/153353</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1999-01-01T00:00:00Z</published>
<summary type="text">An analysis of technology trajectories for industrial applications of the indirect dimensional acquisition industry
Indest, William L.
            (William Logan),
            1963-
Thesis: S.M.M.O.T., Massachusetts Institute of Technology, Sloan School of Management, Management of Technology Program, 1999; Vita.; Includes bibliographical references.
</summary>
<dc:date>1999-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Subglacial Hydrology in the Himalayas</title>
<link href="https://hdl.handle.net/1721.1/153347" rel="alternate"/>
<author>
<name>Narayanan, Neosha Gupta</name>
</author>
<id>https://hdl.handle.net/1721.1/153347</id>
<updated>2024-01-17T03:31:37Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Modeling Subglacial Hydrology in the Himalayas
Narayanan, Neosha Gupta
The snowpack and glaciers of the Himalaya-Karakoram range feed several major river systems in Asia which provide water to over a billion people. Glacial retreat, glacial lake outburst floods (GLOFs), surge behavior, and glacial ice mass balance are all likely strongly affected by subglacial hydrology. Unfortunately, little is known about Himalayan glaciers due to their remoteness and the danger of doing field work there. Recent advances in subglacial hydrological modeling may allow us to shed more light on subglacial processes that lead to changes in ice mass balance and glacial lake flooding. In this master's thesis, we present the first application of the SHAKTI subglacial hydrology model to a Himalayan glacier. We model the subglacial drainage network of Shishper Glacier, located in Gilgit-Baltistan, Pakistan, to understand its seasonal evolution and history of surges and GLOFs. Our results show that Shishper's subglacial system follows a seasonal pattern similar to previously observed and modeled subglacial systems. We find that a central channel persists through the winter and serves as the basis for the subglacial drainage system throughout the melt season. We also investigate the 2017-2019 surge of Shishper Glacier and find that subglacial hydrology, while likely an important component of surging, cannot provide a standalone explanation for surges. This work serves as a nucleus for future subglacial hydrology modeling in the Himalayas and provides a new framework for studying the effects of climate change on glacier dynamics, water availability, and glacier-related hazards in the Himalaya-Karakoram (H-K) region.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-throughput Photodegradation of Plastics</title>
<link href="https://hdl.handle.net/1721.1/153345" rel="alternate"/>
<author>
<name>Frankson, Alexis</name>
</author>
<id>https://hdl.handle.net/1721.1/153345</id>
<updated>2024-01-17T03:38:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">High-throughput Photodegradation of Plastics
Frankson, Alexis
Plastic is a critical resource in the modern world, but an emphasis on durability in design coupled with the widespread use of plastic products has led to significant accumulation of plastic waste in the environment. It is imperative that new chemistries are discovered to produce polymers with the correct properties to meet consumer demands, but with a finite and well understood lifetime in the environment. This thesis aims to evaluate the rate of abiotic degradation of different plastics using a high-throughput photo-reactor, to better understand how quickly plastics degrade under ultraviolet light exposure as a function of polymer type and properties. The research findings suggest that the photo-degradability of polymers is impacted by the presence of chromophores and of impurities from manufacturing. The experiments were performed on a small range of the most common consumer plastics, but the methodology developed can be used to design more efficient degradation experiments. Continued research into the factors impacting degradation in laboratory settings and in the natural environment is needed to promote the development of more environmentally sustainable polymers.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Looking for Pirdoudan: The Past, Present, and Future of Mining in Armenia</title>
<link href="https://hdl.handle.net/1721.1/153344" rel="alternate"/>
<author>
<name>Vosgueritchian, Sarine Gacia</name>
</author>
<id>https://hdl.handle.net/1721.1/153344</id>
<updated>2024-01-17T03:43:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Looking for Pirdoudan: The Past, Present, and Future of Mining in Armenia
Vosgueritchian, Sarine Gacia
In our anthropogenic age, data and memory accumulate and decay faster than we can recall. Depiction of history is usually political and hierarchical, emphasizing chosen moments to build narratives, but time has shown us how that can lead to inaccurate accounts of the past. Historians and researchers constantly undo these narratives by consulting different forms of memory from collective to individual, using physical and virtual artifacts. With the accelerating global climate crisis, it is imperative to project further into the future while remaining deeply rooted in the histories and futures of the past. To do so, we need to understand the processes of change that have led to the construction of our current reality. But what happens if the archive is constantly deteriorating?&#13;
&#13;
Set in what is known today as the mining town of Kajaran, Looking for Pirdoudan uses the medium of film and textual essay to piece together and reinterpret the processes of change which have led to the disappearance of Mount Pirdoudan after large deposits of copper and molybdenum were discovered in the 19th century. The extraction of the geological layers of Pirdoudan has effectively erased millennia of memory retained by the earth. While geological studies have allowed us to date these layers and put meaning to the accumulations, scattered archival records and media are today’s most readily available material that allow us to piece together the narratives of our past and present moment. That said, archives and data don’t tell a story on their own. A seeker from 2086 takes on the task of weaving an alternative history of Pirdoudan. Critical fabulation is employed, not only to visualize the gaps in our knowledge, but also to project a post-mine future of Kajaran based on a deep understanding and interpretation of the past. Kajaran is rebranded as an ideal ecological city attempting to repair its extractive legacy, but even with the best intentions, driven by technological advancements which are meant to reverse the anthropogenic footprint on the land, a new cycle of destruction begins.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Estimating and Optimizing Throughput in an Aluminum Rolling Mill Using Capacity Modeling and Optimization Techniques</title>
<link href="https://hdl.handle.net/1721.1/153342" rel="alternate"/>
<author>
<name>Hungerford, Scott</name>
</author>
<id>https://hdl.handle.net/1721.1/153342</id>
<updated>2024-01-17T03:34:05Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Estimating and Optimizing Throughput in an Aluminum Rolling Mill Using Capacity Modeling and Optimization Techniques
Hungerford, Scott
The aluminum industry has sustained continuous growth since 1975 and expects to continue this trend with the increased popularity of electric vehicles. With these forecasts in place and the current market conditions, Commonwealth Rolled Products (CRP) is in a unique position to meet the increased market demand and supply auto and industrial product manufacturers with aluminum rolled products. In order for CRP to be able to meet the increased demand, they first must understand the full complexities of their operations and confidently estimate future volumetric capacity they are able to sell.&#13;
&#13;
The objective of the internship program with CRP is to provide a quantitative analysis of the current-state and future-state throughput of the complex continuous line (CCL). The analysis includes a heuristic model to determine the throughput and to identify the key performance indicators (KPIs) with the greatest impact on throughput improvement. This model will recommend a roadmap to achieve a sustainable operations plan and sales forecast that will enable increased manufacturing capabilities.&#13;
&#13;
In addition to the heuristic model, a mixed integer program (MIP) will be developed to optimally schedule the product mix to reduce production hours lost to product changeover time. The scheduling of a CCL is considered a single machine scheduling problem (SMSP), and the introduction of transition coils is considered a sequence-dependent setup time (SDST) problem. This last portion of the paper will focus on the MIP application to optimally schedule the CCL to reduce transition coils.
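To make the scheduling problem concrete, here is a toy brute-force instance of sequence-dependent setup minimization; the setup matrix is invented, and at realistic scale the thesis uses a MIP rather than enumeration.

```python
from itertools import permutations

# setup[a][b] = changeover time (minutes) when product b follows product a
setup = {
    "A": {"A": 0, "B": 30, "C": 50},
    "B": {"A": 25, "B": 0, "C": 20},
    "C": {"A": 45, "B": 15, "C": 0},
}

def total_setup(seq):
    # sum the changeover cost along consecutive pairs of the sequence
    return sum(setup[a][b] for a, b in zip(seq, seq[1:]))

best = min(permutations("ABC"), key=total_setup)
print("best order:", best, "total setup minutes:", total_setup(best))
```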
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An implantable piezoelectric ultrasound stimulator (ImPULS) for selective deep brain activation</title>
<link href="https://hdl.handle.net/1721.1/153341" rel="alternate"/>
<author>
<name>Hou, Jason F.</name>
</author>
<id>https://hdl.handle.net/1721.1/153341</id>
<updated>2024-01-17T03:18:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">An implantable piezoelectric ultrasound stimulator (ImPULS) for selective deep brain activation
Hou, Jason F.
Precise neurostimulation has the potential to revolutionize therapies for neurological disorders. However, current neural interfaces targeting the deep brain face significant limitations in spatial resolution and potency due to tissue attenuation. We developed an implantable piezoelectric ultrasound stimulator (ImPULS) that generates an ultrasonic focal point pressure of 100 kPa and can non-genetically modulate the activity of neurons. We demonstrated that ImPULS can i) excite neurons in a mouse hippocampal slice ex vivo, ii) activate cells in the hippocampus of an anesthetized mouse to induce expression of the activity-dependent gene c-Fos, and iii) stimulate dopaminergic neurons in the substantia nigra pars compacta (SNc) to elicit time-locked modulation of nigrostriatal dopamine release. This work introduces a novel, non-genetic ultrasound platform for spatially localized neural stimulation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Effects of Pre-Training and Fine-Tuning CLIP with Domain-Specific Data</title>
<link href="https://hdl.handle.net/1721.1/153338" rel="alternate"/>
<author>
<name>Wang, Jialan</name>
</author>
<id>https://hdl.handle.net/1721.1/153338</id>
<updated>2024-01-17T03:51:58Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Effects of Pre-Training and Fine-Tuning CLIP with Domain-Specific Data
Wang, Jialan
Mercari is an online two-sided marketplace that allows users to both sell and purchase items. To create the most efficient item listing process for sellers and bring the most relevant items to buyers, Mercari utilizes a pre-trained model called Contrastive Language-Image Pre-training (CLIP), famed for its exceptional zero-shot performance, to support the auto-filling feature for item listing and similar-item recommendation. As this model is pre-trained on a general dataset gathered from the Internet, which likely differs in distribution from Mercari’s data and results in suboptimal performance, we explore the possibility of pre-training or fine-tuning CLIP with Mercari’s data to improve its performance within Mercari’s data domain. We explore various training strategies to understand the effects of each and determine the most effective strategy. Our best-performing and most space-efficient model achieves a brand prediction top-1 accuracy of 89.34% with 49.89% coverage and a category prediction accuracy of 78.02% with 69.62% coverage, significantly outperforming the current zero-shot CLIP in brand prediction and marginally in category prediction. Moreover, it achieves this with an embedding size that is half that of the original CLIP.
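The paired accuracy-and-coverage figures correspond to thresholded prediction: the model auto-fills only when its confidence clears a cutoff. A small sketch of that computation on synthetic data (nothing here is Mercari's):

```python
import numpy as np

rng = np.random.default_rng(0)
conf = rng.random(1000)                         # model confidence per listing
correct = rng.random(1000) > 0.3 * (1 - conf)   # higher confidence, more correct

def accuracy_at_coverage(conf, correct, threshold):
    covered = conf >= threshold          # listings the model dares to auto-fill
    coverage = covered.mean()
    accuracy = correct[covered].mean() if covered.any() else float("nan")
    return accuracy, coverage

acc, cov = accuracy_at_coverage(conf, correct, 0.5)
print(f"accuracy {acc:.1%} at coverage {cov:.1%}")
```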
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Emergence: Speculative Ecologies  &amp; Evolution in Art</title>
<link href="https://hdl.handle.net/1721.1/153336" rel="alternate"/>
<author>
<name>Medina, Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/153336</id>
<updated>2024-01-17T03:41:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Emergence: Speculative Ecologies  &amp; Evolution in Art
Medina, Alejandro
This thesis explores emergence as a focal point within my art practice. Emergence is the phenomenon through which complex systems exhibit properties and behaviors that are not directly attributable to any of the individual components within a system. Instead, these properties emerge through the (often entangled) relationships and interactions between individual, and often heterogeneous, components of a system. By orienting my work towards emergence, I propose a necessary shift towards an ecological and systems-based understanding of the world, one in which artworks can begin to be imagined in networks of relations and interdependence, doing so as a means of probing new ways of Being in an increasingly complex and entangled world. The thesis presents two frameworks for further exploring emergence, including an understanding of the exhibition as a “speculative ecology” and the different roles that instructions, rule-based systems and contracts could take on in staging evolutionary processes. The ecological framing of the exhibition emphasizes a renegotiation of agency amongst the exhibition’s components, open-over-closed systems and a focus on the integration of life cycles into the work; the use of instructions, rule-based systems and contracts enables the translation and embedding of evolutionary processes as part of the work's conceptualization and execution, aiming to inscribe change and instability as a core element in the work. The thesis draws on references from the fields of art and computation to expand upon historical lineages of thinking, in relation to several works that I have developed during my time at MIT’s program in Art, Culture and Technology (ACT).
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Manufacturing Performance to Plan with Predictive Analytics</title>
<link href="https://hdl.handle.net/1721.1/153332" rel="alternate"/>
<author>
<name>Weisberg, Joshua</name>
</author>
<id>https://hdl.handle.net/1721.1/153332</id>
<updated>2024-01-17T03:11:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enhancing Manufacturing Performance to Plan with Predictive Analytics
Weisberg, Joshua
Modern manufacturing requires meticulous planning to coordinate tightly wound supply chain activities in the face of disruption. This is especially true for automotive companies, which produce complex products at high rates. Their production planning process involves estimating the demand for various vehicles, determining the most profitable mix of products to meet that demand, and then selecting the production parameters which provide maximum efficiency. All this is done while balancing the short-term demands of a volatile market with the long-term implications of capital equipment purchases, staffing changes, and supplier management. From the time of each decision to the day of production, demand may change, supply may be disrupted, and manufacturing performance may fall short of expectations. These uncertainties lead to high error in production plans, which propagates to suppliers, other areas of the business, and future periods. Changes harm stability, efficiency, and thus profitability for all stakeholders. This study shows how predictions of performance can be used to revise a plan, using predictive analytics models trained on the characteristics of the plan. To this end, 480+ features are developed to describe plan characteristics and recent manufacturing performance. Several algorithms are utilized to evaluate the relationship between these features and manufacturing performance to plan, measured by the ratios of actual to planned production rate and of actual to planned hours worked. Results of the best-performing features, algorithms, and modeling architectures on out-of-sample manufacturing days in the post-Covid era showed Median Absolute Error improvements of 40%-60% over a 3-month lead time and 10%-40% over a 1-month lead time across several production lines. These reductions in error can improve stability such that better decisions can be made. Interpretation of the predictive models can lead to improvements in the factory’s ability to meet demand. Beneficiaries include customers looking to purchase and receive their desired products, employees needing more day-to-day consistency, and suppliers aiming to maintain a healthy business. The only certainty in operations is uncertainty, making it critical for operations companies to improve their understanding and estimation of their performance to plan.
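For clarity on the headline metric, a short sketch of Median Absolute Error on a performance-to-plan ratio and of the relative improvement over a naive baseline, using synthetic numbers only:

```python
import numpy as np

def med_ae(actual_ratio, predicted_ratio):
    # Median Absolute Error between actual and predicted performance ratios
    return np.median(np.abs(actual_ratio - predicted_ratio))

rng = np.random.default_rng(7)
actual = 1.0 + 0.10 * rng.standard_normal(200)    # actual/planned production rate
baseline = np.ones(200)                           # naive "plan is met" forecast
model = actual + 0.04 * rng.standard_normal(200)  # a better-informed forecast

improvement = 1 - med_ae(actual, model) / med_ae(actual, baseline)
print(f"MedAE improvement over baseline: {improvement:.0%}")
```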
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CO₂ and public health impacts of US residential heating electrification</title>
<link href="https://hdl.handle.net/1721.1/153330" rel="alternate"/>
<author>
<name>Grobler, Carla</name>
</author>
<id>https://hdl.handle.net/1721.1/153330</id>
<updated>2024-01-17T03:20:47Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">CO₂ and public health impacts of US residential heating electrification
Grobler, Carla
US residential combustion heating is currently estimated to lead to ~10,000 premature mortalities annually due to degraded air quality. Replacement of this combustion heating with electric heating is expected to reduce these impacts by shifting emissions away from population centers to electric generators. However, these benefits have not been assessed. This thesis quantifies the health impacts of replacing residential combustion heating with electric heating in the US due to changes in air quality. In addition, we calculate how such a change would affect fossil CO₂ emissions. We find that 99% of the premature mortalities currently attributable to US residential fuel combustion can be prevented through the replacement of combustion with electric air-source heat pumps, with net benefits in every US county. Wood-burning systems alone account for 84% of this benefit, particularly in densely populated areas. However, the reduction in air pollution does not necessarily translate into CO₂ reductions, as the study highlights variations in emissions based on location and electricity grid carbon intensity. Future research will explore different assumptions regarding CO₂ emissions. The thesis concludes that electrification of residential heating offers substantial air quality benefits, and potential CO₂ reductions in warmer coastal regions and areas with low grid carbon intensity. However, investment in high-efficiency solutions and further grid decarbonization may be necessary for climate benefits nationwide.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Needles in a Haystack: Perceptions of Deservingness on the Implementation of Harm Reduction Programs in the American Midwest</title>
<link href="https://hdl.handle.net/1721.1/153329" rel="alternate"/>
<author>
<name>David, Lauren A.</name>
</author>
<id>https://hdl.handle.net/1721.1/153329</id>
<updated>2024-01-17T03:41:51Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Needles in a Haystack: Perceptions of Deservingness on the Implementation of Harm Reduction Programs in the American Midwest
David, Lauren A.
The association of opioid abuse with rural, white, working-class individuals ultimately generated sympathy, rather than hatred, in the general political zeitgeist. However, some cases deviated from this pattern by adhering to the common cycle and villainizing individuals with substance use disorder involving opioids, and in no state was this more prevalent than in Indiana, as seen in the circumstances of the 2014 Scott County HIV Crisis. First-person interviews and comparative analysis between Ohio, Kentucky, and Indiana revealed that negative moral evaluations of individual behavior contribute to a reticence to implement harm reduction programs, often due to the influence of in-group isolation and the social phenomenon known as “not-in-my-backyard.” Indiana is found to be an outlier even among the Midwestern states in its negative response to opioid epidemic victims due to the continued legacy of three Indiana-specific historical events and phenomena: the rise and legacy of the Temperance Movement; the development of the Indiana Klan, a subset of the KKK; and the lasting influence of moral evangelism, manifesting in the careers of politicians like Mike Pence. This thesis demonstrates that while Americans, in general, viewed victims of the opioid epidemic as more sympathetic than victims of previous substance use epidemics, in part due to the blame placed on the pharmaceutical and medical sectors, citizens of Indiana displayed less sympathy, which helps to explain the slow and minimal response to the Scott County HIV Crisis.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planogram Optimization in Support of Small Format Retail Inventory Management</title>
<link href="https://hdl.handle.net/1721.1/153328" rel="alternate"/>
<author>
<name>Kurtz, Miles</name>
</author>
<id>https://hdl.handle.net/1721.1/153328</id>
<updated>2024-01-17T03:26:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Planogram Optimization in Support of Small Format Retail Inventory Management
Kurtz, Miles
Target is in the midst of building its "stores-as-hubs" capabilities, relying on stores to support in-store shopping and serve as ecommerce fulfillment hubs. To execute this strategy, Target has further expanded its footprint into urban and dense suburban geographies. The stores in these areas, referred to as Small Format stores, have less than half the square footage of a traditional Target location and carry an order of magnitude fewer SKUs. The dynamics of Target's urban retailing, which are characterized for the first time in this study, require specific inventory strategies to maintain service levels with a smaller product assortment and fewer customer choices. &#13;
&#13;
One metric to measure inventory management is 'Fit', which considers an item's risk of generating backroom inventory in stores and the days of expected demand covered. Excess inventory decreases worker productivity, while insufficient inventory is associated with stockouts and lost sales. A mixed-integer linear program is developed to suggest the optimal shelf capacity for each product to maximize Fit. The decision model suggests sacrificing space allocated to high-cube items to display more units of smaller items, and provides strong evidence for localizing Small Format assortments. A pilot of 10 test display units (planograms) was set and the effects measured via Synthetic Control Design (SCD). This research is part of a multi-year partnership between Target and MIT and is the first implementation of an in-store intervention.
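A toy version of such a shelf-capacity program, written with the open-source PuLP library; the item widths and 'Fit'-style scores are invented, and the thesis model is substantially richer than this sketch.

```python
from pulp import LpMaximize, LpProblem, LpVariable, lpSum

items = ["detergent", "snack", "lotion"]
width = {"detergent": 12.0, "snack": 4.0, "lotion": 6.0}     # cm per facing
fit_score = {"detergent": 3.0, "snack": 2.0, "lotion": 2.5}  # score per facing

prob = LpProblem("planogram", LpMaximize)
facings = {i: LpVariable(f"facings_{i}", lowBound=1, upBound=10, cat="Integer")
           for i in items}
prob += lpSum(fit_score[i] * facings[i] for i in items)      # maximize Fit
prob += 60.0 >= lpSum(width[i] * facings[i] for i in items)  # shelf width cap

prob.solve()
for i in items:
    print(i, int(facings[i].value()))
```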
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Zero Defect Manufacturing in Multi-Stage Production Systems</title>
<link href="https://hdl.handle.net/1721.1/153327" rel="alternate"/>
<author>
<name>Lyberger, Taylor</name>
</author>
<id>https://hdl.handle.net/1721.1/153327</id>
<updated>2024-01-17T03:11:44Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Towards Zero Defect Manufacturing in Multi-Stage Production Systems
Lyberger, Taylor
Implementation of quality improvement methods in multi-stage production systems is essential to manage and quickly eliminate manufacturing quality defects. Many companies tend to prioritize production speed rather than overall throughput, and are hypothesized to be below the optimal level of investment in quality systems when taking into account the full cost of bad quality. While traditional quality management techniques such as six sigma and process control are still valuable and worthwhile tools, recent advancements in technology offer manufacturers the opportunity to augment this tool set with the use of IoT, big data, and advanced analytics. &#13;
&#13;
This thesis addresses the problem of how to build a modern quality manufacturing system that continuously reduces scrap and defect rates in the production process. The study adapts a zero defect manufacturing framework and applies it to the automotive manufacturing industry. Five key activities (data collection, data integration, data analytics, process control, and defect mitigation) are found to be essential components in the development of a robust quality improvement infrastructure. Applying these framework components to an automotive manufacturer’s production lines sheds light on both the technical and operational challenges and the benefits of the quality system enhancement process. Other manufacturers may find this analysis a relevant use case and template when constructing or improving their own quality management architecture.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analytical Graphical Approach for Predicting Ground Conditions in TBM-based Tunneling Construction</title>
<link href="https://hdl.handle.net/1721.1/153322" rel="alternate"/>
<author>
<name>Goncalves Klink, Beatriz</name>
</author>
<id>https://hdl.handle.net/1721.1/153322</id>
<updated>2024-01-17T03:37:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Analytical Graphical Approach for Predicting Ground Conditions in TBM-based Tunneling Construction
Goncalves Klink, Beatriz
The present master's thesis addresses the use of Artificial Intelligence (AI) and Machine Learning (ML) algorithms to predict geology based on Tunnel Boring Machine (TBM) data. Mechanized tunneling has become common over the last decade, and TBM performance is critical for project management and safety. Numerical simulation methods have become prevalent in predicting TBM performance metrics, and AI/ML techniques for predictive applications using TBM-generated data have become ubiquitous. The current research proposes an exploratory look into the correlation between specific TBM parameters and ground conditions. The methodology classifies rings into three main ground classes (rock, soil, and mixed) by observing clear patterns that are demonstrated to be representative of these classes. A techno-economic assessment of the current use of AI/ML tools for geology prediction in TBM-based tunneling construction is also presented, analyzing both the potential and the shortcomings of the technology. The Porto Metro project (Portugal) is introduced and used as a case study for the proposed methodology. As the mining and drilling market is projected to nearly double between 2020 and 2030, and with the increasing use of TBMs, improved ground condition prediction is paramount to the advancement of tunneling automation efforts. The thesis aims to further develop the field and open a dialogue on the use and effectiveness of purely AI/ML modelling methods for this application.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Objective forecasting in East Anglia by use of weather types</title>
<link href="https://hdl.handle.net/1721.1/153192" rel="alternate"/>
<author>
<name>Hunsaker, Leon M.</name>
</author>
<id>https://hdl.handle.net/1721.1/153192</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1953-01-01T00:00:00Z</published>
<summary type="text">Objective forecasting in East Anglia by use of weather types
Hunsaker, Leon M.
Thesis: M.S., Massachusetts Institute of Technology, Department of Meteorology, 1953; Includes bibliographical references (leaf 28).
</summary>
<dc:date>1953-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A ground water problem in the North Shore area, Nova Scotia</title>
<link href="https://hdl.handle.net/1721.1/153190" rel="alternate"/>
<author>
<name>Young, Edward J.
            (Edward Joseph),
            1923-</name>
</author>
<id>https://hdl.handle.net/1721.1/153190</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1950-01-01T00:00:00Z</published>
<summary type="text">A ground water problem in the North Shore area, Nova Scotia
Young, Edward J.
            (Edward Joseph),
            1923-
Thesis: M.S., Massachusetts Institute of Technology, Department of Geology, 1950; Bibliography: leaves 54-55.
</summary>
<dc:date>1950-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation of the epoch state filter.</title>
<link href="https://hdl.handle.net/1721.1/153188" rel="alternate"/>
<author>
<name>Edwards, Joan Annette.</name>
</author>
<id>https://hdl.handle.net/1721.1/153188</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Investigation of the epoch state filter.
Edwards, Joan Annette.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A least squares convergence criterion for nonequilibrium boundary layer solutions.</title>
<link href="https://hdl.handle.net/1721.1/153187" rel="alternate"/>
<author>
<name>Elgin, James Brinson.</name>
</author>
<id>https://hdl.handle.net/1721.1/153187</id>
<updated>2025-10-31T20:12:36Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">A least squares convergence criterion for nonequilibrium boundary layer solutions.
Elgin, James Brinson.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>K-shell ionization in collisions of heavy atoms.</title>
<link href="https://hdl.handle.net/1721.1/153183" rel="alternate"/>
<author>
<name>Eichler, David Steven.</name>
</author>
<id>https://hdl.handle.net/1721.1/153183</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">K-shell ionization in collisions of heavy atoms.
Eichler, David Steven.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The influence of inclusion content on fatigue crack propagation in aluminum alloys.</title>
<link href="https://hdl.handle.net/1721.1/153182" rel="alternate"/>
<author>
<name>El-Soudani, Sami Mahmoud.</name>
</author>
<id>https://hdl.handle.net/1721.1/153182</id>
<updated>2025-10-30T17:51:26Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">The influence of inclusion content on fatigue crack propagation in aluminum alloys.
El-Soudani, Sami Mahmoud.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy and Materials Science, 1972; Vita.; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methods for electronically processing neurological data.</title>
<link href="https://hdl.handle.net/1721.1/153177" rel="alternate"/>
<author>
<name>Eckerle, Joseph Stephen.</name>
</author>
<id>https://hdl.handle.net/1721.1/153177</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Methods for electronically processing neurological data.
Eckerle, Joseph Stephen.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1972; Bibliography: leaf 45.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The effects of debt indexation on the value of the firm</title>
<link href="https://hdl.handle.net/1721.1/153176" rel="alternate"/>
<author>
<name>Hollings, Peter F.</name>
</author>
<author>
<name>Raff, George Joseph.</name>
</author>
<id>https://hdl.handle.net/1721.1/153176</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1975-01-01T00:00:00Z</published>
<summary type="text">The effects of debt indexation on the value of the firm
Hollings, Peter F.; Raff, George Joseph.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1975; Bibliography: leaves 86-87.
</summary>
<dc:date>1975-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of the design, operation and economics of freight containers</title>
<link href="https://hdl.handle.net/1721.1/153122" rel="alternate"/>
<author>
<name>Lappin, Walter William.</name>
</author>
<author>
<name>Westerfeld, Stuart Clarence.</name>
</author>
<id>https://hdl.handle.net/1721.1/153122</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1932-01-01T00:00:00Z</published>
<summary type="text">A study of the design, operation and economics of freight containers
Lappin, Walter William.; Westerfeld, Stuart Clarence.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1932; Includes bibliographical references (leaves 209-218).
</summary>
<dc:date>1932-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A beach resort: club and apartments</title>
<link href="https://hdl.handle.net/1721.1/153117" rel="alternate"/>
<author>
<name>Marshall, Thomas F.</name>
</author>
<id>https://hdl.handle.net/1721.1/153117</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1952-01-01T00:00:00Z</published>
<summary type="text">A beach resort: club and apartments
Marshall, Thomas F.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1952; Bibliography: leaves 47-48.
</summary>
<dc:date>1952-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Life insurance home office employees and their unionization</title>
<link href="https://hdl.handle.net/1721.1/153116" rel="alternate"/>
<author>
<name>Cogswell, Dean Edmund.</name>
</author>
<id>https://hdl.handle.net/1721.1/153116</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1951-01-01T00:00:00Z</published>
<summary type="text">Life insurance home office employees and their unionization
Cogswell, Dean Edmund.
Thesis: M.S., Massachusetts Institute of Technology, Department of Business and Engineering Administration, 1951; Bibliography: leaves 103-105.
</summary>
<dc:date>1951-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A new urban center in Cambridge</title>
<link href="https://hdl.handle.net/1721.1/153113" rel="alternate"/>
<author>
<name>Chalmers, Richard K.</name>
</author>
<author>
<name>Hopper, Thomas P.</name>
</author>
<author>
<name>Kozima, Masashi.</name>
</author>
<author>
<name>Rousos, William B.</name>
</author>
<author>
<name>Vahrenkamp, Donald F.</name>
</author>
<author>
<name>Wulff, Bernard J.</name>
</author>
<id>https://hdl.handle.net/1721.1/153113</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">A new urban center in Cambridge
Chalmers, Richard K.; Hopper, Thomas P.; Kozima, Masashi.; Rousos, William B.; Vahrenkamp, Donald F.; Wulff, Bernard J.
Thesis: M. Arch., Massachusetts Institute of Technology, Department of Architecture, 1964; Includes bibliographies.
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The United States foreign service, a personnel model.</title>
<link href="https://hdl.handle.net/1721.1/153110" rel="alternate"/>
<author>
<name>Emmons, Charles Edward.</name>
</author>
<id>https://hdl.handle.net/1721.1/153110</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">The United States foreign service, a personnel model.
Emmons, Charles Edward.
Thesis: M.S., Massachusetts Institute of Technology, Department of Political Science, 1972; Bibliography: leaves 113-114.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inter-union work assignment disputes under the railway labor act.</title>
<link href="https://hdl.handle.net/1721.1/153108" rel="alternate"/>
<author>
<name>Swartz, William John.</name>
</author>
<id>https://hdl.handle.net/1721.1/153108</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1967-01-01T00:00:00Z</published>
<summary type="text">Inter-union work assignment disputes under the railway labor act.
Swartz, William John.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1967; Bibliography: leaf 80.
</summary>
<dc:date>1967-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evolution of a manufacturing strategy : Apple Computer's Fremont factory</title>
<link href="https://hdl.handle.net/1721.1/153107" rel="alternate"/>
<author>
<name>Gee, Bruce R.</name>
</author>
<id>https://hdl.handle.net/1721.1/153107</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">Evolution of a manufacturing strategy : Apple Computer's Fremont factory
Gee, Bruce R.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1986; Bibliography: leaves 63-67.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of ship speeds along prescribed course under uncertainty</title>
<link href="https://hdl.handle.net/1721.1/153106" rel="alternate"/>
<author>
<name>Foo, Cedric Chee-Keng.</name>
</author>
<id>https://hdl.handle.net/1721.1/153106</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1985-01-01T00:00:00Z</published>
<summary type="text">Optimization of ship speeds along prescribed course under uncertainty
Foo, Cedric Chee-Keng.
Thesis: M.S., Massachusetts Institute of Technology, Department of Ocean Engineering, 1985; Includes bibliographical references.
</summary>
<dc:date>1985-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Heat of formation of some ferro-calcic singulo-silicates</title>
<link href="https://hdl.handle.net/1721.1/153105" rel="alternate"/>
<author>
<name>Wen, Ching Yu,&#13;
            1881-</name>
</author>
<id>https://hdl.handle.net/1721.1/153105</id>
<updated>2025-01-18T02:15:48Z</updated>
<published>1908-01-01T00:00:00Z</published>
<summary type="text">Heat of formation of some ferro-calcic singulo-silicates
Wen, Ching Yu,&#13;
            1881-
Thesis: M.S., Massachusetts Institute of Technology, Dept. of Mining Engineering and Metallurgy, 1908; MIT Institute Archives copy has the following paper bound with thesis: Design of plant for smelting and converting a sulphide copper ore, by C.Y. Wen. 1909. (29 leaves, [1] leaf of plates : ill.; 27 cm.).; Includes bibliographical references.
</summary>
<dc:date>1908-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Observations of the Upper Ocean from Autonomous Platforms during the Passage of Extratropical Cyclone Epsilon (2020)</title>
<link href="https://hdl.handle.net/1721.1/153102" rel="alternate"/>
<author>
<name>Zimmerman, Michael T.</name>
</author>
<id>https://hdl.handle.net/1721.1/153102</id>
<updated>2023-12-01T03:23:43Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Observations of the Upper Ocean from Autonomous Platforms during the Passage of Extratropical Cyclone Epsilon (2020)
Zimmerman, Michael T.
Hurricane Epsilon (2020) was a late-season, category-3 tropical cyclone that underwent extratropical transition and became Extratropical Cyclone Epsilon on 26 October. The upper ocean response to the passage of the storm was observed by three types of autonomous platforms: the eXpendable Spar buoy, the Air-Launched Autonomous Micro Observer profiling float, and two Seagliders. Taken together, this array enabled the rare collection of contemporaneous observations of the upper ocean, air-sea interface, and atmospheric boundary layer before, during, and after the passage of the storm. The evidence presented highlights how Extratropical Cyclone Epsilon broke down the residual North Atlantic summer stratification regime and accelerated the shift to the period of prolonged ocean cooling associated with winter. The significance of the synergistic capabilities of the array is two-fold: 1) comparing observations of the same parameters, taken from different platforms, enables a comprehensive approach to better understanding how storm-induced momentum, sensible heat, and moisture fluxes input kinetic and near-inertial energy into the ocean and thereby alter upper ocean structure; and 2) future, targeted deployments of similarly capable observational arrays will reduce the uncertainty of tropical and extratropical cyclone intensity forecasts by facilitating the assimilation of real-time subsurface ocean data into coupled numerical prediction models.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How to be Satisfied with Less-Than-Perfect Finish</title>
<link href="https://hdl.handle.net/1721.1/153101" rel="alternate"/>
<author>
<name>Park, Hyun Woo</name>
</author>
<id>https://hdl.handle.net/1721.1/153101</id>
<updated>2023-12-01T03:07:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">How to be Satisfied with Less-Than-Perfect Finish
Park, Hyun Woo
Contemporary (Western) society runs on an ideology of projected continuous growth. Capital feeds that growth by placing virtue on the constant, focused effort of production, followed by consumption. Creative industries such as art and design are no exception. But in recent years it has become increasingly clear that continued economic stability will not be possible, at least not with the same mode of operation in making things. It is more relevant than ever to look for new approaches to the practice of creative making. How can one engage in creative making in the precarious world of the current era?&#13;
&#13;
In this paper, I navigate various activities that I performed while at MIT and weave them into a methodology of embracing contingency, precarity, and friction, demonstrating a novel creative making practice in which materiality and material agency become active players.&#13;
&#13;
I reevaluate preconceived notions of material-based art and design, as well as the common practice of relentlessly producing novel objects in neglect of their object or material agency. The methodology of reappropriating material and its fabrication unfolds in an exploratory manner, often in precarious, ad-hoc, and even seemingly haphazard ways.&#13;
&#13;
Additionally, this paper takes the form of a logbook and should serve as a reference for my future self and for art and design practitioners alike. I propose that the questions and trials presented here may help anyone hoping to escape the immobilizing feeling that the very practice they have carried on is going nowhere.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Digital Tools and Design: Improving Participation in Policymaking</title>
<link href="https://hdl.handle.net/1721.1/153100" rel="alternate"/>
<author>
<name>Jeong, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/153100</id>
<updated>2023-12-01T03:27:00Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Digital Tools and Design: Improving Participation in Policymaking
Jeong, Sarah
This thesis examines how digital tools and design principles can be used to improve public participation in policymaking. I begin by identifying the problem that government consultations often fail to engage the public in policymaking because of their inaccessibility. I then explore ways to make government consultations more accessible and engaging, drawing on findings from a literature review, interviews with policy practitioners, and case studies of real-world consultations that were effective in engaging the public. I apply these learnings to design and conduct an online survey as an alternative to the typical form of government consultation, using a recent New Zealand consultation on recycling as my comparator. The thesis evaluates the results of my survey and concludes with implications for incorporating digital tools and design principles into the consultation process.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hedging a Falling Knife: Investing Through the Post Covid-19 Dallas-Fort Worth Housing Correction Utilizing Real Options Strategies</title>
<link href="https://hdl.handle.net/1721.1/153099" rel="alternate"/>
<author>
<name>Gietema III, William Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/153099</id>
<updated>2023-12-01T03:35:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Hedging a Falling Knife: Investing Through the Post Covid-19 Dallas-Fort Worth Housing Correction Utilizing Real Options Strategies
Gietema III, William Alexander
Historically and in past housing cycles, the Dallas-Fort Worth (DFW) housing market has maintained remarkable price stability relative to the broader U.S. housing market. Consistent population and employment growth, combined with abundant developable land, have created the ideal environment for developers and homebuilders to achieve the stable production of new single-family housing. In the face of rapid population growth, the high elasticity of housing supply in DFW has enabled the region to maintain housing affordability relative to other major U.S. housing markets.&#13;
&#13;
The Covid-19 pandemic was the second "once in a century" event to occur in the 21st century, the other being the 2008 Global Financial Crisis. Lockdowns, work from home, low interest rates, and inflation characterized the supply and demand shocks that followed Covid-19 and produced a rapid escalation of home prices previously uncharacteristic of DFW fundamentals. &#13;
&#13;
This thesis analyzes the impact and sustainability of Covid-19 supply and demand shocks on the DFW housing market and its participants. Focus is placed on the relationship between homebuilders, developers, and lenders in the event of a housing correction driven by rising interest rates and oversupply. Through analysis of market fundamentals and structure, and in the event of a broader market decline, this thesis proposes an investment strategy based on the acquisition of distressed single-family lot developments. The investment strategy leverages the real options theories of project delay and product switching to mitigate the risk of catching a falling knife.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Optical Imaging and Image Processing to Verify a Layer&#13;
in a Laser Powder Bed Fusion Process</title>
<link href="https://hdl.handle.net/1721.1/153098" rel="alternate"/>
<author>
<name>Kota, Maya Padmini</name>
</author>
<id>https://hdl.handle.net/1721.1/153098</id>
<updated>2023-12-01T03:36:55Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Using Optical Imaging and Image Processing to Verify a Layer&#13;
in a Laser Powder Bed Fusion Process
Kota, Maya Padmini
Additive manufacturing (AM) allows for the creation of complex geometries that cannot be produced with traditional manufacturing methods. AM is widely used in regulated industries such as medical and aerospace, which require objective evidence of good manufacturing processes (GMP) for auditing purposes. Within AM, important powder layer characteristics must be met to ensure final part quality. Currently, no machine can provide objective evidence of a proper characterization of crucial powder layer properties with in-process monitoring equipment. Such properties are currently verified by unquantifiable means and can be classified into two categories of failure. This project investigates and analyzes possible sensor technologies that can provide in-process data to objectively quantify the characterization condition. Implementing in-process monitoring technologies will provide objective, quantitative evidence, prevent failed builds due to improper powder layer setups, and reduce the time it takes to set up an AM machine for a build. While the final solution for this project incorporates the use of both a 2D laser line sensor and an AM in-machine camera, this thesis will specifically focus on the in-machine camera. More specifically, this thesis will discuss the camera repeatability tests that were conducted, the images taken during these tests, and the resulting pixel intensity values from these images. Analysis of the intensity values demonstrated that the in-machine camera could distinguish between different powder layer thickness values and that intensity values could be used as a quantitative metric to indicate whether certain powder layer characteristics are within specification.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Production Network Capacity Modeling for Strategic Network Planning</title>
<link href="https://hdl.handle.net/1721.1/153097" rel="alternate"/>
<author>
<name>Simons, Philipp</name>
</author>
<id>https://hdl.handle.net/1721.1/153097</id>
<updated>2023-12-01T03:31:06Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Production Network Capacity Modeling for Strategic Network Planning
Simons, Philipp
Strategic planning of manufacturing capacity requires data-based approaches to determine current and future constraints in a manufacturing network. While the desire to improve decision-making in strategic planning is often strong among decision makers, and capacity data generally exists in some form, efforts to harvest existing data frequently lack central coordination and suffer from high degrees of inconsistency. In addition, modeling manufacturing capacity is an inherently complex problem due to varying modes of production, unclear units of measure, and complex global manufacturing networks. &#13;
&#13;
In this thesis, a capacity model design is proposed for a global medical device manufacturer, and key aspects of the model functionality are demonstrated in a case study. At the core of the capacity model is a database structure using standardized data fields for capacity and demand data, including cycle times, shift structure, and space. The logic of the capacity model is developed with the goal of capturing supply chain complexities such as mixed-model lines or varying degrees of automation. In short, the logic determines the required production time for the product portfolio under consideration, and assesses the available capacity by comparing this required production time with the total available time. &#13;
&#13;
The logic is tested on a prototype product with a focus on mixed model lines. It is found that naming and product grouping inconsistencies require significant manual data manipulation, which – in combination with a lack of standardized, centrally available data – will form the biggest bottleneck in the implementation of the capacity model. Finally, an implementation roadmap is presented to offer guidance on converting the logic presented here into a functional model for decision makers in a supply chain strategy organization.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Playful Occupations: Mobile Creative Coding for Critical Consciousness</title>
<link href="https://hdl.handle.net/1721.1/153095" rel="alternate"/>
<author>
<name>Xisto, Thaís</name>
</author>
<id>https://hdl.handle.net/1721.1/153095</id>
<updated>2023-12-01T03:00:56Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Playful Occupations: Mobile Creative Coding for Critical Consciousness
Xisto, Thaís
The transformative potential of technology is often championed as a catalyst for societal progress, offering pathways to address challenges and create more inclusive futures. Despite this optimistic perspective of technology as a force for positive change, it often falls short of expectations. As the 21st century unfolds there is growing interest and investment in equipping individuals with computational skills so that they can navigate and further shape our increasingly digitally-mediated world. How can we design computational learning environments so that they not only empower individuals with technical proficiency but also foster the critical thinking, agency, and socio-cultural awareness necessary to fully realize the revolutionary potential of technology? This thesis looks to the Brazilian educator and philosopher Paulo Freire’s concept of conscientização (critical consciousness) as a lens through which we can explore this question.&#13;
&#13;
Throughout this research project, I collaborated with the Homeless Workers’ Movement in Brazil (MTST for short). Freire’s concept of critical consciousness as the ability to intervene in one’s reality in order to change it is central to the movement’s grassroots mobilizations and political education. Using a combination of Participatory Action Research and Social Design Experimentation approaches, we co-designed and implemented a series of creative coding workshops and a projects guide tailored to MTST’s community. These computational learning experiences centered on OctoStudio, a mobile programming app being developed by the Lifelong Kindergarten Group.&#13;
&#13;
What insights about computational literacy might we reach by incorporating critical consciousness into computing education? How can we cultivate critical consciousness through creative coding learning experiences? This thesis investigates these questions while also describing how researchers and communities can collaborate more equitably to create meaningful change in the educational circumstances of marginalized groups. Otherwise, technology might not serve as a tool for empowerment and societal progress but as another mechanism to preserve existing systems of marginalization.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Cost Electrochemical Approaches to&#13;
Deep-Decarbonization</title>
<link href="https://hdl.handle.net/1721.1/153090" rel="alternate"/>
<author>
<name>Badel, Andres F.</name>
</author>
<id>https://hdl.handle.net/1721.1/153090</id>
<updated>2023-12-01T03:11:06Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Low-Cost Electrochemical Approaches to&#13;
Deep-Decarbonization
Badel, Andres F.
Though numerous efforts have been made to mitigate the impact of global warming, deep decarbonization of the world's largest sources of CO₂ emissions is proving increasingly necessary. Progress in many sectors is proceeding quickly, but we have so far failed to address all sources of industrial emissions. Aviation, long-distance shipping, load-following electricity, and steel, iron, and cement production together account for 27% of emissions and are considered hard-to-decarbonize sectors. We demonstrate and analyze several approaches using electrochemistry in an attempt to address two of these hard-to-decarbonize sectors: cement production and load-following electricity.&#13;
&#13;
For cement production, a novel approach is proposed to drive the decarbonation of calcium carbonate using neutral water electrolysis. This approach also generates concentrated gas streams of H₂ and O₂/CO₂. The fine Ca(OH)₂ powder generated in the reactor is then used to synthesize the majority cementitious phase in cement. The concentrated gas streams from this process may be used synergistically with other processes under development for a decarbonized energy economy, suggesting a pathway to cost-competitive emissionless cement manufacturing wherein all energy is supplied by renewable electricity.&#13;
&#13;
For load-following electricity, an evaluation of metal-air batteries is first performed that provides a roadmap of the scale of cost reductions that might be accessible by 2050. We find that because metal-air batteries for grid energy storage are based on low-cost materials, system-level energy costs are low. However, we also find metal-air batteries currently suffer from performance and cost characteristics that prevent wide-scale deployment. Should these be addressed, we find that the cost of ownership for long-duration metal-air batteries is projected to become lower than $100/kWh.&#13;
&#13;
Drawing on the need for low-cost energy storage, a novel battery architecture that uses abundant chemicals separated by two immiscible phases is demonstrated. This self-assembling Zn-Cl₂ battery takes advantage of the immiscible nature of aqueous solutions and non-polar solvents. This system shows an inverse relationship between temperature and energy density that allows for low chemical cost while simultaneously exhibiting high energy density, reaching roughly $2/kWh and 700 Wh/L.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Reduced Order Modeling of a Rocket Engine Turbopump Inducer for Assessment of Pogo Instability</title>
<link href="https://hdl.handle.net/1721.1/153089" rel="alternate"/>
<author>
<name>Hussein, Mennatallah</name>
</author>
<id>https://hdl.handle.net/1721.1/153089</id>
<updated>2023-12-01T03:01:06Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Reduced Order Modeling of a Rocket Engine Turbopump Inducer for Assessment of Pogo Instability
Hussein, Mennatallah
Cavitation in liquid rocket engine turbopump inducers can lead to system instabilities. One form of these flow oscillations is so-called "pogo instability", in which dynamic instability arises from the interaction of vehicle structural dynamics with thrust oscillations of a liquid fueled propulsion system. These thrust oscillations are a result of the pressure fluctuations that originate in the piping system, the injector, the combustion chamber, and/or the turbopump inducers when cavitating. The main challenges associated with pogo analyses are the complexity of fluid-structure coupling, a disparity in findings about mechanisms for cavitation onset, and the sparsity of data on inducer dynamic behavior. This research presents a modular approach to dynamical system modeling that captures structural dynamics due to viscous shear in order to study their effect on cavitation dynamics and overall pogo instability. An existing cavitating inducer model is extended to include the effect of viscous shear on the piping structure, and is then integrated into a simple rocket engine feedline model to characterize pogo instability with a direct link to changes in operating conditions and design choices. The open loop system analysis captures the effect of viscous shear on cavitation surge: this dissipating mechanism adds damping and stabilizes the system. The closed loop system analysis demonstrates that the reduced order model is capable of assessing the effect of viscous shear induced structural vibrations on overall system stability. Based on these ideas, this work sets the stage for pogo analysis of more complicated configurations.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Multi-Agent Decision Making Under Uncertain Communication</title>
<link href="https://hdl.handle.net/1721.1/153081" rel="alternate"/>
<author>
<name>Pittman, Cameron W.</name>
</author>
<id>https://hdl.handle.net/1721.1/153081</id>
<updated>2023-12-01T03:48:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Distributed Multi-Agent Decision Making Under Uncertain Communication
Pittman, Cameron W.
As space exploration accelerates and the number of robots and humans working in extreme environments grows with it, we must enact autonomous multi-agent coordination in order to operate safely in environments that are inherently hostile to communication. To the best of our knowledge, there are no multi-agent scheduling algorithms capable of independently reasoning over communication delay. A key gap that must be addressed is a single-agent scheduler capable of deciding when to act given uncertain observation, which can form the basis for distributed multi-agent scheduling. Existing research has provided insights into temporal reasoning, namely modeling observation uncertainty and scheduling events with temporal constraints. There is a need both to decide when to schedule events under uncertain observation delay and to coordinate robustly between agents. Scheduling events in the face of uncertainty is a challenge due to the compounding uncertainties of uncontrollable exogenous events, unknown observation delay, and uncertain communication between agents. This thesis puts forth a series of contributions culminating in the demonstration of a robust single-agent task executive that used our scheduler to coordinate in a multi-agent context despite observation delay. Doing so required insights into checking the controllability of temporal constraints with uncertain delay, defining a scheduler that is robust to uncertain observation delay, integrating the scheduler into an existing high-level task executive, and devising a coordination strategy for multiple agents. We show that the scheduler exhibits the expected performance characteristics, and we perform laboratory demonstrations of multi-agent execution with uncertain communication using a scenario inspired by human spaceflight.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Dislocation Behavior in High Entropy Alloys Using Atomistic Simulations</title>
<link href="https://hdl.handle.net/1721.1/153079" rel="alternate"/>
<author>
<name>Oh, Changhwan</name>
</author>
<id>https://hdl.handle.net/1721.1/153079</id>
<updated>2023-12-01T03:52:10Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Investigating Dislocation Behavior in High Entropy Alloys Using Atomistic Simulations
Oh, Changhwan
High-entropy alloys (HEAs) are a new alloying strategy involving multiple principal elements in near-equiatomic proportions [39, 11, 37, 19, 41, 13]. To fully understand and tune the mechanical properties and crystal plasticity of these alloys, it is necessary to investigate their dislocation behavior [15]. The NiCoCr system is reported to have a single-phase face-centered cubic (FCC) crystal structure with enhanced mechanical properties compared to conventional alloys. Its negative stacking fault energy and high yield strength allow unique dislocation behavior. Also, the annealing temperature of the NiCoCr system leads to a wide range of short-range orders, which directly affect the energy barrier of dislocation movement [22]. This work investigates the flow stresses in various systems under constant strain rate and the relationship between partial dislocation behavior and the stacking fault energy of the NiCoCr system.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Photonic Spectroscopy: Applying the Digital Fourier-Transform Spectrometer</title>
<link href="https://hdl.handle.net/1721.1/153077" rel="alternate"/>
<author>
<name>Micale, Gillian K.</name>
</author>
<id>https://hdl.handle.net/1721.1/153077</id>
<updated>2023-12-01T03:34:47Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Integrated Photonic Spectroscopy: Applying the Digital Fourier-Transform Spectrometer
Micale, Gillian K.
The digital Fourier-Transform (dFT) spectrometer is a promising on-chip spectrometer architecture that offers exponential scaling of resolution with a compact device footprint. A package of scripted modules employs object-oriented programming to automate creation of the mask layout and streamline the dFT design process. Moving towards longer infrared wavelengths with broadband devices expands the sensing capabilities by accessing the stronger chemical absorption signatures associated with the fingerprint regime. The second generation of dFT devices realizes two high-resolution, 1024-channel spectrometers. The first device operates around 1550 nm and fully utilizes foundry-standard components and processes. The second device achieves half-octave operation between 1620 and 1750 nm with the use of custom broadband adiabatic couplers. The next set of designs pushes beyond the telecom range, combining two dFT devices on a single chip for 1.2 - 2.4 µm operation. Ultrabroadband single-mode waveguides and custom adiabatic couplers were designed for each device on this chip. All four of the discussed designs use the SOI material platform and are compatible with standard foundry processes.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sodium-Ion Battery Cathode Active Material Cost Drivers and Manufacturing Scale-up Barriers</title>
<link href="https://hdl.handle.net/1721.1/153072" rel="alternate"/>
<author>
<name>Clingman, Brooks T.</name>
</author>
<id>https://hdl.handle.net/1721.1/153072</id>
<updated>2023-12-01T03:46:40Z</updated>
<published>2022-05-01T00:00:00Z</published>
<summary type="text">Sodium-Ion Battery Cathode Active Material Cost Drivers and Manufacturing Scale-up Barriers
Clingman, Brooks T.
Energy storage can mitigate challenges posed by intermittent renewable generation. Non-hydro energy storage is currently dominated by lithium-ion batteries, but cost and materials supply are concerns. Sodium is more abundant and cheaper to mine and refine than lithium, positioning sodium-ion batteries to be a potential grid storage solution. However, researchers working at the lab scale have yet to build consensus around the best sodium-ion battery candidates for commercialization. Cathode active materials (CAMs) are of particular interest because of the pivotal role they play in battery performance and cost. Because of the material class’s simple structure, straightforward synthesis, and potential scalability, layered metal oxides (LMOs) are a particularly promising CAM under study. This thesis investigates the cost drivers and scale-up barriers of LMOs. Process and equipment considerations influencing scale-up are probed through interviews with experts in industry and academia, and the materials and process properties driving the design of critical equipment are identified. A process-based cost model is utilized to investigate the impact of synthesis route on CAM costs at scale, and the materials-to-total-cost fraction for LMOs is found to be significantly lower than that of lithium-ion batteries.
</summary>
<dc:date>2022-05-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Flooded with Possibilities: Analyzing Flood Insurance as a Catalyst for Development in Southeast Florida</title>
<link href="https://hdl.handle.net/1721.1/153049" rel="alternate"/>
<author>
<name>Mejia Martinez, Carlos Augusto</name>
</author>
<id>https://hdl.handle.net/1721.1/153049</id>
<updated>2023-11-28T03:43:17Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Flooded with Possibilities: Analyzing Flood Insurance as a Catalyst for Development in Southeast Florida
Mejia Martinez, Carlos Augusto
Florida stands as one of the most critical residential markets in the United States, with residential sales reaching an impressive $468.5 billion and residential real estate investment amounting to $6.8 billion in 2022. However, the question arises whether this seemingly perpetual growth can withstand tightening flood insurance policies. Is the residential market immune to the decisions made by insurance companies and the National Flood Insurance Program (NFIP)? These uncertainties form the basis of this thesis, which delves into the factors influencing insurance premium rates in Miami-Dade and Broward Counties, with a specific focus on geographic factors and independent variables.&#13;
&#13;
Through the utilization of regression models, incorporating data from First Street Foundation and the US Census Bureau, the study analyzes the intricate relationship between these variables and premium rates. A key finding is the pivotal role played by geographic factors, particularly census tracts, in accurately predicting and comprehending premium rates. The inclusion of census tract data enhances accuracy and data normalization. &#13;
&#13;
Moreover, several independent variables, such as flood risk, property values, mortgages, and rental affordability, emerge as significant influencers of premium rates. Time series data analysis reveals a steady upward trajectory in premium rates over time, accentuating the urgency for proactive measures in addressing the surge in insurance costs.&#13;
&#13;
The research further identifies residential arbitrage opportunities, whereby developers can strategically acquire land in areas disproportionately affected by high premium rates. Approximately 15% of single-family homes within the census tracts of Broward and Miami-Dade Counties pay double the insurance cost of their peers in areas with similar characteristics, as depicted by FEMA. By considering demographic characteristics and purchasing power parity, developers can navigate the evolving real estate market and contribute to sustainable urban development.&#13;
&#13;
These valuable insights into the factors influencing insurance premium rates open avenues for future research. Expanding the analysis to other geographic areas, incorporating additional variables, assessing the impact of climate change, and analyzing the effectiveness of mitigation measures are all potential directions for further exploration. Ultimately, this research sheds light on the intricate dynamics of insurance premium rates and paves the way for more informed decisions in the realms of residential real estate and urban development.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Abortion Beyond the Binary: Transgender people have historically been left out of abortion and reproductive health research. Now, two researchers are bringing their experiences to light.</title>
<link href="https://hdl.handle.net/1721.1/153042" rel="alternate"/>
<author>
<name>Jacobs, Phie</name>
</author>
<id>https://hdl.handle.net/1721.1/153042</id>
<updated>2023-11-28T03:53:17Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Abortion Beyond the Binary: Transgender people have historically been left out of abortion and reproductive health research. Now, two researchers are bringing their experiences to light.
Jacobs, Phie
When it comes to accessing abortions and other reproductive healthcare, transgender people throughout the United States face a minefield of issues—from getting insurance coverage to dealing with medical providers who don’t know how to treat them, to weathering discrimination—but very little research exists on how bad these problems are, the impacts they have, or potential solutions. Currently, only a few national-level studies have investigated how trans people experience the US healthcare system, and no major studies measure the number of trans people who undergo abortions, the type of abortions they receive, or the challenges they face when accessing these services. The few studies that do exist suggest that, due to myriad legal, financial, and social barriers, trans people often struggle to obtain the healthcare services they need.&#13;
&#13;
In 2017, this knowledge gap spurred Heidi Moseson and Sachiko Ragosta, two public health researchers at Ibis Reproductive Health in Oakland, California, to begin developing the first national-level survey into the reproductive healthcare experiences of trans Americans. The survey, which ended data collection in 2019 and is still in the analysis phase, included input from more than 3,000 transgender and nonbinary respondents. The project is unprecedented in terms of size, scope, and specificity, and is currently the only major study in this field that was designed with consultation from those within the trans community and is led by a scientist who is gender diverse themself—Ragosta identifies as nonbinary and uses they/them pronouns.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Under Their Own Laws: How the Kitasoo/Xai’xais First Nation created a new marine protected area – without the federal government’s approval</title>
<link href="https://hdl.handle.net/1721.1/153039" rel="alternate"/>
<author>
<name>von Herff, William</name>
</author>
<id>https://hdl.handle.net/1721.1/153039</id>
<updated>2023-11-28T03:52:14Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Under Their Own Laws: How the Kitasoo/Xai’xais First Nation created a new marine protected area – without the federal government’s approval
von Herff, William
On June 21, 2022, the Kitasoo/Xai’xais, a First Nation on the Pacific coast of Canada, unilaterally declared the Gitdisdzu Lugyeks marine protected area (MPA) in their territorial waters of Kitasu Bay. Whether they have the legal authority to create that protected area, however, is a difficult question to answer. The Constitution Act of Canada protects Indigenous people’s fundamental rights to fishing, logging, and land, but technically they remain subjects of the Canadian government. For the Kitasoo/Xai’xais this system is especially frustrating, since like many other Pacific coast nations, they have never signed a treaty with the Canadian government. &#13;
&#13;
The eventual goal, then, for the new MPA is to reach a co-management agreement, where the Kitasoo/Xai’xais and Canadian government establish overlapping MPAs in Kitasu Bay and share authority over the bay’s resources. The Kitasoo/Xai’xais have their traditional knowledge and holistic understanding of their territory that is needed to protect and manage Kitasu Bay. Meanwhile, the Canadian government has far-reaching political power and a national perspective that the Kitasoo/Xai’xais lack. Combining these assets could do great things for the bay. By declaring their MPA, the Kitasoo/Xai’xais are, in a sense, just getting a head-start on this process. They still want the federal government involved, after all. They just felt that they couldn’t wait any longer. &#13;
&#13;
The Kitasoo/Xai’xais have been fighting for decades to keep their environment intact. They have had to use every tool at their disposal – protests, lawsuits, and industry alliances – to maintain their way of living. Now, the Gitdisdzu Lugyeks MPA represents a new opportunity: if the Canadian government comes to the table, the Kitasoo/Xai’xais will have a renewed chance to safeguard their resources under their own laws and practices, just as they did before European colonization. They are using a vast wealth of traditional knowledge, bolstered by decades of their own scientific research, to guide their management practices and ensure their waters and resources will still be there for generations to come. The Kitasoo/Xai’xais, however, are striving for something bigger than themselves: they believe this MPA can demonstrate the power of Indigenous-led conservation both in Canada and around the world.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Sleepless Forest Observers : Ecologists are using remote observation to advance their understanding of environments. Are they losing something in the process?</title>
<link href="https://hdl.handle.net/1721.1/153032" rel="alternate"/>
<author>
<name>Nalamalapu, Vishva</name>
</author>
<id>https://hdl.handle.net/1721.1/153032</id>
<updated>2023-11-28T03:58:10Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Sleepless Forest Observers : Ecologists are using remote observation to advance their understanding of environments. Are they losing something in the process?
Nalamalapu, Vishva
In recent years, camera traps, acoustic recorders, genetic methods to identify organisms using DNA they shed into their environments (eDNA), tags on animals to log their behaviors, and aircraft or satellite remote sensing to identify environments and species have all become less expensive, and the quality of sensors and the methods to analyze their data have improved. As a result, ecologists are using remote observation more and more in their research. “The explosion is happening now,” says Taal Levi, an ecologist at OSU who studies quantitative wildlife ecology, conservation, and environmental genetics at the Andrews. A review article published in Frontiers in Ecology and Evolution found that the number of scientific publications with the keyword “eDNA” tripled from 2015 to 2018, the number with the keyword “camera traps” doubled, and the number with the keyword “bioacoustics” increased by 50%.&#13;
&#13;
There are good reasons for this shift. Remote sensing can help researchers learn about ecosystems. Because sensors don’t always need someone physically present, researchers can use them to collect data at larger and finer scales and in places that are difficult to observe directly. Sensors can also detect a wider range of organisms than traditional methods. Levi says these technologies are like direct observation “but instead of just you, you’ve got 5,000 versions of you that can stay awake all night long.”&#13;
&#13;
Simultaneously, researchers spend less time in the field when they use remote observation. And it is in the field where they often come up with research ideas and develop a deeper intuition for an ecosystem. Remote observation can also encourage the trend of finding patterns (that an animal lives in environments with specific characteristics, for example) without learning what causes those patterns (which of those characteristics are important to the animal and why).&#13;
&#13;
The Andrews is one place of many where the explosion of remote observation is happening. It was established as a site for long-term science and management studies by the Forest Service in 1948 and designated one of the first of 28 National Science Foundation funded Long-Term Ecological Research (LTER) Network sites in 1980. LTER Network sites focus on long-term and large-scale ecological processes. As a result, the Andrews has a long history of research on forests, streams, and watersheds, which makes it an especially good place to assess the transition from traditional methods to remote observation. At the Andrews, researchers are trying to get the benefits of remote observation while avoiding the risks and to find a balance between remote observation and traditional methods. That requires being intentional within the fast-paced broader culture of scientific research. Their success determines the novelty, completeness, and accuracy of their research, which in turn influences how society understands and manages its environments.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Softwar: A Novel Theory on Power Projection and the National Strategic Significance of Bitcoin</title>
<link href="https://hdl.handle.net/1721.1/153030" rel="alternate"/>
<author>
<name>Lowery, Jason P.</name>
</author>
<id>https://hdl.handle.net/1721.1/153030</id>
<updated>2023-11-28T03:27:36Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Softwar: A Novel Theory on Power Projection and the National Strategic Significance of Bitcoin
Lowery, Jason P.
Current analysis of Bitcoin’s underlying proof-of-work technology is almost exclusively based on financial, monetary, or economic theory. Recycling the same theoretical frameworks when performing hypothetico-deductive analysis of Bitcoin has the potential to create systemic-level analytical bias, which could negatively impact public policy-making efforts and could even pose a threat to US national security.&#13;
&#13;
This thesis introduces a novel theoretical framework for analyzing the potential national strategic impact of Bitcoin as an electro-cyber security technology rather than a peer-to-peer cash system. The goal of this thesis is to give the research community a different frame of reference they can utilize to generate hypotheses and deductively analyze the potential risks and rewards of proof-of-work technologies as something other than strictly monetary technology. The author asserts it would be beneficial for researchers to explore alternative functionality of proof-of-work technologies to eliminate potential blind spots, provide a more well-rounded understanding of the risks and rewards of proof-of-work protocols like Bitcoin, and positively contribute to the development of more informed public policy in support of the March 2022 US Presidential Executive Order on Ensuring the Responsible Development of Digital Assets and the May 2022 US Presidential Executive Order on Improving the Nation’s Cybersecurity.&#13;
&#13;
Utilizing a grounded theory methodology, the author combines different concepts from diverse fields of knowledge (e.g. biology, psychology, anthropology, political science, computer science, systems security, and modern military strategic theory) to formulate a novel framework called “Power Projection Theory.” Based on the core concepts of Power Projection Theory, the author inductively reasons that proof-of-work technologies like Bitcoin could not only function as monetary technology, but could also (and perhaps more importantly) function as a new form of electro-cyber power projection technology which could empower nations to secure their most precious bits of information (including but not limited to financial bits of information) against belligerent actors by giving them the ability to impose severe physical costs on other nations in, from, and through cyberspace. The author calls this novel power projection tactic “softwar” and explores its potential impact on national strategic security in the 21st century. Like most grounded theory research efforts, the primary deliverable of this thesis is a novel theory rather than deductive analysis of a hypothesis derived from existing theory.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanisms Underlying Learning Mediated Plasticity in the Adult Mammalian Olfactory Bulb</title>
<link href="https://hdl.handle.net/1721.1/153029" rel="alternate"/>
<author>
<name>McCue, Margaret</name>
</author>
<id>https://hdl.handle.net/1721.1/153029</id>
<updated>2023-11-28T03:20:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Mechanisms Underlying Learning Mediated Plasticity in the Adult Mammalian Olfactory Bulb
McCue, Margaret
The olfactory system is rapidly modified by learning, starting at the first informational relay station. The olfactory bulb retains high levels of plasticity throughout adulthood and undergoes lasting structural changes following classic learning paradigms. Recent studies are starting to elucidate the mechanisms that underlie these high levels of rapid, flexible, and persistent change. This review will first discuss the anatomy and basic coding of the olfactory bulb to provide a basis for understanding the fundamental processes of the system. It will then discuss recent breakthroughs in understanding the mechanisms of learning mediated plasticity in the olfactory bulb.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Treating Brackish Groundwater for Irrigation with Selective Electrodialysis &amp; Nanofiltration</title>
<link href="https://hdl.handle.net/1721.1/152963" rel="alternate"/>
<author>
<name>Heath, Samuel M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152963</id>
<updated>2023-11-14T03:08:57Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Treating Brackish Groundwater for Irrigation with Selective Electrodialysis &amp; Nanofiltration
Heath, Samuel M.
The Campo de Cartagena aquifer in southeastern Spain contains brackish groundwater that is considered low quality since it requires treatment prior to economic use. For example, the groundwater contains high concentrations of monovalent ions (Na⁺ and Cl⁻), which are detrimental to crop growth and must be removed before irrigation. Currently, the most widely used technology to remove the monovalent ions from water is reverse osmosis (RO) desalination, but this technology also removes divalent ions that are beneficial to crop growth (Mg²⁺, Ca²⁺, and SO₄²⁻). In this study, two technologies, selective electrodialysis (SED) and nanofiltration (NF), are evaluated to treat the brackish groundwater as an alternative to RO. Unlike RO, both SED and NF can remove monovalent ions from the brackish groundwater feed stream while retaining the divalent ions, producing an irrigation product rich in nutrients.&#13;
&#13;
Using a bench-scale experimental setup for each water treatment method, the monovalent-divalent selective performance of commercial SED and NF membranes is quantified and compared, using a feed stream representative of the water present in the Campo de Cartagena aquifer. Specifically, the pH and total dissolved solids (TDS) of the feed stream are varied to experimentally optimize the monovalent-divalent selectivity for each process. In addition to comparing the membrane performance, this thesis also evaluates practical considerations in the implementation of both technologies. The primary results of this work show that both SED and NF have potential as technically feasible alternatives to RO, but further analysis is needed to determine the economic feasibility of these two processes for this application. NF and SED have the potential to produce a nutrient-rich irrigation product, ultimately creating less waste and saving farmers money on fertilizer.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Error in a Model&#13;
Predictive Irrigation Controller</title>
<link href="https://hdl.handle.net/1721.1/152952" rel="alternate"/>
<author>
<name>Ingersoll, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/152952</id>
<updated>2023-11-14T03:00:52Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Analysis of Error in a Model&#13;
Predictive Irrigation Controller
Ingersoll, Samuel
Significant portions of the world’s agricultural land are vulnerable to desertification, leading to water shortages and changing climate conditions. Smart irrigation controllers could be part of the solution by helping farmers save water and adapt to a changing climate without sacrificing yield. This thesis presents an analysis of sensitivity to crop model parameters in the MIT GEAR Lab’s new POWEIr irrigation controller, with the goal of making it cheaper and easier to deploy and therefore more accessible. The analysis shows that, of the four crop parameters, the controller is most sensitive to the crop coefficient (Kc), moderately sensitive to the maximum rooting depth (Zᵣ), less sensitive to the depletion fraction (fd), and almost completely independent of the yield response factor (Ky). This result is potentially useful for designing calibration procedures for the deployment of the POWEIr controller, especially where there may be limited ability to calibrate the controller.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Galvanic Displacement Across Single-Layer Graphene</title>
<link href="https://hdl.handle.net/1721.1/152951" rel="alternate"/>
<author>
<name>Cunitz, Isabelle</name>
</author>
<id>https://hdl.handle.net/1721.1/152951</id>
<updated>2023-11-14T03:40:24Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Galvanic Displacement Across Single-Layer Graphene
Cunitz, Isabelle
This work aims to advance the scientific and engineering understanding of galvanic displacement reactions as buffered by a monolayer of graphene, specifically by investigating palladium deposition on graphene on a copper foil substrate via galvanic displacement between the copper and palladium (II) ions in solution. To understand palladium nanoparticle deposition and determine how this process can be controlled, electrochemical thermodynamics and classical nucleation theory are first synthesized into a thermodynamic model of the system. Next, scanning electron microscopy is used to characterize palladium deposition on the graphene/copper surface after galvanic displacement. Copper etch pits are observed to form during the reaction, maintaining contact between the deposition solution and the copper and thereby ensuring that the reaction is not self-limiting under the conditions studied. Palladium is observed to preferentially deposit along atomic steps in the copper foil, at graphene defects where the copper is exposed to the deposition solution, and at etch pits. The effects of varying palladium concentration and graphene/copper surface treatments are characterized, and these results are synthesized to propose a mechanism of palladium deposition via galvanic displacement through graphene. Finally, galvanic displacement is investigated in a novel engineering application, as a method of sealing graphene defects for the synthesis of centimeter-scale nanoporous atomically thin membranes. Palladium nanoparticles deposited on the graphene surface are observed to largely survive graphene transfer to a support membrane substrate, as well as mounting and use in aqueous diffusion cell experiments. However, diffusion experiments show that graphene treated via galvanic displacement has higher leakage than untreated graphene, indicating that under the reaction conditions studied here, galvanic displacement has a net effect of graphene defect enhancement rather than defect sealing. This work contributes new insights regarding galvanic displacement as a method of modifying monolayer graphene, as well as exploring this method in the novel application of membrane separations. With further development, this simple, quick, and inexpensive technique for the fabrication of 2D material/nanoparticle composites may have a myriad of possible applications relevant to medicine, sustainability, and beyond.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Synthesis and Electronic Transport of Natural Superlattice Compounds</title>
<link href="https://hdl.handle.net/1721.1/152950" rel="alternate"/>
<author>
<name>Chen, Alan</name>
</author>
<id>https://hdl.handle.net/1721.1/152950</id>
<updated>2023-11-14T03:16:59Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Synthesis and Electronic Transport of Natural Superlattice Compounds
Chen, Alan
The study of periodic structures and their impact on states of matter is essential in condensed matter physics. The analysis of this periodicity led to the modern understanding of electronic properties through the band structure. Advances in materials synthesis and discovery have led to precise control over electronic properties via control of the atomic structure. One family of materials in which this has been explored is van der Waals (vdW) materials. In addition to their study as bulk crystalline specimens, the two-dimensional nature of these materials enables the development of artificial heterostructures with a diverse range of electronic states of matter. The ability to in turn design bulk crystals containing such heterostructures would enable access to a broader range of experimental techniques and potential new electronic states. In this thesis, we present a synthesis study of natural superlattices composed of transition metal dichalcogenide (TMD) monolayers alternating with spacer layers. These superlattices belong to the TMD family with chemical formula MS₂, M = (V, Nb, Mo, W). We study one such compound, Sr-VS₂, through electronic transport measurements, including evidence for an insulating state therein. We further discuss syntheses of Group-VI TMD superlattices and the potential physics such systems may support.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Battery-Free Wireless Imaging of Underwater Environments</title>
<link href="https://hdl.handle.net/1721.1/152949" rel="alternate"/>
<author>
<name>Akbar, Waleed</name>
</author>
<id>https://hdl.handle.net/1721.1/152949</id>
<updated>2023-11-14T03:19:05Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Battery-Free Wireless Imaging of Underwater Environments
Akbar, Waleed
Imaging underwater environments is of great importance to marine sciences, ocean sustainability, climatology, defense, marine robotics, geology, space exploration, and global food security. Despite advances in underwater imaging, most of the ocean and marine organisms remain unobserved and undiscovered. Existing methods for underwater imaging are unsuitable for scalable, long-term, in situ observations because they require tethering for power and communication. Here we describe underwater backscatter imaging, a method for scalable, real-time wireless imaging of underwater environments using fully-submerged battery-free cameras. The cameras power up from harvested acoustic energy, capture color images using ultra-low-power active illumination and a monochrome image sensor, and communicate wirelessly at net-zero-power via acoustic backscatter. We demonstrate the potential of this method in wireless battery-free imaging of animals, plants, pollutants, and localization tags in enclosed and open-water environments. The method’s self-sustaining nature makes it desirable for massive, continuous, and long-term ocean deployments with many applications including marine life discovery, submarine surveillance, and underwater climate change monitoring.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating the Infrastructure Requirements of a Low-Carbon Hydrogen Supply Chain in Germany and the Gulf Coast</title>
<link href="https://hdl.handle.net/1721.1/152948" rel="alternate"/>
<author>
<name>Sizaire, Paul</name>
</author>
<id>https://hdl.handle.net/1721.1/152948</id>
<updated>2023-11-14T03:26:39Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Evaluating the Infrastructure Requirements of a Low-Carbon Hydrogen Supply Chain in Germany and theGulf Coast
Sizaire, Paul
The increasing political momentum advocating for decarbonization efforts, in Europe and elsewhere, has led many governments to unveil national hydrogen strategies. Hydrogen is viewed as a potential enabler of deep decarbonization, notably in hard-to-abate sectors such as industry. A novel optimal low-carbon hydrogen network algorithm was developed to assess the supply chain requirements of systems with increasing electrolytic hydrogen production levels. This model was used to investigate the low-carbon hydrogen procurement strategies of Germany and the Gulf Coast, with a focus on industrial demand.&#13;
&#13;
An initial case explored a self-sufficiency scenario in which the studied region would domestically procure hydrogen with electrolytic production. Results show important synergies between electrolytic production powered by a mix of renewables, large-scale hydrogen storage in the form of salt caverns, and hydrogen pipelines. The optimal power mix in the Gulf Coast consists of a majority of wind turbines, while Germany deploys a larger share of solar panels. The levelized cost of hydrogen, which includes storage and transport, totals ~$5.5-6.2/kgH₂ in the Gulf Coast (2025), and 4.9-6.1 €/kgH₂ in Germany (2025). Replacing salt caverns with compressed and liquid tank storage drastically changes the system, which deploys more renewable capacity to avoid storage needs but ultimately increases curtailment, driving costs up by ~$1/kgH₂ in the Gulf Coast and 1.0-2.2 €/kgH₂ in Germany. This calls for a centralized approach to building out the supply chain, requiring extensive stakeholder collaboration. Furthermore, optimal electrolytic production requires low capacity factors (40-70%) to truly achieve low-carbon status with renewable electricity at all times, which impacts the levelized cost of hydrogen and keeps it high (&gt;$4 (and €)/kgH₂) even in 2050. It was found that electricity storage is not an economical way to increase electrolytic capacity factors at times of low renewable production.&#13;
&#13;
Natural gas-derived production was found to be significantly impacted by upstream supply chain emissions of electricity and natural gas. Maintaining such production will require important reductions of the methane leakage rate and electricity carbon footprint, alongside a high carbon capture rate at the process level. Finally, in the case of Germany, pipeline imports from neighboring countries were found to have important systemic benefits and provide a viable pathway to decarbonization, but the local large-scale storage of these potentially variable imports should not be overlooked.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis Driven Shape Design of Parametric Geometry Using B-Splines and Free-Form Deformation</title>
<link href="https://hdl.handle.net/1721.1/152947" rel="alternate"/>
<author>
<name>Gomez, Marlena</name>
</author>
<id>https://hdl.handle.net/1721.1/152947</id>
<updated>2023-11-14T03:27:24Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Analysis Driven Shape Design of Parametric Geometry Using B-Splines and Free-Form Deformation
Gomez, Marlena
This thesis presents a method for local aerodynamic shape optimization to morph parametric geometry. The goal is a general framework for analysis-driven shape design that combines CAD-like parametric solid model geometry construction with free-form-like local deformation. One method explored uses the control point locations of certain B-splines defining the geometry as design parameters during optimization. A second approach builds on this method, using free-form deformation (FFD) to morph geometry within an FFD box. Analytic geometry inside the FFD box which is not by default defined by B-splines is converted to a B-spline representation, and free-form deformation is then used to move the B-spline control point net. In this method, the control point locations of the FFD box serve as design parameters, which allows for the generation of smooth geometry while keeping the number of degrees of freedom manageable for the optimizer. The first technique is applied to an optimization case where the &#119871;²-norm difference between an airfoil shape and a target shape is minimized. Then, the technique is demonstrated on an optimization driven by computational fluid dynamics (CFD) analysis where drag of an airfoil geometry is minimized. Lastly, the B-spline method is applied to an optimization of a wingtip surface, where the objective function minimizes drag while maintaining the initial lift of the shape. The FFD technique is similarly applied to an airfoil &#119871;²-norm difference minimization and a wingtip &#119871;²-norm difference minimization. Finally, the FFD technique is demonstrated on design cases driven by CFD analysis for an airfoil.
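[Editor's illustration] To make the control-point idea concrete, here is a minimal sketch (illustrative only, not the thesis's code; the airfoil-like geometry, knot construction, and step size are assumptions) of how moving a single B-spline control point morphs a curve locally, which is what makes control points convenient design parameters:

# Illustrative sketch: local shape morphing by moving one B-spline control point.
# Geometry and step size are assumptions, not the thesis's parameterization.
import numpy as np
from scipy.interpolate import BSpline

degree = 3
ctrl = np.array([[0.0, 0.00], [0.2, 0.08], [0.5, 0.12], [0.8, 0.06], [1.0, 0.00]])
n = len(ctrl)
# Clamped knot vector: the curve starts and ends exactly at the end control points.
knots = np.concatenate([np.zeros(degree),
                        np.linspace(0.0, 1.0, n - degree + 1),
                        np.ones(degree)])

u = np.linspace(0.0, 1.0, 200)
baseline = BSpline(knots, ctrl, degree)(u)

# One optimizer "design variable" step: nudge the mid control point upward.
ctrl_new = ctrl.copy()
ctrl_new[2, 1] += 0.02
morphed = BSpline(knots, ctrl_new, degree)(u)

# The deformation is smooth and local: the endpoints do not move at all.
print("max displacement (x, y):", np.abs(morphed - baseline).max(axis=0))
print("endpoint displacement:", np.abs(morphed[0] - baseline[0]).max())

In an actual optimization loop, the entries of ctrl (or of an FFD box's control net) would be the design vector updated by the optimizer at each iteration.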
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Diel vertical migration and frontal variability of&#13;
acoustic backscatter in the Balearic Sea</title>
<link href="https://hdl.handle.net/1721.1/152945" rel="alternate"/>
<author>
<name>Cheslack, Helena R.</name>
</author>
<id>https://hdl.handle.net/1721.1/152945</id>
<updated>2023-11-14T03:33:20Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Diel vertical migration and frontal variability of&#13;
acoustic backscatter in the Balearic Sea
Cheslack, Helena R.
Acoustic Doppler current profilers (ADCPs) use active sonar to measure current velocities by measuring the sound returned by scatterers (most often zooplankton) in the water column. The volume of scatterers, or echo intensity, has been used to measure the abundance of zooplankton and characterize diel vertical migration (DVM). DVM is the mass vertical movement of zooplankton and fish between the surface waters where they feed at night, and the mesopelagic zone where they avoid predators during the day; it is considered the largest migration of biomass on Earth, happens in every ocean, and is important to the global carbon cycle.&#13;
&#13;
This thesis uses a combination of data that I helped acquire during the Office of Naval Research-funded CALYPSO 2022 field campaign in the Balearic Sea. Acoustic backscatter from a 38kHz ADCP and a 150kHz ADCP is translated into mean volume backscattering strength (MVBS) to characterize the sound scattering layers (SSLs) in the Balearic Sea. WireWalker data is used to model subsurface light. The MVBS is compared to measurements of temperature, salinity, chlorophyll concentration, and dissolved oxygen (DO) from the EcoCTD, a towed instrument that simultaneously measures hydrographic and biological parameters. The analysis reveals one permanent scattering layer at 300-600 m and two migrating scattering layers in the top 50 m and between 100 m and 300 m. The layers are likely made up of zooplankton such as krill and pteropods, and pelagic fish. The speed of vertical migration ranges from 1-11 cm s⁻¹, and migrators follow isolumes during migration times. DVM has the strongest effect on backscatter anomalies, but during daytime and nighttime, DO is most correlated with the backscatter anomaly.&#13;
&#13;
We demonstrate that ADCPs can be used to characterize SSLs and DVM. The uniquely co-located EcoCTD data from CALYPSO enables us to compare the frontal variability in scatterers to variability in biological and physical parameters. Characterizing the SSLs, DVM, and frontal variability of acoustic backscatter furthers understanding of the global carbon cycle.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Qubit Quest Decoded: A Mixed-Methods Analysis of Innovation Policies and Ecosystem Mapping in the Race for Quantum 2.0 Technologies</title>
<link href="https://hdl.handle.net/1721.1/152892" rel="alternate"/>
<author>
<name>Sandoval Sandoval, Jorge I.</name>
</author>
<id>https://hdl.handle.net/1721.1/152892</id>
<updated>2023-11-03T03:31:28Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Qubit Quest Decoded: A Mixed-Methods Analysis of Innovation Policies and Ecosystem Mapping in the Race for Quantum 2.0 Technologies
Sandoval Sandoval, Jorge I.
This thesis presents a comprehensive examination of the evolution and dynamics of emerging Quantum 2.0 technologies through the lens of innovation ecosystems. Utilizing a mixed-methods approach that incorporates both qualitative and quantitative data, the study offers a cross-country perspective segmented by various technological, social, and policy factors. The manuscript begins with an in-depth review of the literature, capturing the current state, challenges, and scientific discourse surrounding Quantum 2.0 technologies. It then introduces an "innovation ecosystems" framework to contextualize the complex interplay of policies, strategies, and stakeholder dynamics. The concept of an "innovation pipeline" is further developed, informed by a variety of sources to draft a timeline that traces the emergence and diversification of Quantum 2.0 technologies, primarily within the U.S. context. &#13;
A scientometric analysis of global quantum-related publications, U.S. patents, and worldwide venture capital investments provides a broad view of the landscape from 2010 to 2022. This data-driven approach uncovers patterns of collaboration and topic divergence, and assesses the variation in the sequential production of knowledge artifacts. The study highlights the top ten global players in the field and leverages a keyword co-occurrence analysis to further elaborate on the trends and ideas influencing Quantum Information Science (QIS).&#13;
Overall, the dissertation provides valuable insights into the current state of strategic policy approaches on the nascent ecosystems of Quantum 2.0 technologies. The developed analytical frameworks serve as a reference for understanding coherence in policy actions and funding allocations, offering guidelines for future strategic innovation in both public and private sectors engaged in large-scale technological projects.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Regulatory benchmarking by machine learning: The case of climate resilience in electric utilities</title>
<link href="https://hdl.handle.net/1721.1/152891" rel="alternate"/>
<author>
<name>Lyu, Beichen</name>
</author>
<id>https://hdl.handle.net/1721.1/152891</id>
<updated>2023-11-03T03:37:03Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Regulatory benchmarking by machine learning: The case of climate resilience in electric utilities
Lyu, Beichen
Regulatory targets are becoming increasingly complex to benchmark, including electric utilities’ climate resilience (“utility resilience”), which is non-linear and high-dimensional. Meanwhile, machine learning (ML) models have been developed, and continue to be developed, with desirable properties such as local optimality and data compression. To explore the synergy between ML and benchmarking, we review and discuss the literature from both sides in the context of government regulation.&#13;
&#13;
Then we dive into a case study of utility resilience, where the dual complexities of the climate and power systems converge, with climate impacts that are likely to harm resilience and increase risks. However, these complicated and changing climate impacts are overlooked in the current regulations of utility resilience [30]. We examine how benchmarking could be applied to fill this regulatory gap through performance incentive mechanisms and elaborate on the political-economic implications, both advantages and potential pitfalls, of its application.&#13;
&#13;
With these theoretical understandings, we experiment with benchmarking weather-related power outages in New England, US between 2010 and 2021. We propose a data regime by combining station-level weather data with district-level outage data, as well as a baseline model using ridge regression. We also deploy our model through an online portal and discuss its limitations on long-tail-distributed outage and weather data. Our studies could inform future ML-based benchmarking for regulatory uses, particularly over utility resilience, that balances accuracy, accessibility, and applicability.
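[Editor's illustration] As a rough sketch of what such a ridge-regression baseline could look like (the features, shapes, and data below are fabricated assumptions, not the thesis's actual pipeline):

# Minimal sketch of a ridge-regression outage benchmark (fabricated data).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical station-level features per storm event:
# wind speed, gust, precipitation, temperature.
X = rng.normal(size=(500, 4))
# Fabricated district-level outage counts, driven mostly by wind and rain.
y = np.maximum(0.0, 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=500))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
model = Ridge(alpha=1.0).fit(scaler.transform(X_train), y_train)
print("held-out R^2:", round(model.score(scaler.transform(X_test), y_test), 3))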
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tracking Dust Plumes and Identifying Source Areas Using Spatiotemporal Clustering of Remote Sensing Data</title>
<link href="https://hdl.handle.net/1721.1/152889" rel="alternate"/>
<author>
<name>Alnasser, Faisal</name>
</author>
<id>https://hdl.handle.net/1721.1/152889</id>
<updated>2023-11-03T03:30:53Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Tracking Dust Plumes and Identifying Source Areas Using Spatiotemporal Clustering of Remote Sensing Data
Alnasser, Faisal
Traditionally, studies on dust relied on polar-orbiting satellites whose limited temporal coverage does not offer a detailed picture of how dust plumes evolve and change over time. To address this, we develop a method to identify and track individual dust plumes via hourly images from the Meteosat Second Generation Spinning Enhanced Visible and Infrared Imager (SEVIRI) instrument on the Eumetsat geostationary orbit satellites. Our framework uses the SEVIRI Dust RGB false color composite to highlight airborne dust in images. We then use the DBSCAN machine learning algorithm to cluster pixels into plumes based on their spatial and temporal connectivity. Through careful analysis and processing, we are able to analyze properties such as the storm’s source area, distance traveled, and affected areas. Through our framework, we gain insights into dust storm sources, emission factors such as soil moisture, wind speed, and vegetation, and their seasonal effects, which are key for understanding dust impacts on air quality, health, and the environment. To illustrate the effectiveness of our methodology, we conduct comprehensive case studies on several prominent dust-emitting regions: the Bodélé Depression, Southern Iraq, the Syrian Desert, and the Sistan basin. These case studies shed light on the complex effects of drought and the interplay between soil moisture and vegetation, as well as their effects on plume properties, providing an understanding of the different variables contributing to dust storm dynamics.
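[Editor's illustration] To make the clustering step concrete, the following minimal sketch (illustrative only, not the thesis code; the fabricated pixel blobs, the time-axis weighting, and the DBSCAN thresholds are all assumptions) groups dust-flagged (x, y, hour) pixels into plumes with scikit-learn's DBSCAN:

# Illustrative sketch: spatiotemporal clustering of dust-flagged pixels into plumes.
# In the real pipeline the (x, y, hour) triples would come from pixels flagged by
# the SEVIRI Dust RGB composite; here two drifting blobs are fabricated instead.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

def fake_plume(x0, y0, dx, dy):
    """A tight blob of flagged pixels drifting (dx, dy) per hour for 10 hours."""
    pts = []
    for hour in range(10):
        blob = rng.normal([x0 + dx * hour, y0 + dy * hour], 1.5, size=(40, 2))
        pts.append(np.column_stack([blob, np.full(40, hour)]))
    return np.vstack(pts)

pixels = np.vstack([fake_plume(10, 10, 3, 1), fake_plume(70, 40, -2, 2)])

# Weight the hour axis so pixels one hour apart can still join the same plume.
scaled = pixels * np.array([1.0, 1.0, 5.0])
labels = DBSCAN(eps=6.0, min_samples=10).fit_predict(scaled)

n_plumes = len(set(labels)) - (1 if -1 in labels else 0)
print("plumes found:", n_plumes)
for k in range(n_plumes):
    member = pixels[labels == k]
    t0 = member[:, 2].min()
    source = member[member[:, 2] == t0][:, :2].mean(axis=0)
    print(f"plume {k}: source area near {source.round(1)}, first seen hour {t0:.0f}")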
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Subsurface Digital Twin and Emergence</title>
<link href="https://hdl.handle.net/1721.1/152888" rel="alternate"/>
<author>
<name>Zhao, Yushi</name>
</author>
<id>https://hdl.handle.net/1721.1/152888</id>
<updated>2023-11-03T03:35:22Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Subsurface Digital Twin and Emergence
Zhao, Yushi
Subsurface characterization stands at the nexus of humanity's growing demands for materials, energy, and safety amid the burgeoning population and rising living standards. However, challenges in subsurface characterization, rooted in conventional practices, functional silos, limited data density, and technological constraints, impede business efficacy and sustainable development. As societies' expectations shift and industries evolve, a paradigm shift is required in the human-machine relationship and the way we organize work. To meet these challenges and ensure responsible human progress, a systematic solution is needed.&#13;
&#13;
This thesis investigates the concept of a subsurface digital twin as a boundary object that bridges disciplines, scales, and uncertainties, fostering collaboration and real-time informed decision-making. It explores the evolution of subsurface characterization from data-sparse and theory-dependent practices to a holistic digital twin framework. The thesis identifies critical technical and sociotechnical challenges, including data scarcity, overreliance on empirical relationships, functional silos, and trust. The thesis demonstrates how a subsurface digital twin can enhance cross-functional collaboration and address critical challenges through real-world examples. It highlights the use of geoanalytics and machine learning to predict total organic carbon content and formation brittleness, showcasing the digital twin's power in multidisciplinary workflows. Furthermore, it proposes a solution for uncertainty reduction through integration and lays out future steps for the development of the subsurface digital model, construction of pseudo/surrogate models for probabilistic simulation of complex and time-consuming numerical simulations, and use of the digital twin to bridge workflows between data-rich and data-scarce regions across scales.&#13;
&#13;
The thesis outlines the design and value-creating functions of the subsurface digital twin system, facilitating adaptive resolution and agile implementation. It envisions a future where such digital twins revolutionize decision-making, from individual project optimization to enterprise-wide insights. The thesis underscores the importance of strategic investment in digital twins for long-term returns and as a cornerstone of the evolving human-machine relationship and advances the concept of a subsurface digital twin as a transformative approach to subsurface characterization, fostering collaboration, tackling challenges, and paving the way for sustainable progress in a rapidly changing world.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficient Multi-Sensor Fusion for 3D Perception</title>
<link href="https://hdl.handle.net/1721.1/152887" rel="alternate"/>
<author>
<name>Shao, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/152887</id>
<updated>2023-11-03T03:58:32Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Efficient Multi-Sensor Fusion for 3D Perception
Shao, Kevin
As a critical component to realizing widespread autonomous driving, 3D perception systems have come to be heavily studied in the community. However, many solutions focus solely on achieving the highest accuracy, overlooking other practical considerations such as speed and cost. In this thesis, I develop two multi-sensor fusion models for 3D perception: BEVFusion, a camera-LiDAR fusion model, and BEVFusion-R, a camera-radar fusion model. BEVFusion seeks to balance accuracy and speed. By fusing features from each input modality in the shared bird’s eye view space, it captures both semantic and geometric information from each input. Its simple design allows it to achieve both state-of-the-art accuracy and a 24% speedup over competing works. BEVFusion-R further incorporates cost and hardware deployment into the design consideration. By carefully designing the entire model for both performance and acceleration, BEVFusion-R achieves a 2.1% NDS improvement on nuScenes over the previous state-of-the-art with a 4.5× measured speedup. Additionally, it is capable of real-time latency on edge GPUs. The code will be publicly released at https://github.com/mit-han-lab/bevfusion
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Stable Reinforcement Learning in&#13;
Non-Episodic Tasks</title>
<link href="https://hdl.handle.net/1721.1/152886" rel="alternate"/>
<author>
<name>Karnik, Sathwik</name>
</author>
<id>https://hdl.handle.net/1721.1/152886</id>
<updated>2023-11-03T03:52:10Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Towards Stable Reinforcement Learning in&#13;
Non-Episodic Tasks
Karnik, Sathwik
Despite recent advances in deep reinforcement learning (RL), deploying RL policies in robotics often leads to various challenges. The typical training paradigm in RL involves rollouts of policies executed over a finite horizon, or episodes. However, such policies may struggle to generalize well in various non-episodic tasks, including both object manipulation and locomotion. In this thesis, we study the challenges that arise from non-episodic tasks in two settings: (1) object manipulation in the Habitat Home Assistant Benchmark (HAB) [18] and (2) locomotion in the MuJoCo suite [20]. &#13;
&#13;
In the first of these two settings, we study the failure modes of the baseline methods and attribute many of the failures in part to instabilities in object placement and the lack of error recovery in the setting of open-loop task planning. We consider a possible approach to address this issue by modifying the steady-state termination condition in the RL objective to place the object at the goal position for a longer horizon. We next consider an error-corrective policy using inverse kinematics (IK) following the execution of the RL policy. The integration of an IK policy leads to a significant improvement in the final task success rate from 41.8% to 65.3% in SetTable, one of the three tasks in the HAB.&#13;
&#13;
In the second setting, we consider extrapolation in the non-episodic task of locomotion in the MuJoCo suite. Typical RL policies are trained for a finite horizon, but may need to be executed for a longer horizon during deployment in locomotion tasks. However, current RL approaches may fail to generalize beyond the training horizon. To address this issue, we consider the use of time-to-go embeddings as part of the observations. Specifically, we introduce the use of a constant time-to-go embedding in the setting where the horizon is much longer during evaluation or deployment. We find limited evidence of improvements in the average episode returns during evaluation in 6 tasks in the MuJoCo suite.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Transit Oriented Development using Satellite&#13;
Imagery: Riyadh vs. Phoenix</title>
<link href="https://hdl.handle.net/1721.1/152882" rel="alternate"/>
<author>
<name>Almazroa, Noor</name>
</author>
<id>https://hdl.handle.net/1721.1/152882</id>
<updated>2023-11-03T03:01:06Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Assessing Transit Oriented Development using Satellite&#13;
Imagery: Riyadh vs. Phoenix
Almazroa, Noor
As urbanization becomes the way of the future, the demands on cities are becoming more urgent, with an increased awareness of the need for sustainability and resilience, making the use of today's technology and data critical in decision-making and planning. In the first part of this thesis, I combine several of these techniques and datasets to explore their ability to provide a helpful assessment of Transit-Oriented Development (TOD). This research assesses the transit-oriented characteristics of two cities, Riyadh, Saudi Arabia, and Phoenix, Arizona, US, which share many similarities in urban design and climate. I use high-resolution satellite imagery with computer vision methods to detect the built area around public transit stations to measure building density and, combined with land use data, residential and nonresidential density. Both of these measurements are important indicators of the success of a public transportation system. Of the two building detection methods tested, the one based on deep learning techniques was more precise, with better generalization abilities, while the method based on classical image processing techniques was more sensitive to threshold choices, with considerable variability when tested on different years. Both methods, however, were able to give a useful prediction of buildings. From their results, I found that Phoenix has a building density of less than 50%, even around the busiest downtown stations. Riyadh, on the other hand, is more compact, with more than 50% of its land developed. In the second part, I formulate a System Dynamics model that is validated against Phoenix's actual ridership for the 2010-2020 period and predicts transit ridership in Riyadh. The model closely approximated Phoenix's ridership up until 2016. The Riyadh model estimated that ridership would start at six million riders, surpassing the Royal Commission for Riyadh City (RCRC) prediction of 1.6 million initially. The results of both parts indicate that, given that Riyadh is more densely built within a smaller area and has a more extensive transportation system and bigger population, this should serve as an incentive to promote a more transit-oriented built environment by increasing walkability and dense mixed-use developments throughout the city.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Transfer Learning for Macroscale Defect Detection in Semiconductor Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/152881" rel="alternate"/>
<author>
<name>Waterworth, John Timothy</name>
</author>
<id>https://hdl.handle.net/1721.1/152881</id>
<updated>2023-11-03T03:30:31Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Deep Transfer Learning for Macroscale Defect Detection in Semiconductor Manufacturing
Waterworth, John Timothy
This thesis proposes improvements to wafer macro inspection processes and tools on four axes at Texas Instruments. The major axis of improvement involves real-time machine learning recommendations regarding the presence of macroscale defects. In this work, a model for detecting central defects is described in detail, and a novel approach to overcoming data scarcity through the creation of synthetic data is deployed. The binary classifier model achieves an out-of-distribution area under curve (AUC) of 0.909 for detecting hotspot defects. Detection for other classes of central defects is also explored but limited by even greater data sparsity. Models for catching spin-on-glass defects and edge defects are also trained, with out-of-distribution AUCs reaching 0.927 and 0.906 respectively. Other axes of improvement covered in this thesis involve gauge reliability and repeatability analysis of macro inspection tools, the creation of a new user interface called OwlView, and the trial of a new macro inspection system used in-line on photolithography tools for greater efficiency. Gauge repeatability and reliability analysis gives insight into tool function and assists the team and technicians in root cause analysis. Several hardware failures of current toolsets are identified and addressed. Maintenance procedures are also updated to keep tools operating within specifications. The OwlView interface is developed with features to increase user efficiency. Additionally, the interface helps create an infrastructure for tagging more data, which will be fed back into the models to address data scarcity. Lastly, an in-line inspection trial shows achievable high-quality wafer images compatible with the machine learning and inspection infrastructures developed in this work.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing the accessibility and usability of motion capture&#13;
technology: design and development of indoor MoCap&#13;
hardware system</title>
<link href="https://hdl.handle.net/1721.1/152880" rel="alternate"/>
<author>
<name>Chang, Cheng</name>
</author>
<id>https://hdl.handle.net/1721.1/152880</id>
<updated>2023-11-03T03:30:37Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Enhancing the accessibility and usability of motion capture&#13;
technology: design and development of indoor MoCap&#13;
hardware system
Chang, Cheng
Motion capture technology (MoCap) is a revolutionary method to translate real-world subjects’ movements into digital content across various industries, including robotics, medical devices, gaming, and biomechanics. This paper investigates how to make MoCap more accessible and usable to a broader and more diverse audience. Endorsing a user-centric design and development approach, the researchers defined the problem statement as achieving wider acceptance and adoption of MoCap technology. Subsequently, comprehensive market research and real-world MoCap use cases guided how the researchers brainstormed solutions. After carefully considering factors such as camera angles, pole styles, height, and light conditions, the researchers also incorporated various related sensors, such as vibration meters and distance sensors, to build functional prototypes and test their ideas. Compared with traditional motion capture devices, the resulting MoCap system demonstrates an easier way to deploy MoCap and a steadier system under consistent vibrations. This improved accessibility and stability allows not only scientists and researchers but also sports coaches, doctors, and students to use MoCap effectively. In conclusion, this research contributes to bringing MoCap technology wider adoption and more practical applications. Meanwhile, the system’s structural stability, manufacturing method, integration with other sensors, and reliance on Sony RX0 cameras with resolution and frame-rate limitations can be optimized in the future to meet even broader user needs.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluating SigmaOS with Kubernetes for Orchestrating Microservice and Serverless Applications</title>
<link href="https://hdl.handle.net/1721.1/152879" rel="alternate"/>
<author>
<name>He, Yizheng</name>
</author>
<id>https://hdl.handle.net/1721.1/152879</id>
<updated>2023-11-03T03:52:27Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Evaluating SigmaOS with Kubernetes for Orchestrating Microservice and Serverless Applications
He, Yizheng
SigmaOS is a new multi-tenant cloud operating system that simplifies distributed application development. Its design centers around the novel concepts of realms and procs. A realm presents a tenant with a shared global namespace that hides the machine boundaries. Tenants structure their applications as process-like procs interacting through the realm’s namespace. Procs are lightweight, stateful, and can communicate. SigmaOS manages the scheduling and execution of procs to achieve high resource utilization and performance isolation.&#13;
&#13;
This thesis compares SigmaOS with Kubernetes, a mainstream cloud operating system, using a microservice-style social network website and a serverless image resizing program. It measures their performance on a small-scale cluster in CloudLab. The SigmaOS version of the social network is easier to build (30% fewer lines), and its image resizing starts faster (25%-89%). SigmaOS performs comparably to Kubernetes regarding latency and resource consumption when running a single application but provides better performance isolation when running multiple applications in separate realms: latency increases by 4-11% with concurrent applications in SigmaOS versus over 150% in Kubernetes.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Aggressive Aerial Grasping using a Soft Drone with Onboard Perception</title>
<link href="https://hdl.handle.net/1721.1/152878" rel="alternate"/>
<author>
<name>Ubellacker, Samuel</name>
</author>
<id>https://hdl.handle.net/1721.1/152878</id>
<updated>2023-11-03T03:48:07Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Aggressive Aerial Grasping using a Soft Drone with Onboard Perception
Ubellacker, Samuel
Contrary to the stunning feats observed in birds of prey, aerial manipulation and grasping with flying robots still lack versatility and agility. Conventional approaches using rigid manipulators require precise positioning and are subject to large reaction forces at grasp, which limit performance at high speeds. The few reported examples of aggressive aerial grasping rely on motion capture systems, or fail to generalize across environments and grasp targets. We describe the first example of a soft aerial manipulator equipped with a fully onboard perception pipeline, capable of robustly localizing and grasping visually and morphologically varied objects. The proposed system features a novel passively closing tendon-actuated soft gripper that enables fast closure at grasp, while compensating for position errors, complying to the target-object morphology, and dampening reaction forces. The system includes an onboard perception pipeline that combines a neural-network-based semantic keypoint detector with a state-of-the-art robust 3D object pose estimator, whose estimate is further refined using a fixed-lag smoother. The resulting pose estimate is passed to a minimum-snap trajectory planner, tracked by an adaptive controller that fully compensates for the added mass of the grasped object. Finally, a finite-element-based controller determines optimal gripper configurations for grasping. Rigorous experiments confirm that our approach enables dynamic, aggressive, and versatile grasping. We demonstrate fully onboard vision-based grasps of a variety of objects, in both indoor and outdoor environments, and at speeds of up to 2.0 m/s, the fastest vision-based grasp reported in the literature. Finally, we take a major step in expanding the utility of our platform beyond stationary targets, by demonstrating motion-capture-based grasps of targets moving up to 0.3 m/s, with relative speeds up to 1.5 m/s.&#13;
&#13;
Video Attachment: https://www.youtube.com/watch?v=HF4M7TooqfE
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing Bovine Methane Emissions: Respiratory Simulation and Optical Gas Imaging Methods</title>
<link href="https://hdl.handle.net/1721.1/152877" rel="alternate"/>
<author>
<name>Huang, Zhong Qian</name>
</author>
<id>https://hdl.handle.net/1721.1/152877</id>
<updated>2023-11-03T03:51:41Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Assessing Bovine Methane Emissions: Respiratory Simulation and Optical Gas Imaging Methods
Huang, Zhong Qian
Bovine methane emissions contribute significantly to global greenhouse gas levels. Quantifying these emissions is of key importance in mitigating their production. This work investigates the physical simulation of cattle breaths and the assessment of their methane content through the use of optical gas imaging (OGI). A physical respiratory simulator was designed and built to replicate cow exhalations in controlled laboratory settings, successfully emulating breath flow, tidal volume, respiration rate, temperature, and methane concentration. The simulator was used in infrared imaging experiments that demonstrated the feasibility of using OGI as a technique for measuring breath methane content. To visualize breath gas plumes, image processing methods were developed, encompassing background subtraction, frame differencing, and optical flow. These methods enabled the characterization of plume intensity and movement dynamics under varying concentrations and temperatures. Quantification techniques were developed to compute a measure of breath methane content from thermal video footage. Detected methane intensity exhibited a positive linear correlation with breath methane concentration within the range of 1000 - 4000 ppm. The influence of breath exit temperature on detected methane intensity was found to be minimal, with intensity primarily scaling with the difference between ambient air temperature and background temperature. These observed trends were found to be in alignment with those predicted by theoretical models.
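[Editor's illustration] As a hedged illustration of the frame-differencing step (not the thesis's actual processing chain, which also includes background subtraction and optical flow; the array shapes, noise floor, and fabricated footage are assumptions):

# Hedged sketch: scoring plume activity in thermal video by frame differencing.
# 'frames' is a fabricated (T, H, W) stack standing in for infrared footage.
import numpy as np

def plume_activity(frames, noise_floor=3.0):
    """Per-frame-pair activity: total inter-frame change above a noise floor."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))   # shape (T-1, H, W)
    moving = np.clip(diffs - noise_floor, 0.0, None)        # suppress sensor noise
    return moving.sum(axis=(1, 2))                          # one score per pair

rng = np.random.default_rng(2)
frames = rng.normal(20.0, 1.0, size=(50, 64, 64))  # static warm background
frames[25:, 10:30, 10:30] += 8.0                   # plume appears mid-video
print(plume_activity(frames)[20:30].round(1))      # activity spikes at frame 24/25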
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Semi-analytical Model for Nonlinear Elliptical Inclusions with Spherical Eigenstrains</title>
<link href="https://hdl.handle.net/1721.1/152876" rel="alternate"/>
<author>
<name>Bonavia, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/152876</id>
<updated>2023-11-03T03:16:26Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A Semi-analytical Model for Nonlinear Elliptical Inclusions with Spherical Eigenstrains
Bonavia, Joseph
Motivated to understand the stresses induced by the formation of precipitates in metals, in 1957, John D. Eshelby provided a fully-analytical solution for the stress and deformation fields induced by an incompatible ellipsoidal inclusion embedded within an infinite matrix. Over the past six decades, his theory, which considers linearly elastic materials, has been essential in developing homogenized micromechanical models for metals and composites. However, as solid mechanics research increasingly focuses on soft materials such as biological tissues, a linear theory is no longer sufficient. Despite numerous potential applications ranging from medical diagnosis to industrial manufacturing processes, an accurate analytical or semi-analytical nonlinear extension of Eshelby’s theory of the elliptical inclusion problem has yet to be developed. This work presents a novel approach to solve the 2D elliptical inclusion problem, which satisfies incompressibility. It is shown to converge to the Eshelby solution in the linear limit for the case of isotropically growing inclusions. Moreover, this model matches almost identically to 2D finite element simulations for large incompatibilities, far beyond the linear range, while providing a complete description of the field through a single function. Finally, it is suggested that the simplified solution can enable the use of homogenization methods for future nonlinear micromechanical models and can help to elucidate various growth phenomena observed in nature.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Recurrent Metastatic Events</title>
<link href="https://hdl.handle.net/1721.1/152875" rel="alternate"/>
<author>
<name>Singh, Harveer</name>
</author>
<id>https://hdl.handle.net/1721.1/152875</id>
<updated>2023-11-03T03:03:34Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Modeling Recurrent Metastatic Events
Singh, Harveer
Progression of cancer is marked by metastatic spread, with certain tumors preferentially spreading to specific organ sites – known as organotropism. The site of metastatic spread can significantly impact the mortality of cancer patients, for example with metastases to the brain being highly lethal, but the underlying mechanisms are poorly understood. Here, we aim to characterize the genetic landscape of metastatic drivers to specific organ sites using large-scale tumor sequencing and medical record data. We propose and evaluate a recurrent event survival model that draws additional statistical power from patients with multiple metastases, while modeling loss to follow up and mortality. We analyze tumor sequencing data from over 15,000 unique patients across 8 primary cancers and 7 target organ sites to identify genetic drivers of organotropism among a panel of 547 genes. We identify 1,130 somatic alterations significantly associated with organotropism, including 171 associations with brain metastases. We train a penalized predictive model that can accurately identify individuals at high risk for metastases to specific organ sites in held out samples. For example, the predicted top 10% of non-small cell lung cancer patients exhibit a hazard ratio of 1.96 for brain metastases relative to the bottom 10%. Our results demonstrate the power of recurrent event modeling in a real world clinical cohort to characterize the genetic landscape of organotropic events.
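[Editor's illustration] For readers unfamiliar with recurrent event survival analysis, here is a minimal sketch in the counting-process (start-stop) data format, fit with lifelines' CoxTimeVaryingFitter (all values fabricated; the model described above additionally handles loss to follow-up and mortality, which this sketch does not):

# Minimal recurrent-event sketch in counting-process (start-stop) format.
# Each row is one at-risk interval; a patient contributes one row per observed
# metastasis plus a final censored interval. All values are fabricated.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

df = pd.DataFrame({
    "id":         [1, 1, 1, 2, 2, 3],
    "start":      [0, 120, 300, 0, 200, 0],      # days at risk, per interval
    "stop":       [120, 300, 410, 200, 500, 365],
    "event":      [1, 1, 0, 1, 0, 0],            # 1 = metastasis ended interval
    "driver_mut": [1, 1, 1, 0, 0, 1],            # hypothetical somatic alteration
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()  # hazard ratio for 'driver_mut' pooled across recurrences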
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Pipeline for Synthesizing Action-conditioned Human Motion from Raw Motion Capture Data</title>
<link href="https://hdl.handle.net/1721.1/152874" rel="alternate"/>
<author>
<name>Tiwari, Ritaank</name>
</author>
<id>https://hdl.handle.net/1721.1/152874</id>
<updated>2023-11-03T04:01:02Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A Pipeline for Synthesizing Action-conditioned Human Motion from Raw Motion Capture Data
Tiwari, Ritaank
In many sports, less-experienced trainees will often draw inspiration from videos of experts. While this can be an effective tool for improvement, this process lacks the ability for the trainee to specifically focus on improving their skills based on the limitations of their current abilities, body type, and weaknesses.&#13;
&#13;
Since sports are very competitive, there exists a need to convert expert movements to a series of standardizable forms and movements that can then be pedagogically applied to the differing needs of various trainees: specifically, their different abilities, body types, and weaknesses.&#13;
&#13;
Effectively, this conversion requires a pipeline that can take an input of motion capture data, automatically label the markers used, create a skeletal representation, and then train a machine learning model to accurately synthesize human motion, conditioned on the action type.&#13;
&#13;
The outputted motions can be rendered for any body type and could be customized to the trainee. The designed pipeline is not fencing-specific: it is highly adaptable to the nature of the data or sport, robust to errors and noise, and tightly integrated in an easy-to-use library.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovation Project Funding: A Framework for Rapid Evaluation of Innovation Projects for Implementation Using a System Approach</title>
<link href="https://hdl.handle.net/1721.1/152873" rel="alternate"/>
<author>
<name>Gonzalez, Nicholas Ciro</name>
</author>
<id>https://hdl.handle.net/1721.1/152873</id>
<updated>2023-11-03T03:57:40Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Innovation Project Funding: A Framework for Rapid Evaluation of Innovation Projects for Implementation Using a System Approach
Gonzalez, Nicholas Ciro
Success rates of corporate innovation are notoriously low. Improving corporate innovation success rates increases investment efficiency and enables progress toward an improved future. A literature review was completed to develop an understanding of innovation strengths and weaknesses often present in corporations. System engineering and quantitative analysis tools were explored to address the common weaknesses present in corporate innovation investment. The investment step was targeted as a critical decision point for progressing proposals forward for further implementation. The framework mitigates common pitfalls of corporate innovation while enabling the corporation to architect the innovation process to fit its needs. The framework is a five-step process: risk rank to define the predictors of innovation project success, establish a success function to calculate innovation success likelihood, solicit project proposals from the entire employee base, plot a tradespace to visualize the tradeoffs between all possible innovation projects, and finally select the portfolio of projects for investment.
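[Editor's illustration] As a rough sketch of how the success-function and tradespace steps might be computed (the weights, predictors, and proposals below are fabricated assumptions, not the framework's actual inputs):

# Hedged sketch of the framework's scoring and tradespace steps (steps 2, 4, 5).
import numpy as np

weights = np.array([0.5, 0.3, 0.2])     # risk-ranked predictors of success
# Each proposal scored 0-1 on: team experience, market pull, technical readiness.
proposals = np.array([
    [0.9, 0.4, 0.7],
    [0.6, 0.8, 0.5],
    [0.3, 0.9, 0.9],
])
cost = np.array([2.0, 1.0, 1.5])        # investment required, $M (fabricated)

success = proposals @ weights           # step 2: success-function likelihood
# Steps 4-5: keep proposals not dominated by a cheaper, more likely alternative.
keep = [i for i in range(len(cost))
        if not any(j != i and cost[i] >= cost[j] and success[j] >= success[i]
                   for j in range(len(cost)))]
print("success likelihoods:", success.round(2))
print("Pareto-efficient portfolio candidates:", keep)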
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A LIGO Double Pendulum Suspension Prototype for Reducing Unwanted Cross-Couplings</title>
<link href="https://hdl.handle.net/1721.1/152872" rel="alternate"/>
<author>
<name>Lee, Regina E.</name>
</author>
<id>https://hdl.handle.net/1721.1/152872</id>
<updated>2023-11-03T03:01:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A LIGO Double Pendulum Suspension Prototype for Reducing Unwanted Cross-Couplings
Lee, Regina E.
The Laser Interferometer Gravitational-Wave Observatory (LIGO) is a Michelson Interferometer with 4km long arms used to detect gravitational waves passing through earth. LIGO uses extremely isolated optics in the form of suspensions to measure slight changes in laser path length down the two interferometer arms. Unwanted cross-couplings between degrees of freedom in LIGO suspensions pose a large problem when trying to isolate their optics in the interferometer.&#13;
&#13;
This thesis provides an analysis of the effects of changing the wire geometry of a double pendulum as a case study for LIGO pendulum designs. By changing the wire attachment point to be closer to the center of mass, we are able to see a decrease in longitudinal-to-pitch coupling by about a factor of 3. We observe that the pitch-to-pitch coupling decreases by a factor of approximately 1.5 at DC when comparing the new four-wire configuration to the original two-wire configuration. However, the first pitch resonance increases slightly. This resonance is most influenced by a combination of the wire attachment point and spring stiffness. The resonance can be moved around by changing these factors.&#13;
&#13;
This project has two main components. The first is a state-space model that describes the equations of motion for the double pendulum and is used to predict dynamic responses. The second is the construction of a physical double pendulum prototype used to verify results from the model. The experimental results show differences in dynamics compared to the state-space model due to off-center forcing, and the model was updated to include these dynamics. The physical pendulum is set up outside of vacuum and is not manufactured to the tight tolerances of real LIGO suspensions. We therefore lacked the precision necessary to experimentally attach the wires directly at the center of mass and did not measure those transfer functions. In conclusion, our observations lead us to believe that suspending the top mass with four wires is beneficial for reducing the longitudinal-to-pitch coupling. However, it is necessary to align the pivot point of the wires with the actuation point in order to demonstrate this. Future research can place the wire pivot point directly at the actuation point.
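As a minimal illustration of the state-space approach, the sketch below linearizes a generic two-degree-of-freedom suspension, M q'' + C q' + K q = F u, into x' = A x + B u and evaluates a longitudinal-to-pitch transfer function; the matrices are hypothetical placeholders, not the thesis's pendulum parameters.

import numpy as np
from scipy import signal

M = np.diag([1.0, 1.0])                   # masses/inertias (longitudinal, pitch), hypothetical
K = np.array([[4.0, -0.5], [-0.5, 2.0]])  # stiffness with a cross-coupling term
Cd = 0.02 * K                             # light proportional damping

n = M.shape[0]
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K, -Minv @ Cd]])
B = np.vstack([np.zeros((n, 1)), Minv @ np.array([[1.0], [0.0]])])  # force on DOF 1
Cout = np.array([[0.0, 1.0, 0.0, 0.0]])   # observe pitch: longitudinal-to-pitch coupling
sys = signal.StateSpace(A, B, Cout, np.zeros((1, 1)))

w, mag, phase = signal.bode(sys, np.logspace(-1, 2, 200))
print(mag[:3])                            # transfer-function magnitude (dB) at low frequency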
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a market-ready tractor for small farms in low- and middle-income countries</title>
<link href="https://hdl.handle.net/1721.1/152871" rel="alternate"/>
<author>
<name>Goldbach, Collin J.</name>
</author>
<id>https://hdl.handle.net/1721.1/152871</id>
<updated>2023-11-03T03:01:01Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design of a market-ready tractor for small farms in low- and middle-income countries
Goldbach, Collin J.
This paper presents the design, testing, and user feedback of a new prototype of a tractor platform intended for use on small, resource-constrained farms. This development builds on past work by implementing several upgrades that promote market competitiveness and maximize functionality, ergonomics, and aesthetics. Stakeholder discussions, review of prior art, and recommendations from past authors were used to draft new functional requirements for a better vehicle. Hydraulic power systems were implemented that significantly improve user comfort by automating repetitive or unwieldy tasks. Newly designed crop-spraying solutions based on feedback from farmers allowed the tractor to perform crop maintenance functions that larger vehicles cannot, while also reducing worker exposure to harmful chemicals. A rear-oriented PTO was installed to allow the vehicle to power external implements. A redesigned stabilizing solution increased the vehicle’s versatility in managing various crops and in transit between properties. The upgraded vehicle was tested in Massachusetts and validated by stakeholder surveys in India. Farmers from Massachusetts and from the Philippines who tested the vehicle responded positively. They indicated the tractor would be a valuable addition to their small farms and would substantially reduce drudgery. Testers found the format of the vehicle familiar, easy to learn, and comfortable to ride. This paper demonstrates that two-wheeled tractors are not only viable, but can deliver the same utility as conventional tractor layouts at a significantly lower cost.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Optimization of a Handle Robot for Providing Bodily Support to Elderly Persons</title>
<link href="https://hdl.handle.net/1721.1/152870" rel="alternate"/>
<author>
<name>Bolli Jr., Roberto A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152870</id>
<updated>2023-11-03T03:49:04Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design and Optimization of a Handle Robot for Providing Bodily Support to Elderly Persons
Bolli Jr., Roberto A.
Age-related loss of mobility and an increased risk of falling remain major obstacles for older adults to live independently. Many elderly people lack the coordination and strength necessary to perform activities of daily living, such as getting out of bed or stepping into a bathtub. A traditional solution is to install grab bars around the home. For assisting in bathtub transitions, grab bars are fixed to a bathroom wall. However, they are often too far to reach and stably support the user; the installation locations of grab bars are constrained by the room layout and are often suboptimal. In this thesis, we present a mobile robot that provides an older adult with a handlebar located anywhere in space: “Handle Anywhere”. The robot consists of an omnidirectional mobile base attached to a repositionable handlebar. We further develop a methodology to optimally place the handle to provide the maximum support for the elderly user while performing common postural changes. A cost function with a trade-off between mechanical advantage and manipulability of the user’s arm was optimized in terms of the location of the handlebar relative to the user. The methodology requires only a sagittal-plane video of the elderly user performing the postural change, and thus is rapid, scalable, and uniquely customizable to each user. A proof-of-concept prototype was built, and the optimization algorithm for handle location was validated experimentally.&#13;
&#13;
Additionally, we present the results of a study to discover any correlations between an elderly person’s preferred handlebar pose and various demographic indicators, self-rated mobility for tasks requiring postural change, and biomechanical markers. For simplicity, we considered only the case where the handlebar was positioned directly in front of the user, as this confined the relevant body kinematics to a 2D sagittal plane. This data-driven approach complements the cost function described earlier by assessing how a handlebar should be positioned based on data from actual elderly people.&#13;
&#13;
Lastly, we introduce a novel design for a wheel capable of changing configuration based on the surface underneath it, such that there is always a high coefficient of friction between the wheel and the ground. The wheel design was refined through experimental tests on various floor surfaces commonly found in the homes of elderly people.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing Pattern and Anomaly Detection Methods in Influence Campaigns</title>
<link href="https://hdl.handle.net/1721.1/152868" rel="alternate"/>
<author>
<name>Mitchell, William B.</name>
</author>
<id>https://hdl.handle.net/1721.1/152868</id>
<updated>2023-11-03T03:16:07Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Developing Pattern and Anomaly Detection Methods in Influence Campaigns
Mitchell, William B.
Influence operations are a prominent psychological component of modern warfare. Recent historic events including the 2016 US election, the 2021 Myanmar coup, and the Russian invasion of Ukraine in early 2022 have catapulted Department of Defense (DoD) interest in modeling and predicting outcomes of military and political events, particularly in regions of strategic interest to the US. MIT Lincoln Laboratory, Group 52, under contract with USTRANSCOM, has developed the Global Influence Model (GIM) to evaluate the information landscape at scale. This project seeks to expand on the previous work of Group 52 on GIM, incorporating pattern and anomaly detection methods. Several statistical and machine learning methods were applied to a data set of approximately 30,000 news articles from a 2-year period between August 2019 and August 2021. Statistical methods included moving average models and Singular Spectrum Analysis (SSA). Machine learning techniques included the use of an autoencoder and an LSTM neural network. These methods provide different ways to visualize and characterize the data. Together, the approaches offer a holistic picture of events in specific countries over a time period of interest. The figures generated by these techniques may be a useful tool for a military intelligence analyst. These products allow for the rapid visualization of large news article data sets that can help model influence campaigns as they unfold.
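For illustration, a moving-average baseline of the kind named above can flag anomalous days in an article-count series as follows; the synthetic Poisson series and the 3-sigma threshold are placeholders, not the GIM data or settings.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
counts = pd.Series(rng.poisson(40, size=730).astype(float))  # ~2 years of daily counts
counts[500] += 120.0                                         # injected anomaly

window = 30
mean = counts.rolling(window).mean()
std = counts.rolling(window).std()
z = (counts - mean) / std
anomalies = z[z.abs() > 3].index     # days deviating 3+ sigma from the moving average
print(list(anomalies))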
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discovering Novel Microarchitectural Security Vulnerabilities in Modern Processors</title>
<link href="https://hdl.handle.net/1721.1/152860" rel="alternate"/>
<author>
<name>Ravichandran, Joseph</name>
</author>
<id>https://hdl.handle.net/1721.1/152860</id>
<updated>2023-11-03T03:42:45Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Discovering Novel Microarchitectural Security Vulnerabilities in Modern Processors
Ravichandran, Joseph
For decades, computer security issues such as viruses, worms, and Trojans have caused significant damage to computer systems across the world. Many of these security issues are caused by software vulnerabilities allowing memory corruption, a kind of attack in which the contents of a computer’s memory are corrupted by an attacker to change a program’s behavior. While much research has been done on improving software security, vendors are increasingly turning to hardware defenses to compensate for software vulnerabilities. One such example is ARM Pointer Authentication, a security feature that enforces pointer integrity through the use of cryptographic hashes.&#13;
&#13;
I will introduce the PACMAN attack, a novel attack methodology that defeats Pointer Authentication by leveraging the behavior of the CPU’s microarchitecture. I will present multiple proof-of-concept attacks showing PACMAN defeating Pointer Authentication on the Apple M1 SoC, the world’s first desktop processor that supports Pointer Authentication. I will also document the tools I have created to perform detailed reverse engineering of the microarchitecture on Apple Silicon platforms, enabling both this work and future research.&#13;
&#13;
I will also present two memory corruption vulnerabilities I have discovered and reported in modern operating systems as case studies of the kind of software vulnerability Pointer Authentication tries to mitigate. The first is an uninitialized memory issue in Linux, and the second is a race condition leading to a type confusion in XNU. Finally, I will present a series of classroom exercises I have created to teach students about CPU vulnerabilities like PACMAN.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Growth for Whom? Sacrifice of Chicago’s Chinatown Then and Now</title>
<link href="https://hdl.handle.net/1721.1/152857" rel="alternate"/>
<author>
<name>Chen, Yu Jing</name>
</author>
<id>https://hdl.handle.net/1721.1/152857</id>
<updated>2023-11-03T04:00:10Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Growth for Whom? Sacrifice of Chicago’s Chinatown Then and Now
Chen, Yu Jing
The utilitarian planning practices that marked the era of Urban Renewal and the beautification efforts of the City Beautiful Movement led to widespread displacement and destruction of many Chinatowns across America, including Chicago’s. The history of Chicago’s Chinatown tells the story of a community that has always been an afterthought in city planning and priorities, continually sidelined in service of “broader” city goals. Recent decades, however, have brought about a shift in planning paradigms, as equity and justice have become increasing priorities. This shift comes at a time when Chinatowns across the nation are experiencing change of their own as they face displacement pressures driven largely by downtown expansion. Chicago’s Chinatown, however, is an exception, widely regarded as America’s last growing Chinatown.&#13;
&#13;
Amidst these changing contexts, this thesis strives to understand how Chicago today has actually evolved in how it values and centers Chinatown in its planning processes, particularly as the largest private development in Chicago history, The 78, is slated to become Chinatown’s neighbor. Through the lens of The 78 planning process, this thesis seeks to illuminate whether and how Chicago city planning has evolved from the sacrificial way it historically treated Chinatown during the City Beautiful and Urban Renewal periods.&#13;
&#13;
This research relies on historical analysis of documents, maps, photographs, and more to understand the relationship between planning and Chinatown during the City Beautiful and Urban Renewal eras. I then examined The 78 development process beyond what was publicly reported by conducting a number of semi-structured interviews. Ultimately, I found that in many ways the sacrificial nature of planning has not changed, although the form this sacrifice takes is different. While the economic interests behind large-scale planning projects stay the same, social interests have evolved with changing societal values. Today, diversity has come to be viewed as an amenity or asset, and as such, Chinatown’s function as a cultural center is capitalized upon while ultimately still being subjected to sacrifice for the city’s economic advancement.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>VocalCords: Exploring Tactile Interaction and Performance with the Singing Voice</title>
<link href="https://hdl.handle.net/1721.1/152855" rel="alternate"/>
<author>
<name>Addae, Maxwell K.</name>
</author>
<id>https://hdl.handle.net/1721.1/152855</id>
<updated>2023-11-03T03:37:34Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">VocalCords: Exploring Tactile Interaction and Performance with the Singing Voice
Addae, Maxwell K.
The close relationship between touch, gesture, and sound plays a critical role in expressive musical performance. Many acoustic instruments, ranging from strings to brass to percussion, involve some coupling of the “feel” of the instrument in the hands and the corresponding sound produced. The singing voice, however, is one of the few musical instruments that typically does not involve touch-mediated interaction. Despite several neurological, psychological, and social connections demonstrated between the hands and voice, the coupling of touch and voice is surprisingly absent from traditional vocal performance technologies. This motivates VocalCords, which explores the design of a new digital music interface inviting tactile interaction and performance with the singing voice. The interface makes use of physical rubber cords, acting as stretch sensors, which are pulled and manipulated by the hands of the singer as they vocalize to augment and modify their voice in real time – as if they were able to physically “touch” their own vocal cords. This approach allows for expressive, tactile control over the singing voice and suggests a striking relationship between physical and musical tension. Through a series of prototyping iterations and a public performance with the interface, I explore the potential of touch-mediated vocal performance, as well as how this added tactile interaction may alter our experience with, and perception of, our singing voices.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Permutation-based Significance Tests for Multi-modal Hierarchical Dirichlet Processes with Application to Audio-visual Data</title>
<link href="https://hdl.handle.net/1721.1/152853" rel="alternate"/>
<author>
<name>Anderson, Madeline Loui</name>
</author>
<id>https://hdl.handle.net/1721.1/152853</id>
<updated>2023-11-03T04:06:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Permutation-based Significance Tests for Multi-modal Hierarchical Dirichlet Processes with Application to Audio-visual Data
Anderson, Madeline Loui
Complex underlying distributions in multi-modal data motivate the need for data fusion methods that integrate observations of different modalities in a meaningful way. We explore the multi-modal hierarchical Dirichlet process (mmHDP) mixture model as a Bayesian non-parametric approach to data fusion. In particular, we elaborate on its censored-data perspective, which aligns observations at the group level to accommodate missing data in any modality. To explore the model behavior, we develop a processing pipeline that applies the mmHDP to audio-visual data, a common and practical multi-modal system. We apply this pipeline to musical data with known audio-visual relationships and provide in-depth qualitative analyses of the learned model parameters. Because the mmHDP is non-parametric and clusters without supervision, it can be difficult to quantify the significance of the learned structure. We propose a novel permutation testing framework that empirically measures the significance of the mmHDP structure and demonstrate its viability using both synthetic and real audio-visual data. The results convey that the mmHDP model captures meaningful structure in the audio-visual data and that the permutation testing framework is a viable method for quantifying model significance.
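A minimal sketch of the permutation-testing idea, with a generic agreement statistic standing in for the mmHDP-specific structure measure and synthetic topic labels in place of learned model output:

import numpy as np

rng = np.random.default_rng(2)

def structure_statistic(audio_topics, visual_topics):
    # Placeholder statistic: how often paired groups share a topic label.
    return float(np.mean(audio_topics == visual_topics))

# Hypothetical topic assignments for 200 paired audio/visual groups,
# correlated by construction.
audio = rng.integers(0, 5, size=200)
visual = (audio + (rng.random(200) > 0.7)) % 5

observed = structure_statistic(audio, visual)
null = np.array([structure_statistic(audio, rng.permutation(visual))
                 for _ in range(2000)])
p_value = float(np.mean(null >= observed))   # significance of the paired structure
print(observed, p_value)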
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tame Long-Horizon Model-Based Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/152852" rel="alternate"/>
<author>
<name>Chen, Boyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/152852</id>
<updated>2023-11-03T03:39:23Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Tame Long-Horizon Model-Based Reinforcement Learning
Chen, Boyuan
Model-free reinforcement learning algorithms have exhibited great potential in solving single-task sequential decision-making problems with high-dimensional observations and long horizons, but are known to generalize poorly across tasks. Model-based RL, on the other hand, learns task-agnostic models of the world that naturally enable transfer across different reward functions, but struggles to scale to complex environments due to the compounding error of applying a learned dynamics model iteratively. To get the best of both worlds, we propose a self-supervised reinforcement learning method that enables the transfer of behaviors across tasks with different rewards, while circumventing the challenges of model-based RL. In particular, we show that self-supervised pre-training of model-free reinforcement learning with a number of neural-network random features as rewards allows implicit modeling of long-horizon environment dynamics. Planning techniques like model-predictive control using these implicit models then enable fast adaptation to problems with new reward functions. Our method is self-supervised in that it can be trained on offline datasets without reward labels, but can then be quickly deployed on new tasks. We validate that our proposed method enables transfer across tasks on a variety of manipulation and locomotion domains in simulation, opening the door to generalist decision-making agents.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Structural Design and Analysis for a DeNOₓ Catalyst in Aviation</title>
<link href="https://hdl.handle.net/1721.1/152851" rel="alternate"/>
<author>
<name>Strauch, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/152851</id>
<updated>2023-11-03T03:57:59Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Structural Design and Analysis for a DeNOₓ Catalyst in Aviation
Strauch, Michael
Nitrogen oxides, collectively known as NOₓ, are compounds that can cause health damage to humans, lead to smog production and acid rain, and cause the formation of ground-level ozone that is detrimental to life. NOₓ formation in aircraft engines occurs because of high-temperature reactions between the nitrogen and oxygen naturally present in the air, making NOₓ a spontaneous and unintended pollutant. One technology that has proven effective in controlling NOₓ emissions in other industries is selective catalytic reduction (SCR). As a post-combustion emissions control (PCEC) exhaust treatment, the technology works by introducing a nitrogen-rich reductant into the exhaust stream, then passing the flow through a catalyst. This device facilitates reactions between the dosed exhaust flow and the catalyst wall, creating harmless N₂ and H₂O at the cost of engine efficiency lost to added back pressure. In prior work, a “pleated filter” design of an SCR catalyst was proposed as a potential solution for reducing NOₓ in aviation. The work covered in this thesis describes the design and analysis approach of such a device to meet the dynamic loads encountered during flight. Enabling this technology requires a multi-level structural finite element analysis (FEA) of both the honeycomb plates and the frame components that support them. Using a stiffness matrix approach, the honeycomb catalyst was simplified into equivalent panels that were used to analyze the catalyst’s overall structure. The overall additional weight from the structure necessary to support this novel catalyst is estimated to be between 80 and 90 kg, within the additional-mass budget estimated in the original work. This implies that the design is structurally feasible.
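As a hedged illustration of the panel-equivalencing idea (using textbook Gibson-Ashby relations for a regular hexagonal honeycomb rather than the stiffness-matrix procedure actually used in this work; all numbers are hypothetical):

t_over_l = 0.05    # wall thickness / cell edge length, hypothetical
E_s = 70e9         # Pa, modulus of the solid wall material, hypothetical

rho_rel = (2.0 / 3**0.5) * t_over_l     # relative density of a regular hexagonal grid
E_inplane = 2.3 * E_s * t_over_l**3     # equivalent in-plane modulus (bending-dominated)
E_axial = E_s * rho_rel                 # equivalent out-of-plane modulus (stretch-dominated)

print(f"relative density: {rho_rel:.3f}")
print(f"equivalent in-plane modulus: {E_inplane / 1e6:.1f} MPa")
print(f"equivalent out-of-plane modulus: {E_axial / 1e9:.2f} GPa")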
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fueling an Energy Transition: Designing an Optimal Portfolio of Competing Fuels Under Uncertainty</title>
<link href="https://hdl.handle.net/1721.1/152850" rel="alternate"/>
<author>
<name>Abel, Samuel A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152850</id>
<updated>2023-11-03T03:39:01Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Fueling an Energy Transition: Designing an Optimal Portfolio of Competing Fuels Under Uncertainty
Abel, Samuel A.
To facilitate the energy transition, firms must allocate their investment between incumbent and emerging fuel capacity. Understanding how to pace investment between competing energy options during this transition is crucial for energy companies and policymakers. Allocating investments among competing fuel technologies is complex due to uncertainty, improving costs of emerging fuels, market competition, and the delay between capacity investment and production.&#13;
&#13;
To address this complexity, we develop a stochastic dynamic optimization model incorporating dynamic decision-making, Nash-Cournot equilibrium between competing firms, and uncertainty in competing-fuel parameters such as hydrogen demand and technology improvements. The model is also the first, to our knowledge, to include technology learning rates in dynamic optimization models for energy markets with firms in Cournot competition. Learning rates are a critical factor in assessing the cost improvements and competitiveness of emerging fuels.&#13;
&#13;
The model provides valuable insights for profit-driven firms and policymakers:&#13;
(1) Firms need to account for market structure and learning rates to optimize capital allocation between fuels; neglecting these factors can lead to sub-optimal immediate capacity investment decisions and to regret, measured as sub-optimal private gain.&#13;
(2) Firms also need to incorporate stochastic modeling. We show that deterministic models lead to sub-optimal capacity investment decisions and to profit regret that grows as the uncertainty range increases.&#13;
We also observe that learning rates can be complementary with carbon taxes and competition, which has implications for policymakers:&#13;
(3) Encouraging market participation reduces fuel costs through learning, which increases investment in the emerging fuel by more than improved competition alone.&#13;
(4) Early implementation of carbon taxes can encourage capacity investment and production. Under certain circumstances, with a sufficient learning rate, early implementation can reduce the need for stricter future taxes.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Idea of Heritage in Nineteenth-Century Iran: Nādir Mīrzā’s Account on Tabriz</title>
<link href="https://hdl.handle.net/1721.1/152848" rel="alternate"/>
<author>
<name>Moossavi, Boshra</name>
</author>
<id>https://hdl.handle.net/1721.1/152848</id>
<updated>2023-11-03T03:01:51Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Idea of Heritage in Nineteenth-Century Iran: Nādir Mīrzā’s Account on Tabriz
Moossavi, Boshra
This thesis focuses on the concept of heritage and its preservation in nineteenth-century Iran through the perspective of Prince Nādir Mīrzā Qajar (1827-1888/9). While heritage and preservation have been extensively studied in the Euro-American context, little attention has been given to their meanings in non-Western discourses, particularly in Iran. The few studies on Iran focus on the institutionalization of heritage and the Western influences brought to it by Europeans and Iranian reformists. This research seeks to provide a fresh perspective on the idea of heritage among non-reformist groups who were rooted in the religious and cultural traditions of Iran. To this end, Nādir Mīrzā, a Qajar prince whose writing reflects traditional Iranian patterns of thought, was selected for this study. Through an in-depth investigation of Nādir Mīrzā’s The History and Geography of Tabriz, I shed light on the difference between Nādir Mīrzā’s understanding of architecture and what was later promoted as heritage by the Society for National Heritage in the twentieth century. The manuscript belongs to the antiquarian category of texts that focus on history and geography in tribute to rulers and princes. However, unlike other works of this genre, which mainly consist of chronicles and descriptions, I contend that The History and Geography of Tabriz offers insight into a broader era of decay of the traditional built environment in Qajar Iran. Moreover, the city of Tabriz, situated near the Ottoman Empire and inhabited predominantly by Azari speakers, is significant from a strategic and ethnic point of view.&#13;
&#13;
To this end, I examine Nādir Mīrzā’s background, including his family lineage, education, and writing style, to understand how his understanding of heritage was shaped. I then investigate how Nādir Mīrzā’s writing is itself a form of heritage, in that it attempts to preserve certain aspects of history, and how religious, class, linguistic, and ethnic identities influenced his choices in historicizing the past. Then, after a brief discussion of the reformists’ values regarding heritage, I uncover Nādir Mīrzā’s specific values by analyzing his accounts of buildings. Finally, I investigate the role of those values in the preservation and maintenance of buildings by extracting the reasons for construction, repair, and destruction from Nādir Mīrzā’s accounts. The conclusion proposes further investigation into other sources to complete the narrative of the non-European understanding of heritage in nineteenth-century Iran.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The power-gas demand impacts and regulatory implications for the future of gas systems under the electrification of space heating in cold climates</title>
<link href="https://hdl.handle.net/1721.1/152843" rel="alternate"/>
<author>
<name>Santoni-Colvin, Morgan</name>
</author>
<id>https://hdl.handle.net/1721.1/152843</id>
<updated>2023-11-03T03:33:57Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The power-gas demand impacts and regulatory implications for the future of gas systems under the electrification of space heating in cold climates
Santoni-Colvin, Morgan
The call for action to mitigate GHG emissions necessitates the decarbonization of the building sector. The electrification of heating, especially via efficient air-source heat pumps coupled with a low-carbon electricity grid, is considered an attractive option for displacing emissions from fossil-fueled heating systems. While the opportunity for decarbonization is high in emission-intensive housing stocks such as that of the U.S. New England region, the high demand for heating in cold climates elicits concerns about energy demand impacts. Furthermore, there is concern about what electrification and the broader call for decarbonization might imply for gas distribution systems, which will face declining usage and most likely infrastructural retirement.&#13;
&#13;
First, this thesis develops a bottom-up building energy modeling framework to quantify the hourly power and gas demand impacts of the electrification of residential heating in New England under a range of electrification and weather scenarios for 2050. We find that deep electrification greatly diminishes gas demand and increases electricity demand, with a potentially drastic increase in peak electricity demand given current technologies. Furthermore, the weather-induced variation in peak demand becomes more pronounced. These adverse demand impacts can be mitigated by envelope improvements and motivate the implementation of demand-side flexibility, but the effectiveness of these measures may be limited by long peak-demand durations. The adverse demand impacts of deep electrification must nevertheless be weighed against the downsides of less aggressive electrification, which might actually result in worse demand impacts in the long term. Second, we compare the frameworks Massachusetts regulators currently use to plan for the future of the gas system against those of other states, finding that policymakers in Massachusetts must address several issues in order to prepare for the transformative effect that electrification will have on gas distribution systems. The resulting recommendations highlight the need for continuous long-term gas planning procedures, legal reform of the consumer right to gas service, a cautious approach to alternative fuels as a mechanism for gas system decarbonization, and prioritization of equity in allocating the costs of gas system retirement.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Battery-Electric-Bus Transit System Design</title>
<link href="https://hdl.handle.net/1721.1/152840" rel="alternate"/>
<author>
<name>Besa Lehmann, Jorge Andrés</name>
</author>
<id>https://hdl.handle.net/1721.1/152840</id>
<updated>2023-11-03T03:20:19Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Battery-Electric-Bus Transit System Design
Besa Lehmann, Jorge Andrés
The increasing availability of battery-electric buses (e-buses) as a sustainable alternative for public transportation has sparked considerable interest in recent years. With a notable decrease in lithium-ion battery prices, e-buses have become a competitive option in terms of total cost of ownership when compared to diesel buses. This trend is driven by a growing awareness of the environmental impact of the transportation sector, which accounts for a significant portion of global CO2 emissions. Transit authorities at the forefront of this transition face important challenges in scaling up their operations, starting with the selection of their charging infrastructure and battery-electric-bus equipment. The problem is generally approached as a cost optimization that fails to represent system uncertainties comprehensively, undermining the capacity of solutions to guarantee high service levels to the public. This research contributes in this regard by analyzing and comparing eight infrastructure and equipment scenarios from cost and service-level perspectives, using the city of Chicago as a case study. First, an electric vehicle scheduling problem (e-VSP) is solved for each charging configuration to find efficient, robust schedules that can withstand the uncertainties of travel time and energy demand. Then, each scenario undergoes a single-charger failure simulation to assess the operational impact of energy supply disruptions. The simulation quantifies the daily number of buses at risk of breakdown (i.e., a depleted battery) as a proxy for service-level degradation. Finally, the life-cycle costs of each scenario are calculated according to their infrastructure and scheduled operation and compared alongside the bus breakdowns reported under failure. The study finds that charging configurations favoring the concentration of power capacity (i.e., chargers at the depot only) can better withstand operational uncertainties than decentralized charging configurations that favor network coverage (i.e., on-route charging). The failure assessment corroborates this finding by reporting a critical degradation of service levels (i.e., multiple trip cancellations) on charging networks with single-charger charging stops. Ultimately, this research concludes that the selection of the charging configuration will depend on the transit agency’s budget and risk profile, since the higher reliability provided by centralizing power capacity comes at a higher life-cycle cost, even when accounting for the effects of innovation in battery technology.
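The single-charger failure simulation can be illustrated with a small Monte Carlo sketch; fleet size, pack capacity, per-trip energy, and charging cadence below are hypothetical placeholders, not the Chicago case-study values.

import numpy as np

rng = np.random.default_rng(3)
n_buses, n_trips = 50, 8
battery_kwh, charge_kwh = 300.0, 60.0    # pack size and per-stop charge, hypothetical

def buses_at_risk(charger_ok):
    at_risk = 0
    for _ in range(n_buses):
        soc = battery_kwh
        depleted = False
        for trip in range(n_trips):
            soc -= rng.normal(45.0, 8.0)     # stochastic trip energy demand
            if 0 > soc:                      # depleted battery: breakdown risk
                depleted = True
                break
            if charger_ok and trip % 2 == 1:
                soc = min(battery_kwh, soc + charge_kwh)
        at_risk += int(depleted)
    return at_risk

print("nominal:", buses_at_risk(True), "charger failure:", buses_at_risk(False))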
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Long Sequence Transformer Variants on Varying Context Length</title>
<link href="https://hdl.handle.net/1721.1/152839" rel="alternate"/>
<author>
<name>Sun, Melinda</name>
</author>
<id>https://hdl.handle.net/1721.1/152839</id>
<updated>2023-11-03T03:45:12Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Long Sequence Transformer Variants on Varying Context Length
Sun, Melinda
Transformers are powerful and effective tools in natural language processing, but their scalability is limited by the quadratic complexity of attention. Several transformer variants that address this problem have recently been proposed, including Moving Average Equipped Gated Attention (Mega). In this thesis, we evaluate how effectively Mega uses past context by comparing its perplexity trend as context length varies with that of a standard transformer. We find that Mega does not show greater benefit from longer context in a Wikipedia or book setting, though it has a much better ability to extrapolate beyond training context lengths.
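A minimal sketch of this evaluation protocol with a Hugging Face causal language model as a stand-in (the model name and evaluation file are placeholders; the thesis compares a Mega variant against a standard transformer):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"                                  # stand-in; swap in the models under comparison
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

text = open("eval.txt").read()                 # placeholder held-out document
ids = tok(text, return_tensors="pt").input_ids[0]

for ctx in (64, 128, 256, 512, 1024):
    window = ids[:ctx].unsqueeze(0)
    with torch.no_grad():
        loss = model(window, labels=window).loss     # mean next-token NLL over the window
    print(ctx, float(torch.exp(loss)))               # perplexity at this context length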
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Building a Language Conditioned System for 6-DoF Tabletop Manipulation</title>
<link href="https://hdl.handle.net/1721.1/152838" rel="alternate"/>
<author>
<name>Parakh, Meenal</name>
</author>
<id>https://hdl.handle.net/1721.1/152838</id>
<updated>2023-11-03T03:49:02Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Building a Language Conditioned System for 6-DoF Tabletop Manipulation
Parakh, Meenal
We present a full-stack modular system for solving tabletop manipulation tasks from natural language task descriptions. The tasks the system can perform include everyday pick-and-place tasks, such as sorting and rearrangement, and it has the ability to learn new skills. The system primarily consists of three components: perception, planning, and execution, each of which exploits recent advances in large machine-learning models developed for particular tasks. The three components interact with each other through carefully designed interfaces, which are themselves crucial contributions of this work. We further evaluate different parts of the system, belonging to perception and execution, and showcase performance on example tasks, both in the real world and in simulation. The main advantage of a modular system is that no training data is required, either to train an end-to-end model or to fine-tune one. Further, recent advances in large models such as Segment Anything and GPT-4 have made it possible to construct a modular system that incorporates vast common-sense knowledge, as opposed to traditional approaches. These large models have been trained on billions of data points of internet-scale data, allowing for zero-shot application in our system with no need for large-scale data collection. Building such modular systems has the potential to minimize the labor and time spent on the data collection step in robotics.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scene Perception for Simulated Intuitive Physics via Bayesian Inverse Graphics</title>
<link href="https://hdl.handle.net/1721.1/152837" rel="alternate"/>
<author>
<name>Shehada, Khaled K.</name>
</author>
<id>https://hdl.handle.net/1721.1/152837</id>
<updated>2023-11-03T03:02:24Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Scene Perception for Simulated Intuitive Physics via Bayesian Inverse Graphics
Shehada, Khaled K.
Humans have a wide range of cognitive capacities that make us adept at interpreting our physical world. Every day, we encounter new environments, yet we can parse those environments with limited visual exposure and make fairly accurate inferences about unfamiliar objects. Emulating scene understanding capacities in computational models has numerous applications ranging from autonomous driving to virtual reality. Despite the proficiency demonstrated by deep neural networks in pattern recognition, recent works have uncovered challenges in their abilities to encode prior physical knowledge, form visual concepts, and perform compositional reasoning, such as inferring inter-object relations like containment. To this end, the thesis introduces the Simulated COgnitive Tasks (SCOT) benchmark, a large-scale synthetic dataset and data creation codebase allowing for the procedural generation of videos of simulated cognitive tasks targeting intuitive physics understanding. Those cognitive tasks are adapted from tests in the literature used to comparatively assess the cognitive capacities of non-human primates. Additionally, the thesis presents an analysis of several deep learning models on the benchmark, underlining their limitations in tasks involving object permanence comprehension, quantities, and compositionality and their inability to generalize learned knowledge to complex dynamic scenes. In response to these limitations, we propose a probabilistic generative approach that leverages Bayesian inverse graphics to learn structured scene representations that facilitate learning new objects and tracking objects in dynamic scenes. Our evaluation of this model on SCOT revealed near-perfect performance on most tasks with significant data efficiency, suggesting that structured representations and symbolic inference can cooperate with deep learning methods to interpret complex 3D scenes accurately. Overall, this thesis contributes to the field of artificial intelligence (AI) by presenting a new method for improving scene understanding in AI models and providing a benchmark for assessing the visual cognitive capacities of computational models.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visualization and Behavioral Testing Of Common Sense Generative Programs</title>
<link href="https://hdl.handle.net/1721.1/152834" rel="alternate"/>
<author>
<name>Chuang, Keenly Simon</name>
</author>
<id>https://hdl.handle.net/1721.1/152834</id>
<updated>2023-11-03T04:07:39Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Visualization and Behavioral Testing Of Common Sense Generative Programs
Chuang, Keenly Simon
Probabilistic generative programs are powerful tools for modeling complex 3D worlds containing objects and agents. Recent advances in these programs have resulted in the creation of rich models whose traces represent 3D scenes, but challenges remain in using visualization and simulation tools for practical implementations. In this thesis, I describe the development of infrastructure to accelerate research in this area. Specifically, I present a pipeline for synthetic data generation with physics simulation capabilities and a suite of rendering options. By leveraging existing scene graph generators and multiple visualization engines, photorealistic datasets can be produced to evaluate probabilistic generative programs and to create stimuli for gathering information on human behavior. This framework allows fine-grained temporal tracking of object poses and velocities, both with and without occlusion, facilitating the collection of rich human behavioral data on dynamic object tracking. More broadly, the tools developed here provide visualization, debugging capabilities, and configurable synthetic datasets to benchmark future progress in 3D scene understanding. Developing this infrastructure is an investment in improved synthetic data generation and analysis frameworks, and an important step toward robust probabilistic generative programs for 3D world modeling.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Illuminate</title>
<link href="https://hdl.handle.net/1721.1/152832" rel="alternate"/>
<author>
<name>Cocking, Chelsi Alise</name>
</author>
<id>https://hdl.handle.net/1721.1/152832</id>
<updated>2023-11-03T04:07:32Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Illuminate
Cocking, Chelsi Alise
What would it be like if we could see our movement?&#13;
&#13;
This thesis presents Illuminate, an interactive art installation in which the movements of a person through open space are visually augmented and brought to life in front of them in real time through custom interactive visualization software. Seamlessly merging physical and digital space, Illuminate submerges a participant in an artificial reality in which their usually unseen paths of movement become visible, aiming to give the participant a visceral yet magical moment in which they can see, interact with, and play with their once-invisible wakes of motion, pushing the boundaries of our senses and making the invisible visible. The project also explores the themes of spatial computing, bodily expression, abstraction, and choreographic interfaces.&#13;
&#13;
Illuminate provides a deeper understanding of bodily motion to a general audience through a playful interactive performance space made for human creativity, expression, and public play, investigating the poetic implications of making the invisible trails of human movement visible. It explores the relationship between our bodies' movement, time, space, and the digital world, provoking questions about the possible implications of a world in which we can more casually and effortlessly control and interact with digital elements spatially, through the free, unrestricted movement of our bodies.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wall-modeled Large-eddy Simulation Based on Building-block Flows</title>
<link href="https://hdl.handle.net/1721.1/152829" rel="alternate"/>
<author>
<name>Ling, Yuenong</name>
</author>
<id>https://hdl.handle.net/1721.1/152829</id>
<updated>2023-11-03T04:00:22Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Wall-modeled Large-eddy Simulation Based on Building-block Flows
Ling, Yuenong
A unified subgrid-scale (SGS) and wall model, the building-block flow model (BFM), is proposed for wall-modeled large-eddy simulation (WMLES) by treating the flow as a collection of building blocks that enable prediction of the eddy viscosity. The core assumption of the model is that simple canonical flows contain the essential physics needed to provide accurate predictions of the SGS tensor in more complex flows. The model is constructed to predict zero-pressure-gradient wall-bounded turbulence, adverse/favorable pressure-gradient effects, separation, and laminar flow. The approach is implemented using a Bayesian classifier, which identifies the contribution of each building block in the flow, and a neural-network-based predictor, which estimates the eddy viscosity based on the building-block units. The training data are obtained directly from wall-modeled LES with an exact SGS/wall model for the mean quantities, guaranteeing consistency with the numerical discretization. The model is validated in canonical flows, on the NASA High-Lift Common Research Model, and on a Gaussian bump, and is shown to improve predictions with respect to current modeling approaches. The modular extensibility of the BFM paradigm will allow future improvements by incorporating additional physics.
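Schematically, the classifier/predictor composition can be read as a mixture of per-block eddy-viscosity estimates weighted by classifier posteriors; the sketch below uses untrained stand-ins for both models and made-up feature dimensions.

import numpy as np

rng = np.random.default_rng(4)
n_points, n_features, n_blocks = 1000, 6, 3   # e.g. ZPG wall flow, pressure gradient, separation

features = rng.normal(size=(n_points, n_features))   # local non-dimensional flow features

def classifier_posteriors(x):
    # Stand-in for the Bayesian classifier: softmax over block scores.
    logits = x @ rng.normal(size=(n_features, n_blocks))
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def block_predictors(x):
    # Stand-in for the neural-network predictor: one estimate per block.
    return np.abs(x @ rng.normal(size=(n_features, n_blocks)))

p = classifier_posteriors(features)
nu_blocks = block_predictors(features)
nu_t = np.sum(p * nu_blocks, axis=1)          # blended eddy viscosity fed back to the LES
print(nu_t[:5])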
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Weakly Supervised Representation Learning for Trauma Injury Pattern Discovery</title>
<link href="https://hdl.handle.net/1721.1/152826" rel="alternate"/>
<author>
<name>Jin, Qixuan</name>
</author>
<id>https://hdl.handle.net/1721.1/152826</id>
<updated>2023-11-03T03:18:03Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Weakly Supervised Representation Learning for Trauma Injury Pattern Discovery
Jin, Qixuan
Given the complexity of trauma presentations, particularly those involving multiple areas of the body, overlooked injuries are common during a clinician's initial assessment. We are motivated to develop an automated trauma pattern discovery framework for comprehensive identification of injury patterns, which may eventually support diagnostic decision-making. We analyze 1,162,399 patients from the Trauma Quality Improvement Program with a disentangled variational autoencoder, weakly supervised by a latent-space classifier of auxiliary features. We also develop a novel scoring metric that serves as a proxy for clinical intuition in extracting clusters with clinically meaningful injury patterns. We validate the extracted clusters with clinical experts and explore the patient characteristics of selected groupings. Our metric enables model selection and effectively filters clusters for clinically validated relevance.
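A minimal PyTorch sketch of the weak-supervision idea: a VAE whose latent code also feeds a classifier of auxiliary features, so the loss combines reconstruction, KL, and classification terms. Dimensions, data, and the 0.1 weighting are placeholders, not the study's configuration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class WeaklySupervisedVAE(nn.Module):
    def __init__(self, d_in=64, d_z=8, n_aux=4):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)   # outputs mean and log-variance
        self.dec = nn.Linear(d_z, d_in)
        self.cls = nn.Linear(d_z, n_aux)      # latent-space auxiliary classifier

    def forward(self, x, y_aux):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = F.mse_loss(self.dec(z), x)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        aux = F.cross_entropy(self.cls(z), y_aux)
        return recon + kl + 0.1 * aux

model = WeaklySupervisedVAE()
x = torch.randn(32, 64)             # stand-in injury-feature vectors
y = torch.randint(0, 4, (32,))      # stand-in auxiliary labels
loss = model(x, y)
loss.backward()
print(float(loss))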
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Lagrangian, Discontinuous-Galerkin Material Response Solver for the Analysis of Ablative Thermal Protection Systems</title>
<link href="https://hdl.handle.net/1721.1/152825" rel="alternate"/>
<author>
<name>Quinn, Christopher</name>
</author>
<id>https://hdl.handle.net/1721.1/152825</id>
<updated>2023-11-03T03:15:11Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A Lagrangian, Discontinuous-Galerkin Material Response Solver for the Analysis of Ablative Thermal Protection Systems
Quinn, Christopher
Thermal protection systems (TPS) play a vital role in safeguarding aerospace vehicles from the intense aerodynamic heating encountered during hypersonic flight. One category of TPS materials manages extreme heat through pyrolysis, a process in which the elevated temperatures trigger an endothermic reaction that decomposes the material into char and gases, and through thermochemical ablation, in which char and pyrolysis gases blow away from the surface. Their use is common in high-velocity hypersonic missions through dense atmospheres, in contrast to other materials such as reusable TPS, which are often used in lower heat flux scenarios. Analysis of ablative TPS materials is challenging due to their complex material response which involves a combination of thermal, chemical, and mechanical phenomena.&#13;
&#13;
A major concern in hypersonic vehicle design is the catastrophic failure of the TPS. It is necessary to anticipate scenarios in which excessive ablation, inelastic deformation, or fracturing of the TPS occurs. A successful TPS design should account for these failure modes while balancing concerns about cost, weight, and vehicle performance.&#13;
&#13;
Computational modeling has emerged as an important tool in TPS design, and in predicting the behavior of TPS materials including failure. Existing codes are capable of modeling the thermo-chemical response of ablative TPS and predicting some modes of failure, but they are often limited in their ability to model mechanical deformation and damage.&#13;
&#13;
This thesis proposes a new computational framework for modeling the thermo-chemo-mechanical behavior of ablative TPS materials to address this gap. The modeling approach is based on a Lagrangian, Discontinuous-Galerkin finite element formulation of the coupled multiphysics problem, which includes models of finite elastic and inelastic deformation as well as damage, pyrolysis reactions, heat transfer, and mass transfer. The numerical solution employs a semi-implicit time integration scheme for the nonlinear heat and mass transfer problems, while the solid mechanics is addressed using dynamic relaxation. Importantly, a mesh recession algorithm is implemented to explicitly account for changes in geometry due to material ablation. A staggered iteration scheme is used to couple the multiphysics problem.&#13;
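Schematically, the staggered coupling amounts to re-exchanging fields within each time step until they stop changing; the toy update rules below are placeholders for the DG heat/mass solver and the dynamic-relaxation mechanics solve.

import numpy as np

T, rho, u, dt = np.zeros(100), np.ones(100), np.zeros(100), 0.1

for step in range(10):                   # time steps
    T0, rho0 = T.copy(), rho.copy()      # state at the start of the step
    for k in range(20):                  # staggered iterations within the step
        # Semi-implicit stand-in for the heat/mass solve, coupled to displacement u.
        T = T0 + dt * (0.1 * (1.0 - T0) + 0.01 * u)
        rho = rho0 - dt * 0.01 * rho0
        # Stand-in for the dynamic-relaxation mechanics solve, coupled to T.
        u_new = 0.5 * (u + 0.02 * T)
        drift = float(np.max(np.abs(u_new - u)))
        u = u_new
        if 1e-10 > drift:                # fields stopped changing: step converged
            break

print(T[:3], rho[:3], u[:3])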
&#13;
Several numerical examples demonstrating the correctness and versatility of the proposed method are presented. These include verification against several analytical solutions to the heat equation and benchmark problems utilized in the ablation modeling community. The mesh recession algorithm is also verified through a series of numerical tests known as patch tests. Finally, a demonstration of an arc-jet experiment of phenolic-impregnated carbon ablator (PICA) is presented to illustrate the computational framework’s ability to model thermo-chemically induced deformation, stresses, and surface recession in pyrolyzing TPS materials.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems Approach to Understanding Organizational Structure and Employee Development in Tech Sector</title>
<link href="https://hdl.handle.net/1721.1/152821" rel="alternate"/>
<author>
<name>Yang, Bryan</name>
</author>
<id>https://hdl.handle.net/1721.1/152821</id>
<updated>2023-11-03T03:56:04Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A Systems Approach to Understanding Organizational Structure and Employee Development in Tech Sector
Yang, Bryan
The technology industry holds a distinctive position due to its relentless pursuit of rapid innovation, necessitating substantial investments in research and development. As organizations seek to thrive in this constantly evolving and highly competitive environment, the modern business landscape presents formidable challenges, demanding that companies remain agile and excel in their respective industries. In response to these challenges, organizations create structures to drive efficiency and scalability, serving as a solid foundation from which to function smoothly, adapt to changing circumstances, and achieve their missions and visions.&#13;
&#13;
Organizational structures play a fundamental role in the success and growth of companies, providing the necessary framework to define roles and responsibilities, allocate resources, harness the collective efforts of the workforce, and drive toward sustainable growth. As such, the organizational structure directly impacts the nature of work that individuals are involved in and the array of opportunities that align with their career aspirations, which can impede or accelerate their growth potential.&#13;
&#13;
This thesis explores the intricacies of organizational structure within the technology sector through a literature review and a series of semi-structured interviews. By examining the specific needs and challenges faced in structuring organizations, this thesis analyzes the essential elements that contribute to employee development. Drawing on the critical enterprise elements of the ARIES framework, it takes a systems approach to enrich the understanding of how different organizational structures foster employee development and growth.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mitigating Investment Risks in Nature-Based Solutions: A Strategic Approach Towards Sustainable Project Implementation</title>
<link href="https://hdl.handle.net/1721.1/152820" rel="alternate"/>
<author>
<name>Zhang, Zhao</name>
</author>
<id>https://hdl.handle.net/1721.1/152820</id>
<updated>2023-11-03T03:43:42Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Mitigating Investment Risks in Nature-Based Solutions: A Strategic Approach Towards Sustainable Project Implementation
Zhang, Zhao
Reforestation plays a crucial role in combating global warming while promoting biodiversity conservation and restoring ecosystems. However, poorly planned reforestation efforts can lead to increased emissions and long-term damage to landscapes, biodiversity, and livelihoods. In addition, low carbon prices in the voluntary market further hinder reforestation project viability. Therefore, a comprehensive understanding of the scientific and economic aspects of reforestation is essential to ensure effective and sustainable implementation, especially considering the increasing number of institutions and companies that rely on reforestation to achieve ambitious environmental goals. &#13;
&#13;
This thesis focuses on the strategic challenge of developing large-scale investments in reforestation, considering both scientific and economic perspectives. It first applies analytical workflows to explore the relationship between changes in soil carbon and above-ground biomass following planting, utilizing a diverse set of measurements across multiple interrelated sites. The resulting estimates provide insights and decision-making tools to guide investment choices concerning reforestation location, species selection, and project types from the scientific perspective. The second strategy showcases the application of engineering design flexibility through a case study of an existing reforestation project, demonstrating the benefits of adopting a progressive investment approach with scale optionality. This approach proves advantageous, particularly when dealing with the policy and commercial uncertainties that influence reforestation development. Monte Carlo simulation and multi-dimensional project evaluations were implemented to investigate a range of potential scenarios and assess their implications. By integrating these scientific findings, this research contributes to an enhanced understanding of how to optimize reforestation investments in a manner that aligns with scientific principles and economic considerations. This holistic approach, incorporating engineering design flexibility and robust evaluations of project dynamics, offers insights and practical guidance for stakeholders to make informed decisions and achieve optimal project outcomes.
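The value of scale optionality can be illustrated with a small Monte Carlo sketch comparing a fixed full-scale project against a staged design that expands only when carbon prices turn out favorable; every number below is an illustrative placeholder, not a figure from the case study.

import numpy as np

rng = np.random.default_rng(5)
n_sims, years, discount = 10000, 20, 0.06
price0 = 10.0                                    # starting carbon price, $/tCO2

# Lognormal-style carbon price paths.
paths = price0 * np.exp(np.cumsum(rng.normal(0.02, 0.15, (n_sims, years)), axis=1))
credits_full, credits_stage = 5000.0, 2000.0     # tCO2/yr at each design scale
capex_full, capex_stage, capex_expand = 1.2e6, 0.5e6, 0.8e6
disc = (1 + discount) ** -np.arange(1, years + 1)

npv_fixed = (paths * credits_full * disc).sum(axis=1) - capex_full

# Flexible design: start small, expand at year 5 only if the price is high.
expand = paths[:, 4] > 20.0
credits = np.where(expand[:, None] * (np.arange(years) >= 5), credits_full, credits_stage)
npv_flex = (paths * credits * disc).sum(axis=1) - capex_stage - expand * capex_expand * disc[4]

print(np.mean(npv_fixed), np.mean(npv_flex))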
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative Models for Domain-Specific Summarization</title>
<link href="https://hdl.handle.net/1721.1/152819" rel="alternate"/>
<author>
<name>Queipo, Laura</name>
</author>
<id>https://hdl.handle.net/1721.1/152819</id>
<updated>2023-11-03T03:32:28Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Generative Models for Domain-Specific Summarization
Queipo, Laura
This project evaluates the performance of generative models for summarization in the aviation safety domain. Models such as DaVinci, Text-DaVinci-003, and GPT-3.5-Turbo were analyzed in both zero-shot and fine-tuned settings against state-of-the-art models. In zero-shot learning, the generative models were superior in most cases to the state-of-the-art models, whereas the fine-tuned models could learn with less information about the dataset. These results point to promising advances in the summarization space to address current limitations in the field.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficiently Learning Robust, Adaptive Controllers from Robust Tube MPC</title>
<link href="https://hdl.handle.net/1721.1/152818" rel="alternate"/>
<author>
<name>Zhao, Tong</name>
</author>
<id>https://hdl.handle.net/1721.1/152818</id>
<updated>2023-11-03T03:37:39Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Efficiently Learning Robust, Adaptive Controllers from Robust Tube MPC
Zhao, Tong
The deployment of agile autonomous systems in challenging, unstructured environments requires adaptation capabilities and robustness to uncertainties. Existing robust and adaptive controllers, such as those based on model predictive control (MPC), can achieve impressive performance at the cost of heavy online onboard computations. Strategies that efficiently learn robust and onboard-deployable policies from MPC have emerged, but they still lack fundamental adaptation capabilities. In this work, we extend an existing efficient Imitation Learning (IL) algorithm for robust policy learning from MPC with the ability to learn policies that adapt to challenging model/environment uncertainties. The key idea of our approach consists of modifying the IL procedure by conditioning the policy on a learned lower-dimensional model/environment representation that can be efficiently estimated online. We tailor our approach to learning an adaptive position and attitude control policy to track trajectories under challenging disturbances on a multirotor. Our evaluation shows that a high-quality adaptive policy can be obtained in about 1.3 hours of combined demonstration and training time. We empirically demonstrate rapid adaptation to in- and out-of-training-distribution uncertainties, achieving a 6.1 cm average position error under wind disturbances that correspond to 50% of the weight of the robot, and that are 36% larger than the maximum wind seen during training. Additionally, we verify the performance of our controller during real-world deployment in multiple trajectories, demonstrating adaptation to turbulent winds of up to 5.2 m/s and slung loads of up to 40% of the robot’s mass, and reducing the average position error on each trajectory to under 15 cm, a 70% improvement compared to a non-adaptive baseline.
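Structurally, the adaptation mechanism can be sketched as a policy conditioned on a low-dimensional representation z that is re-estimated online from recent state-action history; the networks below are untrained placeholders for the distilled tube-MPC policy and the learned estimator.

import torch
import torch.nn as nn

d_state, d_action, d_z = 12, 4, 3

estimator = nn.GRU(d_state + d_action, d_z, batch_first=True)   # online z estimator
policy = nn.Sequential(nn.Linear(d_state + d_z, 64), nn.Tanh(),
                       nn.Linear(64, d_action))

history = torch.randn(1, 50, d_state + d_action)    # recent state-action window
_, z = estimator(history)                           # final hidden state as the z estimate
state = torch.randn(1, d_state)
action = policy(torch.cat([state, z[0]], dim=-1))   # adaptive control action
print(action.shape)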
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Implementation of a Distributed Executive</title>
<link href="https://hdl.handle.net/1721.1/152817" rel="alternate"/>
<author>
<name>Romero, Sabrina</name>
</author>
<id>https://hdl.handle.net/1721.1/152817</id>
<updated>2023-11-03T03:32:45Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design and Implementation of a Distributed Executive
Romero, Sabrina
The deployment of autonomous robots has the potential to revolutionize high-risk missions, from rescue operations and outer-space exploration to the maintenance of underwater infrastructure. In many of these scenarios, such as the routine maintenance of a distant space station, collaboration between multiple robots becomes essential. However, the vastness of space or the depths of the oceans often impose communication constraints, making real-time coordination challenging. While centralized control of these missions is traditional and straightforward to implement, it often becomes impractical in these contexts because of communication delays and uncertainties. Given these challenges, a distributed approach is not just preferred but necessary, ensuring robots can operate independently under limited communication conditions. Traditional strategies for coordinating multiple robots’ schedules when communication is unreliable have been conservative: they tend to fix the times of actions in advance, creating rigid and non-robust schedules. Such schedules can allocate excessive time to tasks as a safety measure, leaving potential resources underutilized. This over-caution not only results in inefficient execution but can also prevent the executive from identifying viable schedules for missions, even when they exist under a more flexible approach. The lack of adaptability, especially in the face of unexpected challenges, undermines the executive’s robustness. To address these shortcomings, our aim is to craft a flexible and robust distributed executive adept at planning, scheduling, and executing multi-agent scenarios. We build upon the Kirk executive, a creation of the MERS group at CSAIL, enabling it to proficiently manage multi-agent scenarios without a guarantee of perfect communication during execution. Central to our methodology is the principle of temporal decoupling, which allows agents to decouple any inter-dependencies in their schedules and operate independently. We integrate a state-of-the-art temporal decoupling algorithm, which decouples only as necessary, leaving room for communication when it is available. This integration not only enhances the autonomy of the agents but also ensures they can leverage the benefits of communication, striking a balance between independence and collaborative efficiency. Building on this foundation, our work offers a practical perspective on autonomous robot coordination. By enhancing the Kirk executive with a temporal decoupling algorithm, expanding the Reactive Model-based Programming Language (RMPL) for multi-agent scenario representation, and showcasing Kirk’s improved capability in multi-agent scenarios with communication constraints, we bridge the gap between theoretical foundations and practical applications.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting Trust: Building Secure and High-Performance Confidential VMs</title>
<link href="https://hdl.handle.net/1721.1/152816" rel="alternate"/>
<author>
<name>Srivastava, Shashvat</name>
</author>
<id>https://hdl.handle.net/1721.1/152816</id>
<updated>2023-11-03T03:58:51Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Architecting Trust: Building Secure and High-Performance Confidential VMs
Srivastava, Shashvat
Recent research in TEE (Trusted Execution Environment) design has focused on the development of confidential VMs — virtual machines completely protected by secure hardware. All major CPU vendors have rolled out support for VM-based TEEs — AMD created SEV (2017), Intel created TDX (2020), and ARM launched CCA (2021). Confidential VMs are a promising new technology as they are significantly more user-friendly, allow existing applications to run without modifications, and have better performance compared to process-based TEEs. However, confidential VMs still face two large design challenges: security and performance. In the first part of this thesis, we propose a secure confidential VM design on the RISC-V platform, which currently has no official confidential VM support. We specifically focus on the task of secure CPU virtualization and build a security monitor that hides the virtual CPU register state from the hypervisor during context switches. To allow the hypervisor to properly handle interrupts and emulate instructions, we summarize a specification listing which registers need to be exposed in specific scenarios. In the second part of this thesis, we aim to improve the network I/O performance of existing confidential VMs. The hardware protections of TEEs create additional I/O overhead in confidential VMs, and Trusted I/O (TIO) is a promising solution to reduce this overhead. However, TIO has several drawbacks — it relies on hardware support from the I/O device and expands the Trusted Computing Base (TCB) to include these TIO devices. Furthermore, TIO devices will not be commercially available for several years. We aim to create an I/O solution that can reach the performance of TIO without relying on TIO devices. In particular, we present Folio, a system for high-performance network I/O compatible with AMD SEV-SNP. Compared to network I/O in a non-TEE VM, Folio performs only a single extra memory-copy of packet data. Our extensive evaluation shows that Folio performs only 6% worse than the ideal TIO solution.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Serialization and Applications for the Gen Probabilistic Programming Language</title>
<link href="https://hdl.handle.net/1721.1/152815" rel="alternate"/>
<author>
<name>Limarta, Ian</name>
</author>
<id>https://hdl.handle.net/1721.1/152815</id>
<updated>2023-11-03T03:47:24Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Serialization and Applications for the Gen Probabilistic Programming Language
Limarta, Ian
Probabilistic programming has emerged as a powerful framework for building expressive models that can handle uncertainty in a wide range of applications. Serialization, the process of converting data structures or objects into a format suitable for storage or transmission, plays a crucial role in the development and execution of probabilistic programs. Efficient serialization techniques are essential for tasks such as data persistence, distributed computation, and data exchange between different programs or machines. We delve into specific challenges and considerations unique to probabilistic programming for serialization. Probabilistic models often involve complex structures, including nested random variables, hierarchical dependencies, and potentially infinite or unbounded dimensions. Serializing samples from such models requires careful handling of these complexities, including strategies for preserving model fidelity, dealing with modeling dependencies, and specializing for disk representations. In this thesis, we discuss twofold objectives for the Gen probabilistic programming language. The first establishes a formalism for serializing (and deserializing) traces as an interface that respects the existing Gen interfaces and faithfully reconstructs data objects from disk. We highlight challenges for efficient serialization for Gen’s DSLs. The second objective is to show how serialization routines common in other areas of computing transfer well to Gen. We show how serialization provides easier means for visualizations, remote computing, and training inference approximators.
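For intuition about the round-trip fidelity that the first objective formalizes, here is a generic Python sketch (Gen itself is a Julia system; the nested dictionary is only a toy stand-in for a trace, not Gen's actual trace type):

import pickle

# A toy stand-in for a probabilistic-program trace: random choices
# keyed by address, including nested structure (illustrative only).
trace = {"slope": 1.7, "noise": 0.3,
         "obs": {("y", i): 1.7 * i + 0.1 for i in range(3)}}

with open("trace.bin", "wb") as f:
    pickle.dump(trace, f)          # serialize to disk
with open("trace.bin", "rb") as f:
    restored = pickle.load(f)      # reconstruct the data object

assert restored == trace           # faithful round trip

The thesis's contribution is an interface in this spirit that also respects Gen's generative function interfaces and handles its DSL-specific trace representations efficiently.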
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Bi-Directional Converter Module for Battery Cell Voltage Charge Cycling</title>
<link href="https://hdl.handle.net/1721.1/152814" rel="alternate"/>
<author>
<name>Gonzalez, Rolando A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152814</id>
<updated>2023-11-03T03:42:59Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design of a Bi-Directional Converter Module for&#13;
Battery Cell Voltage Charge Cycling
Gonzalez, Rolando A.
A framework for testing and controlling a bidirectional DC-to-DC converter is proposed that can be used for battery cell cycle testing. The circuit allows energy to be shuttled in both directions for a battery cell under test, enabling functions such as monitoring the deterioration of a battery cell’s capacity across discharge/charge cycles. This thesis includes the design, fabrication, and testing of this circuit to validate and characterize its utility. Additional code was written to quickly provide feedback on the circuit’s performance and to control the circuit’s operating point. This thesis builds off previous work done on an inductive cell-balancer circuit topology [3], while tweaking the topology and adding features that lend themselves to improved utility in settings where battery monitoring and characterization are important, such as a laboratory.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Integration of an Underactuated Robotic Finger with Vision-based Tactile Sensing</title>
<link href="https://hdl.handle.net/1721.1/152811" rel="alternate"/>
<author>
<name>Ma, Yuxiang</name>
</author>
<id>https://hdl.handle.net/1721.1/152811</id>
<updated>2023-11-03T03:49:20Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design and Integration of an Underactuated Robotic&#13;
Finger with Vision-based Tactile Sensing
Ma, Yuxiang
Underactuated fingers are adaptable to different shapes, robust, and cost-effective for executing sturdy and versatile grasps. However, they generally have limited control or require complex planning when performing tasks that require high precision or delicate handling. Vision-based tactile sensors, like GelSight, can mitigate these control issues by adding real-time proprioception and also provide useful high-resolution tactile information, which can enhance underactuated fingers with shape and texture perception. As such, this work presents the development of a compact, underactuated linkage finger and its integration with a low-cost, simple vision-based tactile sensor, i.e., the GelSight.

Through the process of developing the tactile linkage fingers, we established a planar linkage mechanism simulator and a simple 2D ray-tracing optical simulator to help optimize the linkage transmission and improve tactile sensing performance. In total, the finger went through three major designs. In the initial iteration, we designed sliding joints, which were replaced in the second iteration by linkage mechanisms to make the design more compact and robust. A planar linkage simulator was used to optimize the trajectory to avoid collision and increase range of motion. In the current iteration, the finger has evolved from having two segments to having three segments, with underactuation incorporated to further reduce the number of motors. Each finger segment houses a silicone gel pad, whose tactile imprints are captured by mirrors, which are then observed by a single camera placed at the second finger segment. The camera and mirrors are positioned based on the results of a simple ray-tracing simulator, which ensured that each finger segment is visible in all finger configurations.

The use of mirrors, linkage transmission and underactuation makes the mechanism compact, efficient, and less complex by reducing the number of cameras and motors needed. Moreover, the integration of vision-based sensors allows these underactuated fingers to perceive contact information and finger configuration. In conclusion, this work encapsulates the innovative design and integration of an underactuated linkage finger with vision-based tactile sensing, offering compactness, adaptability, and robustness in grasping tasks. Additionally, the integration of vision-based tactile sensors can significantly enhance the capabilities of underactuated fingers by providing them with high-resolution images and proprioception information, and potentially broaden the future usage of underactuated fingers.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hydrogel Design Optimization for Measuring Ultrasound Using Laser Doppler Vibrometry</title>
<link href="https://hdl.handle.net/1721.1/152810" rel="alternate"/>
<author>
<name>Caraballo-Justiniano, Eugenio</name>
</author>
<id>https://hdl.handle.net/1721.1/152810</id>
<updated>2023-11-03T03:56:11Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Hydrogel Design Optimization for Measuring&#13;
Ultrasound Using Laser Doppler Vibrometry
Caraballo-Justiniano, Eugenio
Over the past decade, work in the medical field has been geared towards the development of ultrasonic systems for medical diagnostic imaging applications. Compared to other imaging modalities, patient contact is a significant source of variability unique to ultrasound. Contact-sensitive applications such as remote patient/neonatal monitoring, tracking wound healing, and imaging of sensitive skin areas can significantly benefit from a non-contact ultrasound system. Laser ultrasound (LUS) imaging offers potential advancements over conventional ultrasound, especially in achieving high-resolution imaging of tissue structures and the elimination of liquid coupling mediums and probe-to-body contact. The thesis presents an innovative approach to enhance the performance of LUS signals in human tissue by utilizing hydrogels, hydrophilic polymeric materials known for high water content and biocompatibility, as a surface treatment layer for ultrasound detection and generation. The system integrates and synchronizes linear stage automation, transducer acoustic wave generation, laser Doppler vibrometry (LDV), and LabVIEW integration. High-speed data acquisition (DAQ) through a dedicated Pico Technology setup streams digitized data directly to the host PC. LDV measurements highlighted the crucial role of bead concentration within hydrogels. Velocity amplitude measurements reflected an inverse relationship with increasing bead concentrations, peaking at approximately 700 mm/s. However, higher bead concentrations yielded better data accuracy and reduced noise, suggesting an optimal range for bead concentration. A comparison of noise ranges across different hydrogel bead concentrations highlighted improved data quality and precision for concentrations exceeding 0.015 g/mL. Furthermore, laser-based measurements indicated that hydrogel with a bead concentration of 0.02 g/mL provided consistent and enhanced signal amplitude. The findings present a pivotal step towards optimizing LUS for clinical applications, opening new doors in medical imaging and diagnostics.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Technoeconomic feasibility of decentralized desalination in the Navajo Nation</title>
<link href="https://hdl.handle.net/1721.1/152809" rel="alternate"/>
<author>
<name>Brei, Melissa</name>
</author>
<id>https://hdl.handle.net/1721.1/152809</id>
<updated>2023-11-03T03:29:21Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Technoeconomic feasibility of decentralized&#13;
desalination in the Navajo Nation
Brei, Melissa
The Navajo Nation, located in the southwest United States, faces a significant water stress issue, with approximately 30% of households lacking access to piped water. For many, connection to a piped network is infeasible, and decentralized solutions, like desalination, have encountered barriers to adoption. This study evaluates the Navajo Nation’s geography, environment, and infrastructure to justify decentralized desalination. A diverse group of stakeholders was interviewed to gain comprehensive insights into the underlying challenges and possible value-added solutions. Analyzing these interviews revealed a cultural aversion to wastewater, a strong sensitivity to operating costs, and two potential system sizes: home and community. With financial sustainability being an important requirement for several stakeholders, a first-order economic analysis of both system sizes was conducted. Home systems present strong potential for economic viability, but community systems struggle to compete in this region due to low population density. Using the elucidated design requirements for home systems, electrodialysis (ED) and reverse osmosis (RO) were evaluated for technical feasibility. While RO systems, unlike ED, are commercially available at this scale, RO wastes 50-80% of the feedwater while ED wastes &lt; 30%. Both technologies have strong technical feasibility for this region and both will be field tested to understand long-term maintenance requirements and user perception of wastewater.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scaling up a quantum register of dark electronic spins in diamond</title>
<link href="https://hdl.handle.net/1721.1/152806" rel="alternate"/>
<author>
<name>Ungar, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/152806</id>
<updated>2023-11-20T15:09:47Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Scaling up a quantum register of dark electronic spins in diamond
Ungar, Alexander
Electronic spin defects in the environment of an optically-active spin can be used to increase the size and hence the performance of solid-state quantum registers, especially for applications in quantum metrology and quantum communication. Previous works on multi-qubit electronic-spin registers in the environment of a Nitrogen-Vacancy (NV) center in diamond have only included spins directly coupled to the NV. As this direct coupling is limited by the spin coherence time, it significantly restricts the register's maximum attainable size. To address this problem, this thesis presents a scalable approach to map out and control a network of interacting environmental spins. We use this approach to characterize a spin network beyond the direct-coupling limit and exploit a weakly-coupled probe spin to mediate the transfer of spin polarization between the central NV and an environmental spin that is not directly coupled to it. We then demonstrate both detection and coherent control of this electronic spin outside the coherence limit of the central NV. Our work paves the way for engineering larger quantum spin registers with the potential to advance nanoscale sensing, enable correlated noise spectroscopy for error correction, and facilitate the realization of spin-chain quantum wires for quantum communication.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Approaching Novel Perovskites Photovoltaic Devices through Machine Learning and Interfacial Engineering</title>
<link href="https://hdl.handle.net/1721.1/152805" rel="alternate"/>
<author>
<name>Zhang, Ruiqi</name>
</author>
<id>https://hdl.handle.net/1721.1/152805</id>
<updated>2023-11-03T03:52:43Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Approaching Novel Perovskites Photovoltaic Devices through Machine Learning and Interfacial Engineering
Zhang, Ruiqi
Organic metal halide perovskites have shown plenty of extraordinary optoelectronic properties which make them good candidates for various photovoltaic applications [1-5]. The fascinating optoelectronic properties of perovskites are largely attributable to their low exciton binding energy, strong light absorption coefficient, and relatively long carrier diffusion length and carrier recombination lifetime [6-9]. However, even with an increasing number of studies carried out, perovskite solar cells still face plenty of challenges on the path to commercialization. Two main challenges for large-area commercialization are, first, the harsh fabrication environment and the cost of large-area coating, and second, the redundant fabrication process and the large labor force it requires. In this thesis study, an intermediate thin-film layer of tris(4-carbazoyl-9-ylphenyl)amine (TcTa) with a thickness of 3 nm is introduced in a large-area-compatible perovskite solar cell structure ITO/SnO2/(MAFACs)1Pb(IBrCl)3/PV2000/TcTa/Au that reaches a power conversion efficiency above 14%. The TcTa intermediate film is compatible with substituting the gold top electrode (e.g., with sputtered Ni) and prevents sputter damage while maintaining similar solar cell performance. In addition, a machine learning algorithm is developed to predict the solar cell current-voltage properties based only on the film stack optical properties, before the solar cell is fabricated. The algorithm is developed and tested on the 3D/2D perovskite solar cell structure [10], resulting in an average prediction regression loss below 5% and a best prediction accuracy above 99%. Multiple machine learning algorithms are also evaluated to analyze the prediction results and the learned weights of the model.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated-Photonics Devices and Architectures for Advanced Cooling of Trapped Ions</title>
<link href="https://hdl.handle.net/1721.1/152804" rel="alternate"/>
<author>
<name>Hattori, Ashton</name>
</author>
<id>https://hdl.handle.net/1721.1/152804</id>
<updated>2023-11-03T03:03:54Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Integrated-Photonics Devices and Architectures for Advanced Cooling of Trapped Ions
Hattori, Ashton
Integrated-photonics-based architectures for trapped-ion systems offer a potential avenue for improved fidelity and addressability of ion arrays. Motional state cooling, a key optical function in trapped-ion systems, however, has been limited to Doppler and resolved-sideband cooling in integrated-photonics-based implementations. In contrast, polarization-gradient and electromagnetically-induced-transparency cooling can offer better cooling performance in multi-ion systems, but have not been demonstrated on an integrated-photonics platform. This thesis demonstrates key integrated-photonics devices and architectures to enable enhanced laser cooling of integrated trapped-ion systems.

First, we develop the framework for two advanced trapped-ion cooling schemes, polarization-gradient and electromagnetically-induced-transparency cooling. Then, we present the design of key integrated devices enabling the proposed system architectures. We show the design and experimental demonstration of the first integrated polarization splitters and rotators at blue wavelengths, developing compact and efficient designs for both a polarization splitter and a rotator at a 422-nm wavelength, an important transition for 88Sr+ ions. These devices are fabricated in a 200-mm wafer-scale process and experimental results are demonstrated. Next, we present the design and experimental demonstration of the first pair of integrated TE- and TM-emitting gratings at a wavelength of 422 nm to enable polarization-diverse operations for integrated-photonics-based trapped-ion systems. The development of both the devices and architectures for advanced cooling schemes presented in this thesis paves the way for sophisticated integrated control for trapped-ion and neutral-atom systems.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nanomaterial-Enabled Out-of-Autoclave and Out-of-Oven Manufacturing of Fiber Reinforced Polymer Composites</title>
<link href="https://hdl.handle.net/1721.1/152800" rel="alternate"/>
<author>
<name>Serrano, Steven</name>
</author>
<id>https://hdl.handle.net/1721.1/152800</id>
<updated>2023-11-03T03:04:00Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Nanomaterial-Enabled Out-of-Autoclave and Out-of-Oven Manufacturing of Fiber Reinforced Polymer Composites
Serrano, Steven
Fiber reinforced polymer composite materials have been a staple of the aerospace industry, integral to creating lightweight flight vehicles due to their high specific material properties. These materials often come in a prepreg form, where microfibers are pre-impregnated with a polymer matrix to form lamina that are stacked to form a composite laminate. These aerospace-grade composite structures generally require an autoclave cure, which uses both temperature and pressure to cure the thermoset polymer (or consolidate the thermoplastic polymer) in the prepreg and remove voids throughout the laminate. In this thesis, curing of autoclave-grade thermosetting prepregs using vacuum-bag-only (VBO) processes is investigated and further developed through the employment of nanomaterials, both within the laminate itself and externally as a conductive heating mechanism. A preliminary void reduction study was conducted on the effects of placing different nanoporous networks (NPNs) in the interlaminar regions of a VBO-manufactured quasi-isotropic laminate using autoclave-grade glass fiber reinforced polymer (GFRP) unidirectional prepreg. It was shown that vertically aligned carbon nanotubes (VA-CNTs), electrospun polymer nanofiber (EPN) veils, and polyimide (PI) aerogel thin films may each successfully evacuate voids via capillary-pressure enhanced polymer flow, as the laminate was void-free. A subsequent study placing PI aerogel NPN in each interlaminar region was shown to successfully create a void-free GFRP laminate on a hot plate using VBO manufacturing. Autoclave woven CFRP prepreg laminates were also manufactured using the same VBO with NPN technique, with PI aerogel in each interlaminar region. Laminates were shown to have minimal void content (&lt; 0.03 vol%) using an advantageously thinner aerogel film than previous work. A previously studied out-of-oven (OoO) curing process using a carbon nanotube (CNT) thin film heating element was modeled using ANSYS Composite Cure Simulation (ACCS) to predict the temperature and degree of cure (DoC) of CFRP laminates using cure kinetics equations and the finite element method. The Limited-memory Broyden-Fletcher-Goldfarb-Shanno with Bound constraints (L-BFGS-B) algorithm was implemented to optimize the cure cycle with respect to time and DoC constraints. Two optimized cure cycles were revealed via the optimization scheme, showing significant (60% to 65%) reductions in manufacturing time. A third accelerated-cure cycle did not use the optimization scheme, but rather utilized an empirical estimation of resin rheology, time history of temperature, and DoC to obtain a cure cycle that had comparable resin flow to that of the manufacturer recommended cure cycle (MRCC) per a defined flow metric. Laminates utilizing the three accelerated cures, the MRCC cure, and a cure with an extended first hold were all modeled in ANSYS and manufactured with a CNT heater OoO set-up and an EPN NPN. The model was found to overestimate the DoC of the manufactured 152 mm x 152 mm x 2 mm (16 ply) laminates by ∼5% on average. The accelerated-cure laminates were shown to have a relatively high void content, indicating that additional considerations are necessary to successfully accelerate the VBO CFRP cure cycle. However, the laminate cured with an extended first hold, as well as the MRCC laminate, were found to have minimal void content (0.02 vol% and 0.08 vol%, respectively). Furthermore, the accelerated-cure laminate with a second hold of 200°C for 36.5 minutes was found to yield a nominal DoC (90.5%) and a comparable glass transition temperature (Tg) to that of the MRCC-cured laminate. Together, the results found in this work show that nanomaterials (i.e. NPNs and CNT heating elements) enable the VBO manufacturing of several types of autoclave prepregs and improve manufacturing throughput via cure cycle modifications that can allow significant acceleration of the overall cure cycle.
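To make the optimization step concrete, here is a minimal sketch of a bound-constrained cure-cycle optimization with SciPy's L-BFGS-B. The cure-kinetics surrogate, the 90% DoC target entering as a penalty (L-BFGS-B itself handles only bounds), and every number below are invented placeholders, not the ACCS model used in the thesis.

import numpy as np
from scipy.optimize import minimize

def degree_of_cure(x):
    """Hypothetical surrogate for a cure-kinetics model: x holds two
    hold temperatures (deg C) and two hold times (min). A real ACCS
    finite-element simulation would replace this stand-in."""
    t1, t2, d1, d2 = x
    return 1.0 - np.exp(-(t1 * d1 + 1.8 * t2 * d2) / 2.0e4)

def objective(x):
    # Minimize total hold time, with a penalty whenever the final
    # degree of cure falls short of a 90% target.
    t1, t2, d1, d2 = x
    shortfall = max(0.0, 0.90 - degree_of_cure(x))
    return (d1 + d2) + 1.0e4 * shortfall ** 2

x0 = np.array([120.0, 180.0, 60.0, 120.0])       # initial two-hold cycle
bounds = [(80.0, 200.0), (120.0, 220.0),          # temperature bounds
          (10.0, 240.0), (10.0, 240.0)]           # hold-time bounds
res = minimize(objective, x0, method="L-BFGS-B", bounds=bounds)
print(res.x, res.fun)   # shortened cycle that still meets the DoC target

The thesis's version optimizes against the full simulated temperature and DoC histories; this sketch only shows the shape of the bound-plus-penalty formulation.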
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Virtual Sheriff Sales: Contested Narratives on Tax Sales in Philadelphia, PA</title>
<link href="https://hdl.handle.net/1721.1/152798" rel="alternate"/>
<author>
<name>Mana, Soad</name>
</author>
<id>https://hdl.handle.net/1721.1/152798</id>
<updated>2023-11-03T03:44:27Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Virtual Sheriff Sales: Contested Narratives on Tax Sales in Philadelphia, PA
Mana, Soad
This thesis describes a qualitative overview of tax foreclosure auctions in Philadelphia, PA, otherwise colloquially known as sheriff sales. As the rate of displacement of long-term residents has increased in the past few years, greater attention has been called upon official city processes of land acquisition and disposition. By analyzing city council meeting transcripts, reports, news articles, and interviews with key stakeholders in the city, I use the emerging debate on sheriff sales’ permanent shift to virtual in 2021 as a lens to interrogate how various stakeholders view tax foreclosure sales overall. Through this qualitative analysis, I identify five main factors that outline the impact of the increasing privatization of a city-sanctioned tax enforcement and collection tool: reduced accountability, transparency, and accessibility; a disproportionate social impact on marginalized residents; and the discounting of vacant land. Exchanges about tax sales have been grounded in a much larger conversation in the city about neighborhood change and displacement. As homes, community gardens, and gathering spaces have been sold in sheriff sales, many community members have questioned their impacts on their neighborhoods and challenged the city’s conceptualization of tax delinquent land. Official categorizations of land as abandoned by the City contrast with how residents have materially cared for the land and staked claims to it. Recognizing land beyond property involves understanding land as a site for people's experiences, aspirations, memories, and visions for different futures. Understanding the land as such calls for a reexamination of sheriff sales as a dominant tool used by the City to collect delinquent taxes and activate land. As displacement in Philadelphia intensifies, the land question is once again gaining urgency.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>IlluSonnet: Using Generative AI to Create Illustrations for Sonnets</title>
<link href="https://hdl.handle.net/1721.1/152797" rel="alternate"/>
<author>
<name>Chen, Tiffany</name>
</author>
<id>https://hdl.handle.net/1721.1/152797</id>
<updated>2023-11-03T03:01:47Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">IlluSonnet: Using Generative AI to Create Illustrations for Sonnets
Chen, Tiffany
Poetry evokes imagery, and writers and readers alike desire to translate the artful wordplay into a beautiful image. To facilitate this process, we built IlluSonnet, a system that creates illustrations for poetry using text-to-image generative AI models. IlluSonnet works by labelling keywords, emotional qualities, and the most relevant artistic style for a given sonnet before prompting DALL-E for an image. To evaluate IlluSonnet, we ran a user study to assess both the quality of the output images and the overall interface. Our study indicates that IlluSonnet helped users generate images that illustrated the sonnets well and that the process of creating and seeing imagery alongside the poem helped users understand the sonnets in a new light. We conclude by discussing how IlluSonnet can be used to further facilitate a deeper connection between art and poetry.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>System Dynamic Analysis to Evaluate the Socio-Economic Impact of the Energy Transition of Singapore to Achieve Net Zero Emissions by 2050</title>
<link href="https://hdl.handle.net/1721.1/152796" rel="alternate"/>
<author>
<name>Lum, Mun Kit Kenny</name>
</author>
<id>https://hdl.handle.net/1721.1/152796</id>
<updated>2023-11-03T03:02:47Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">System Dynamic Analysis to Evaluate the Socio-Economic Impact of the Energy Transition of Singapore to Achieve Net Zero Emissions by 2050
Lum, Mun Kit Kenny
Singapore has pledged to achieve net zero carbon emissions by 2050. However, due to the country's limited land and lack of natural resources, the transition to net zero emissions is a challenging journey. Singapore has charted an energy transition plan to help the country navigate toward a lower-carbon-footprint future, but the plan focuses mainly on the technological challenges that the country needs to overcome to achieve the goal. A System Dynamics model that captures the economy, energy &amp; GHG emissions, and labor market of Singapore was developed to help understand the potential socio-economic challenges that could arise from the energy transition. The results from the model suggest that the energy transition needs to be managed through a multi-pronged approach: not just technological change but also efficiency improvements in labor and energy use. One key issue relates to managing the availability of skilled labor provided by the local workforce versus increasing the foreign worker ratio in the workforce, especially when new technologies are employed for the energy transition. To combat this challenge, Singapore can consider adopting policies implemented by other countries to improve energy efficiency and labor productivity.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Are Fact-checks Effective Even for Those Who Distrust Fact-checkers?</title>
<link href="https://hdl.handle.net/1721.1/152791" rel="alternate"/>
<author>
<name>Martel, Cameron</name>
</author>
<id>https://hdl.handle.net/1721.1/152791</id>
<updated>2023-11-03T03:07:17Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Are Fact-checks Effective Even for Those Who Distrust Fact-checkers?
Martel, Cameron
There is growing concern over the spread of misinformation. One of the most widely adopted interventions by online platforms for addressing false stories is applying fact-checker-informed ‘warning labels’ over misleading posts. Despite a rich literature on corrections and approaches for debunking misinformation, there is comparatively less work examining evidence on the effectiveness of warning labels. Do warning labels effectively reduce belief and spread of misinformation? Chapter I reviews research aimed at answering this important question, and further investigates factors contributing to warning label efficacy: features of the labels themselves, features of the underlying labeled content, and features of individuals viewing labeled content. Overall, existing research suggests that warning labels typically produce consistent, beneficial effects – though the size of these effects is moderated by a multitude of relevant factors. We highlight features that best contribute to warning label efficacy and discuss potential limitations and implications of labelling policies for addressing online misinformation.

As reviewed in Chapter I, prior work suggests that warning labels are effective at reducing the belief and spread of false content on average. However, there is concern about growing distrust of fact-checkers, particularly among those on the political right. In Chapter II we investigate whether trust in fact-checkers moderates the efficacy of warning labels. Are warning labels from fact-checkers effective even for those who say they distrust fact-checkers? In a correlational study (N=1,000), we first establish and validate an adapted trust-in-fact-checkers measure. We also explore the relationship between trust in fact-checkers and partisanship and replicate prior findings of more Republican-favoring participants reporting less trust in fact-checkers. We also extend upon such work by providing evidence that skill-based traits like procedural news knowledge and analytic thinking exacerbate this partisan asymmetry. Next, we conduct meta-analyses across 21 experiments (N=15,983) in which participants evaluated either their perceived accuracy or sharing intentions of news articles. Participants either received no warning labels, or warning labels on a high proportion of false news articles encountered. We find that warning labels were on average effective at reducing belief and sharing of false headlines. Next, we find that trust in fact-checkers moderates warning efficacy on accuracy, but do not find evidence of moderation on sharing intentions. Importantly, despite this moderation, our results suggest that warning labels significantly reduce belief and sharing of false headlines even for those most distrusting of fact-checkers.
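As a concrete reference for the pooling step in the meta-analyses described above, here is a minimal fixed-effect (inverse-variance) sketch in Python; the effect sizes and standard errors are toy numbers, not the study's data, and the actual analyses may use a different meta-analytic model:

import numpy as np

effects = np.array([0.20, 0.35, 0.15])   # per-experiment effect sizes (toy)
ses = np.array([0.08, 0.10, 0.06])       # their standard errors (toy)

w = 1.0 / ses ** 2                       # inverse-variance weights
pooled = np.sum(w * effects) / np.sum(w) # precision-weighted mean effect
pooled_se = np.sqrt(1.0 / np.sum(w))     # standard error of the pooled effect
print(pooled, pooled_se)

More precise experiments get larger weights, which is what lets 21 heterogeneous experiments be combined into one overall estimate of warning-label efficacy.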
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Keeping New Orleans Afloat: What can be done to ensure another hurricane the size of Katrina will not destroy the entire city?</title>
<link href="https://hdl.handle.net/1721.1/152790" rel="alternate"/>
<author>
<name>Brown, Daelin</name>
</author>
<id>https://hdl.handle.net/1721.1/152790</id>
<updated>2023-11-03T03:24:06Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Keeping New Orleans Afloat: What can be done to ensure another hurricane the size of Katrina will not destroy the entire city?
Brown, Daelin
On August 29, 2005, Hurricane Katrina, a Category 3 storm, struck New Orleans. The location of New Orleans makes the city extremely vulnerable to massive storm surges during hurricane season, and the entire city was relying on flood management for its safety. The city had a Hurricane and Storm Damage Risk Reduction System (HSDRRS) in place, but the system was not sufficient for the strength of Katrina’s 28-foot storm surge and 55-foot waves. After 50 major levee breaches, New Orleans looked as if residents had built a beach in their backyards, with several feet of water breaking right through the levees. The Gulf Coast resembled the largest wave pool in the world, with the 55-foot waves damaging 34 pumping stations and 169 miles of protective structures in the regional HSDRRS. All of these failures caused 80 percent of New Orleans, along with several surrounding neighborhoods, to be underwater for weeks.

Not only were there an estimated 1,392 fatalities, but 800,000 housing units were also destroyed or damaged by Katrina, leaving at least 800,000 people homeless. The total damage of Katrina amounted to over $160 billion, making it one of the largest natural disasters in the history of the U.S. and the third deadliest storm in U.S. history. The catastrophe posed two questions: what had gone so wrong for this American city to be destroyed, and what needed to be done to ensure that this amount of devastation would not happen the next time a storm hit New Orleans?
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performing Distance Queries on Social Networks in Sublinear Time</title>
<link href="https://hdl.handle.net/1721.1/152786" rel="alternate"/>
<author>
<name>Kōshima, Nadia</name>
</author>
<id>https://hdl.handle.net/1721.1/152786</id>
<updated>2023-11-30T12:25:40Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Performing Distance Queries on Social Networks in Sublinear Time
Kōshima, Nadia
Shortest path computation is an important base task in many applications. While there have been improvements to shortest path algorithms, they all require preprocessing the entirety of the graph, creating inefficiencies, especially when applied to large social networks. Considering that social networks often exhibit power law degree distributions, we ask whether this insight can be exploited for sublinearity. We thus propose Wormhole, an algorithm that can perform reasonably accurate shortest distance estimations in sublinear runtime. On large graphs, scaling up to billions of edges, Wormhole empirically demonstrates the ability to provide reasonable accuracy over 10,000 distance queries while only seeing O(√n) vertices. This is an improvement over the baseline method of bidirectional BFS, which achieves similar accuracy while seeing vertices on the scale of O(n).
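For context, here is a minimal Python sketch of the bidirectional-BFS baseline mentioned above, in a generic textbook formulation rather than the thesis's code: the two frontiers are grown level by level and the search stops once they meet.

def bidirectional_bfs(graph, s, t):
    """Distance query on an unweighted graph given as {node: [neighbors]}.
    Expands the smaller frontier one full level at a time and returns
    the length of the shortest s-t path once the frontiers meet."""
    if s == t:
        return 0
    dist_s, dist_t = {s: 0}, {t: 0}
    frontier_s, frontier_t = [s], [t]
    while frontier_s and frontier_t:
        if len(frontier_s) > len(frontier_t):   # grow the cheaper side
            frontier_s, frontier_t = frontier_t, frontier_s
            dist_s, dist_t = dist_t, dist_s
        best = None
        next_frontier = []
        for u in frontier_s:
            for v in graph[u]:
                if v in dist_t:                 # frontiers touched
                    d = dist_s[u] + 1 + dist_t[v]
                    best = d if best is None else min(best, d)
                if v not in dist_s:
                    dist_s[v] = dist_s[u] + 1
                    next_frontier.append(v)
        if best is not None:
            return best
        frontier_s = next_frontier
    return None                                 # s and t are disconnected

For example, with graph = {0: [1], 1: [0, 2], 2: [1]}, bidirectional_bfs(graph, 0, 2) returns 2. On heavy-tailed social networks each frontier can still blow up to a constant fraction of the graph, which is the inefficiency Wormhole's sublinear approach targets.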
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hyperfine Interaction of the Group IV Color Centers</title>
<link href="https://hdl.handle.net/1721.1/152785" rel="alternate"/>
<author>
<name>Harris, Isaac B. W.</name>
</author>
<id>https://hdl.handle.net/1721.1/152785</id>
<updated>2023-11-03T04:04:11Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Hyperfine Interaction of the Group IV Color Centers
Harris, Isaac B. W.
The group-IV negative color centers (SiV⁻, GeV⁻, SnV⁻) are among the leading candidates for spin-photon interfaces for use in quantum information technologies. They feature highly coherent optical transitions, as well as native electron and nuclear spins that can be used as quantum memories. While the optical and electronic properties of these defects have been studied extensively in previous works, a detailed theory of the hyperfine coupling to the nuclear spin is lacking. This work presents a complete theoretical model of the hyperfine coupling to the intrinsic dopant nucleus in the group-IV negative color centers, together with ab-initio theoretical predictions of the hyperfine coupling strength, supported by experimental observation in an isotopically engineered sample. The theoretical model explains the observed hyperfine features well, providing a foundation for future work to use the intrinsic nuclear spin in quantum protocols.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning for Modeling and Control of a Packaging Manufacturing Process</title>
<link href="https://hdl.handle.net/1721.1/152782" rel="alternate"/>
<author>
<name>Deshpande, Aniruddha</name>
</author>
<id>https://hdl.handle.net/1721.1/152782</id>
<updated>2023-11-03T03:52:13Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Machine Learning for Modeling and Control of a Packaging&#13;
Manufacturing Process
Deshpande, Aniruddha
Process control is a key component of industrial automation. Irrespective of the specific product being manufactured, there is always a need for a controller to decide on specific inputs to the system such that the process gives the desired output, which may be product specifications or quality requirements. Many modern industrial processes still use classical PID control, which is quite effective and easy to implement on Programmable Logic Controllers (PLCs). However, this control strategy does not account for the dynamics of the process while developing a control policy, thereby fundamentally limiting its performance capabilities. This means that intervention by operators is often required to ensure smooth functioning of the process, and even then, long downtimes that waste material and money are all too common.

With the advent of Industry 4.0, however, more and more manufacturing processes are being fitted with a large number of sensors and cameras, which allow us to collect process data at a scale that was not possible before. Modern machine learning methods have become extremely capable of transforming big data into accurate models. This opens up the opportunity of developing sophisticated models, or Digital Twins, of manufacturing processes which can then be used to develop more advanced control strategies that would improve on the status quo of heuristically tuned PID control. Such models can be used to explicitly derive control strategies or even be used in simulation to learn improved control.

In this thesis we tackle this modeling and control problem for a packaging manufacturing process. We developed a model for the process that combines physics-based roll-to-roll models fine-tuned with process data and neural-network-based NARX models, and we validate this combined plant model. We then use this model to test various control strategies in simulation, ranging from classical PID to optimal linear control, and use the model to further fine-tune these controllers for better performance.

While such improved data-driven controller development strategies exist, adoption is still limited. In the final section of this thesis we also explore how this digital transformation is taking place in the wider manufacturing ecosystem. We review key literature, industry surveys and policy documents and synthesize a view on the current state of adoption of AI in manufacturing, its potential impacts, as well as the big hurdles to adoption. We also examine the kinds of policies in place in the United States to tackle this.
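As a point of reference for the PID baseline discussed above, here is a minimal discrete PID update in Python, in a generic textbook form; the gains, timestep, and toy plant are illustrative assumptions, not the thesis's controller:

def pid_step(state, setpoint, measurement, dt, kp, ki, kd):
    """One update of a discrete PID controller.
    state is (integral, previous_error); returns (control, new_state)."""
    error = setpoint - measurement
    integral, prev_error = state
    integral += error * dt                      # accumulate the I term
    derivative = (error - prev_error) / dt      # finite-difference D term
    control = kp * error + ki * integral + kd * derivative
    return control, (integral, error)

# Illustrative closed loop on a hypothetical first-order plant.
state, y = (0.0, 0.0), 0.0
for _ in range(100):
    u, state = pid_step(state, setpoint=1.0, measurement=y,
                        dt=0.1, kp=2.0, ki=0.5, kd=0.1)
    y += 0.1 * (u - y)   # toy plant dynamics, stands in for the real line

The fixed gains are exactly what makes this strategy blind to process dynamics; the model-based strategies the thesis develops replace or retune them using the learned plant model.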
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Small modular reactor technology for industrial heat and power: selection techniques and implementation strategies for real-world use cases using systems-based approaches.</title>
<link href="https://hdl.handle.net/1721.1/152779" rel="alternate"/>
<author>
<name>Coffey, Clay Allen</name>
</author>
<id>https://hdl.handle.net/1721.1/152779</id>
<updated>2023-11-03T03:02:00Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Small modular reactor technology for industrial heat and power: selection techniques and implementation strategies for real-world use cases using systems-based approaches.
Coffey, Clay Allen
Small Modular Reactors (SMRs) have been recognized as an emerging technology that could play a key role in climate change mitigation and in achieving net zero 2050 climate goals. The technology behind SMRs is known and proven in many cases but has yet to be deployed commercially.

SMR technological innovation is advancing in several countries around the world, from Gen-III+ light water SMRs to Gen-IV SMRs and micro-reactors, all at various stages of development, from an early conceptual phase to operational deployment and commercialization. These SMRs are also being developed in multiple configurations, land- or marine-based, and in single or multi-module (and scalable) configurations with a wide range of heat and power generation capabilities.

In the U.S., several of these technologies are supported by recent funding from legislation that supports a variety of energy policies to advance decarbonization goals in electricity and hard-to-abate industries, where the potential for renewables is limited. SMRs have attributes related to safety, flexibility, footprint, and waste management that give them opportunities not seen by traditional nuclear plants.

In North America and Europe there are over 15 SMR designers working on designs ranging from 4 MWe to nearly 350 MWe (500 MWe with thermal storage) and generating heat in a range of 300-750 °C. These SMRs seek to fill a variety of industrial use cases, ranging from district heating to fossil fuel replacement for on-grid power, to replacement of fossil fuel cogeneration with high heat requirements.

This thesis addresses the overarching question of how to select which SMR designer and technology is most likely to be successful for various industrial use cases by answering the following sub-questions:

1. What First-of-a-Kind (FOAK) SMR designs currently in development are most likely to be deployed and commercialized in the United States over the next decade?

2. Of the many hard-to-abate sectors, what are the potential industrial use cases for SMRs, and what is the cost and competitiveness of SMRs in these areas relative to existing energy systems?

3. Based on the findings of 1. and 2. above, which SMR designs are best suited for the identified industrial use cases in the United States?

4. Can these best-suited SMR designs be competitive with existing technologies (footprint, siting, capital cost, levelized cost of electricity (LCOE))?
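Regarding the LCOE comparison raised in sub-question 4 above, the standard formula divides discounted lifetime costs by discounted lifetime generation. A minimal Python sketch follows; the SMR size, capital cost, and all other inputs are hypothetical illustrations, not figures from the thesis.

def lcoe(capex, annual_opex, annual_mwh, years, rate):
    """Levelized cost of electricity in $/MWh: discounted lifetime
    costs divided by discounted lifetime generation (inputs illustrative)."""
    costs = capex + sum(annual_opex / (1 + rate) ** t
                        for t in range(1, years + 1))
    energy = sum(annual_mwh / (1 + rate) ** t
                 for t in range(1, years + 1))
    return costs / energy

# e.g., a hypothetical 77 MWe SMR at $3,000/kW overnight cost and a
# 90% capacity factor over a 40-year life at a 7% discount rate:
# lcoe(capex=3000.0 * 77000, annual_opex=2.5e7,
#      annual_mwh=77 * 8760 * 0.90, years=40, rate=0.07)
# gives roughly 70 $/MWh for these made-up inputs.

Running the same formula for the incumbent energy system at each industrial site is one simple way to frame the competitiveness screening the thesis describes.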
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Macroscale Defect Detection in Semiconductor Manufacturing using Automated Inspection with Convolutional Neural Networks</title>
<link href="https://hdl.handle.net/1721.1/152778" rel="alternate"/>
<author>
<name>Sampson, Jonathan A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152778</id>
<updated>2023-11-03T03:31:22Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Improving Macroscale Defect Detection in Semiconductor Manufacturing using Automated Inspection with Convolutional Neural Networks
Sampson, Jonathan A.
The work detailed in this thesis explores four distinct pathways for improving wafer macroscale defect detection from both tool-centric and operator-centric perspectives. The primary tool-centric improvement detailed in this work is the implementation of machine learning-enhanced defect detection models to provide recommendations of defective wafers to review operators. This work features the theory, data acquisition and processing, and training steps for three models designed to catch three different defect types. Models are trained on spin-on-glass (SOG) defects, defects around the perimeter of a wafer, and various other defects occurring in the central area of a wafer. SOG defects, the primary focus of this work, also occur in the central area of a wafer, though they are much smaller than the defects handled by the central defect detection model. After training, the SOG defect detection model achieved an area under the curve (AUC) of 0.927 on testing data outside its training data set distribution. The edge model and the general central model achieved AUC values of 0.906 and 0.909, respectively, also on out-of-distribution testing data. These models, and the tools developed for data labeling, can be adopted for automated defect detection and for efficient data tagging for machine learning applications.

The other improvement pathways featured in this work involve additional tool-centric improvements: examining and performing corrective action on current wafer inspection tools, and evaluating the potential for in-line wafer inspection during processing. An operator-centric improvement is also detailed, describing the feature, operational, and productivity enhancements associated with the development of a new software interface for wafer image review.
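For readers unfamiliar with the AUC metric quoted above, here is a small self-contained Python example; the labels and scores are toy values, not wafer data:

import numpy as np
from sklearn.metrics import roc_auc_score

# 1 = defective wafer, 0 = clean; scores are a model's confidence that
# a wafer is defective. Both arrays are illustrative toy data.
y_true = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 0])
y_score = np.array([0.92, 0.10, 0.45, 0.80, 0.60,
                    0.05, 0.30, 0.55, 0.85, 0.20])

# AUC is the probability that a randomly chosen defective wafer is
# scored higher than a randomly chosen clean one; 0.5 is chance level.
print(roc_auc_score(y_true, y_score))   # prints 1.0 for this toy data

An AUC of 0.927 therefore means the SOG model ranks a random defective wafer above a random clean one about 93% of the time, independent of any particular alert threshold.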
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Remote Sensing of Ice Dynamics in the Beaufort Sea</title>
<link href="https://hdl.handle.net/1721.1/152777" rel="alternate"/>
<author>
<name>Flores, Matthew A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152777</id>
<updated>2023-11-03T03:25:28Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Remote Sensing of Ice Dynamics in the Beaufort Sea
Flores, Matthew A.
Arctic summer sea ice extent has undergone dramatic declines over the past several decades, particularly in the Beaufort Sea. Comprehending the sea ice decline requires an understanding of the annual sea ice retreat during the summer melt season. While there are observations of the seasonal sea ice retreat, there is no accurate data on the evolution of sea ice thickness during the melt season. This thesis presents an analysis of sea ice in the Beaufort Sea using available sea ice freeboard data taken from NASA’s Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) mission. By tracking bi-weekly changes in freeboard for Lagrangian-tracked parcels of sea ice, the patterns of sea ice retreat are examined from 01 June to 30 September for 2020-2022. This method provides realistic patterns of sea ice thinning through mid-summer, with the most pronounced thinning occurring in the eastern Beaufort Sea. By September, freeboard changes are difficult to detect, with some subregions showing an increase in freeboard (thickening). The increase in freeboard likely reflects uncertainty due to changes in the distribution of ice types, particularly the preferential disappearance of thinner ice, but also a reduced rate of thinning. Although these results are preliminary, they suggest that ICESat-2 can be used to track seasonal changes during the melt season to help identify trends and drivers of sea ice retreat. Further work is necessary to improve these results, especially in understanding how different ice types evolve. Other remote sensing data or in-situ observations are needed to reduce the uncertainty in the subregional estimates of ice melt.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Gaussian Noise in Superconducting Circuits</title>
<link href="https://hdl.handle.net/1721.1/152776" rel="alternate"/>
<author>
<name>McCourt, Trevor Johnathan</name>
</author>
<id>https://hdl.handle.net/1721.1/152776</id>
<updated>2023-11-03T03:40:58Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Non-Gaussian Noise in Superconducting Circuits
McCourt, Trevor Johnathan
In stark contrast to man-made systems, living things embrace noise and use it to further their functionality. It is therefore not surprising that some lifeforms couple strongly to environmental fluctuations, and can leverage non-Gaussian noise to gain a competitive edge over their peers. In this thesis, I study non-Gaussian fluctuations using a system of Transmon qubits as ultra-sensitive quantum sensors and make the first clear experimental observation of non-Gaussian noise in a qubit system. I achieve this using multi-qubit dynamical decoupling sequences that characterize noise during two-qubit gates when the system is coupled strongly to flux fluctuations. This noise is qualitatively different from the well-studied noise that leads to single qubit dephasing; it simultaneously affects the two qubits, inducing fluctuations in their entangling parameter. In our superconducting system, the experimentally observed noise is consistent with random telegraph noise and leads to the stepwise decay of signals. With this clear characterization of non-Gaussian noise in hand, we have paved the way for a new class of lifelike engineered systems that harness noise to their benefit.
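To give a concrete picture of the random telegraph noise mentioned above, here is a minimal Python sketch that generates such a two-state signal; the switching probability and amplitude are arbitrary illustrative values, not fitted experimental parameters:

import random

def telegraph_noise(n_steps, switch_prob=0.01, amplitude=1.0):
    """Random telegraph signal: a fluctuator that sits in one of two
    states (+amplitude or -amplitude) and flips with a fixed
    probability per time step. Parameters are illustrative."""
    state, trace = amplitude, []
    for _ in range(n_steps):
        if switch_prob > random.random():   # rare, abrupt switching event
            state = -state
        trace.append(state)
    return trace

# e.g., trace = telegraph_noise(10000) yields a step-like record whose
# slow, discrete jumps distinguish it from Gaussian noise and produce
# the stepwise signal decay described in the abstract.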
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Matching of Users and Creators on Social Media Platforms</title>
<link href="https://hdl.handle.net/1721.1/152774" rel="alternate"/>
<author>
<name>Lyu, Liang</name>
</author>
<id>https://hdl.handle.net/1721.1/152774</id>
<updated>2023-11-03T03:41:45Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Dynamic Matching of Users and Creators on Social Media Platforms
Lyu, Liang
Social media platforms are two-sided markets bridging content creators and users. Existing literature on content recommendation algorithms used by platforms often focuses on user preferences and decisions, and does not jointly address creator incentives. We propose a model of content recommendation that explicitly focuses on dynamic user-content matching, with the novel contribution that both users and creators may leave the platform if they feel dissatisfied. In our model, each player decides to stay or leave at each time step based on utilities derived from the current match: users based on their similarities with the recommended content, and creators based on their audience size. We show that a user-centric greedy algorithm that only maximizes immediate engagement can result in poor total engagement in the long run, even if users and creators are randomly generated from prior distributions, but explicitly maximizing long-term engagement is NP-hard. Finally, we present new practical algorithms with provable guarantees and good empirical performance.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Re-Thinking Urban Retail: The Design and Planning of “Dark Stores” and Public Spaces Case Study: Manhattan, New York</title>
<link href="https://hdl.handle.net/1721.1/152773" rel="alternate"/>
<author>
<name>Halim, Juanita</name>
</author>
<id>https://hdl.handle.net/1721.1/152773</id>
<updated>2023-11-03T03:21:12Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Re-Thinking Urban Retail: The Design and Planning of “Dark Stores” and Public Spaces Case Study: Manhattan, New York
Halim, Juanita
The retail industry has transformed into various formats due to the fast-paced social and sharing economy changes driven by technological advancements. The recent concept, grocery “dark stores” (retail facilities that are designed for online order fulfillment mostly located in urban areas), is expected to stay as e-commerce and omni-channel operators view them as cost-effective means of delivering quick services to customers. City officials are currently discussing the potential advantages and drawbacks of “dark stores” which could affect changes for street livability in the absence of retail storefronts. Should cities ban “dark stores” that compete with traditional brick-and-mortar retailers?&#13;
&#13;
This thesis analyzes the proliferation of online grocery shopping and how “platform urbanism” (Sadowski, 2020), a novel set of digitally-enabled socio-technological assemblages rooted in the urban, affects the spatial distribution of grocery “dark store” activities by understanding their locations and target customers. Using spatial analysis and interviews, this thesis tries to answer three questions: what is the role of grocery “dark stores” in cities?; where are they located?; and what are their impacts on the urban fabric? It uses NYC (Manhattan) 2021 decennial census and retail food stores data collected in 2022 and 2023 to provide some insights into these questions. The results show that 1) grocery “dark stores” are mostly located in neighborhood areas with a high concentration of retail food stores and facilities; 2) grocery “dark stores” in Manhattan are located mostly in the Commercial and Manufacturing districts; and 3) despite the rise of grocery “dark stores,” high funding from venture capitalists, and their promise of convenience to customers, by mid-2022 grocery “dark stores” in Manhattan faced exits due to dwindling investor funding, a competitive market landscape, and a political environment driven by Russia-backed venture capitalists.&#13;
&#13;
In the digital era, strategies to digitally transform the city need to consider the implications of different retail formats and the stakeholders involved. Urban policy and regulation are needed to address how new retail platforms can reshape the nexus between business locations, their design and function, and the public. As this thesis shows, the urgency is only growing as new forms of retail and business emerge from the tech-enabled digital economy and new urban infrastructure.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovation Ecosystems in Geographically-Remote and Resource-Limited Regions with Indigenous Populations and considering Ancestral Science, Knowledge, and Practices: Intentional Development in the Pacific Islands of Hawaiʻi, Fiji, and New Zealand</title>
<link href="https://hdl.handle.net/1721.1/152772" rel="alternate"/>
<author>
<name>Nihipali, Holly Christine Greenberg</name>
</author>
<id>https://hdl.handle.net/1721.1/152772</id>
<updated>2023-11-03T03:53:13Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Innovation Ecosystems in Geographically-Remote and Resource-Limited Regions with Indigenous Populations and considering Ancestral Science, Knowledge, and Practices: Intentional Development in the Pacific Islands of Hawaiʻi, Fiji, and New Zealand
Nihipali, Holly Christine Greenberg
Innovation ecosystems provide a way to transform and diversify a regional economy. Much of the existing research focuses on mature economies in regions with strong foundational institutions and natural resources. The research herein uses the MIT Three-S (system, stakeholder, strategy) Framework to characterize regional ecosystems that are geographically-remote and resource-limited, specifically the Hawaiian Islands, Fiji, and New Zealand. Using measurements of entrepreneurial and innovation capacities and, where possible, interviews of local stakeholders, opportunities and challenges for these regional innovation ecosystems are identified. Attention is given to the counterpoint Indigenous peoples bring to a regional innovation ecosystem. Strategies are suggested for leveraging comparative advantages. Further research and testing are recommended to trial the effectiveness of innovation and entrepreneurship to drive the transformation of tourist economies towards diversification and becoming knowledge and digital economies.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Human Code Exchange (HCX) : A Community-Value-Driven Framework for Data Governance in Humanitarian Crises</title>
<link href="https://hdl.handle.net/1721.1/152771" rel="alternate"/>
<author>
<name>Vibbi, Leonard Francis</name>
</author>
<id>https://hdl.handle.net/1721.1/152771</id>
<updated>2023-11-03T03:42:00Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Human Code Exchange (HCX) : A Community-Value-Driven Framework for Data Governance in Humanitarian Crises
Vibbi, Leonard Francis
In this study, we examine data collection methods utilized in local communities during humanitarian crises, with a focus on the Sierra Leone COVID-19 scenario. We assess how widely-used data ethics principles in humanitarian data initiatives align with community values. We define these values as encompassing shared principles, virtues, and a collective understanding of what holds significance and meaning to affected communities [1].  &#13;
&#13;
Interviews conducted in Freetown communities allowed us to identify common themes across community principles and norms [values] toward data collection activities. Identified principles held by communities were subsequently contrasted with how data collection activities guided by established data ethics guidelines in humanitarian settings were carried out in target communities.&#13;
&#13;
Our findings commend the general adherence to ethical benchmarks, yet spotlight notable gaps that call for strategies more attuned to communities' shared principles and understanding. To address this, we present the "Human Code Exchange" (HCX) ethical data governance framework. HCX promotes participatory data collection, weaving in community values and experiences, thereby ensuring a balanced exchange between data collection activities and the community, and reducing practices that are not in tune with community values. With its core focus on the community, HCX aligns humanitarian data initiatives with the intrinsic values of communities, particularly in regions of the Global South. Our work lays the foundation for a refined data governance framework that places emphasis on ethical data collection in vulnerable communities.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Neural Feature Fields for Language-Guided Robot Manipulation</title>
<link href="https://hdl.handle.net/1721.1/152770" rel="alternate"/>
<author>
<name>Shen, William</name>
</author>
<id>https://hdl.handle.net/1721.1/152770</id>
<updated>2023-11-03T03:56:49Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Neural Feature Fields for Language-Guided Robot Manipulation
Shen, William
Self-supervised and language-supervised image models contain rich knowledge of the world that is important for generalization. Many robotic tasks, however, require a detailed understanding of 3D geometry, which is often lacking in 2D image features. This work bridges this 2D-to-3D gap for robotic manipulation by leveraging distilled feature fields to combine accurate 3D geometry with rich semantics from 2D foundation models. We present a few-shot learning method for 6-DOF grasping and placing that harnesses these strong spatial and semantic priors to achieve in-the-wild generalization to unseen objects. Using features distilled from a vision-language model, CLIP, we present a way to designate novel objects for manipulation via free-text natural language, and demonstrate its ability to generalize to unseen expressions and novel categories of objects.
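A minimal sketch of the language-guided lookup this abstract describes: score the 3D points of a distilled feature field against a CLIP text embedding and keep the best matches. The array names and the upstream encoders are assumptions, not the thesis's API.

import numpy as np

def select_points_by_text(point_features, text_embedding, top_k=100):
    # point_features: [N, D] per-point features distilled from 2D
    #                 foundation-model views into the 3D field
    # text_embedding: [D] CLIP embedding of a free-text query, e.g. "red mug"
    f = point_features / np.linalg.norm(point_features, axis=1, keepdims=True)
    t = text_embedding / np.linalg.norm(text_embedding)
    scores = f @ t  # cosine similarity of each 3D point to the query
    return np.argsort(scores)[::-1][:top_k]  # most query-relevant points

A grasp sampler could then restrict candidate 6-DOF grasps to regions dense in high-scoring points.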
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Ecosystem Reboot : How scientists are building an inside-out Noah’s Ark for Florida’s vanished coral reefs</title>
<link href="https://hdl.handle.net/1721.1/152769" rel="alternate"/>
<author>
<name>Guy, Allison</name>
</author>
<id>https://hdl.handle.net/1721.1/152769</id>
<updated>2023-11-03T03:47:54Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Ecosystem Reboot : How scientists are building an inside-out Noah’s Ark for Florida’s vanished coral reefs
Guy, Allison
In Florida, a deadly marine plague called stony coral tissue loss disease has inspired an unprecedented conservation plan: to rescue affected corals from the wild, and keep them alive in captivity, indefinitely. The idea was to make a Noah’s Ark turned inside out, evacuating corals from an inhospitable ocean, and raising, breeding and propagating them on land, with the quixotic hope that the reef can one day be rebooted from its backup copy. To do so, Florida’s coral community would need to collect thousands of corals, find places to warehouse their charges, and figure out ways to grow big, genetically diverse captive populations. And with stony coral tissue loss spreading swiftly up and down the state’s coast, they needed to act fast.&#13;
&#13;
This may be the most audacious conservation plan ever attempted — not just to save a species here and there, but to rescue the basis of an entire ecosystem, and to keep it alive through everything the future has in store. And where Florida’s beleaguered reefs go, the rest of the world will follow. Sooner or later, but most likely sooner, corals everywhere will be in need of their own inside-out arks, ferrying them towards some hoped-for future. Improbable as it seems, it just might work.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Shared Equity Homeownership in Korea: Analysis of the First Public Programs</title>
<link href="https://hdl.handle.net/1721.1/152768" rel="alternate"/>
<author>
<name>Park, Joon Tae</name>
</author>
<id>https://hdl.handle.net/1721.1/152768</id>
<updated>2023-11-03T03:15:41Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Shared Equity Homeownership in Korea: Analysis of the First Public Programs
Park, Joon Tae
Korea's public housing policy has reached a turning point, with an emphasis on alternative housing tenure types. Based on the notion of intermediate housing, three homeownership programs—land-lease housing for sale, profit-sharing housing for sale, and accumulated equity housing for sale—have been introduced. The high competition rates shown in these recent projects have proven the demand for these new intermediate or transitional homeownership programs. However, to avoid further trial and error, there is a need for rich discussions on what should be the founding principles and methods of implementation of these new homeownership programs.&#13;
&#13;
This study analyzes Korea's new homeownership programs based on the shared equity homeownership (SEH) models. To provide grounds for the evaluation, a broad range of literature and statistical data were examined. In turn, the principles and methodologies of the SEH models were derived, and the three homeownership programs were explained, including their history and individual projects. As a result of the analysis, it was difficult to conclude that the three homeownership programs have adopted the principles and methodologies of the SEH models. To sustain the supply of affordable housing and to improve the lives of the homeowners who live within them, lessons from the SEH models should be taken into account.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Circadian and Multi-day Rhythms in Generalized Tonic-Clonic Seizure: A Probabilistic Approach</title>
<link href="https://hdl.handle.net/1721.1/152767" rel="alternate"/>
<author>
<name>Zhang, Boyu</name>
</author>
<id>https://hdl.handle.net/1721.1/152767</id>
<updated>2023-11-03T03:29:57Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Circadian and Multi-day Rhythms in Generalized Tonic-Clonic Seizure: A Probabilistic Approach
Zhang, Boyu
Epilepsy is a chronic neurological disorder characterized by recurrent seizures that affect more than 50 million people worldwide, representing approximately 0.6% of the global population. This condition poses significant public health challenges, with a heightened risk of premature mortality. Underdiagnosis and undertreatment remain pervasive, particularly in low- and middle-income countries.&#13;
&#13;
Studies have discovered that seizure occurrences are phase-locked to subject-specific circadian and multi-day rhythms in human physiological signals. Also, various types of epilepsy have distinctive timing patterns with respect to sleep-wake cycles. However, it remains inconclusive how sleep parameters, non-invasive ambulatory physiological signals, and seizure occurrences are quantitatively related.  &#13;
&#13;
We first conduct an observational study on the association between sleep parameters, including duration, efficiency, fragmentation, and regularity, and generalized tonic-clonic seizure (GTCS) occurrences on the next day. We then conduct retrospective analyses of GTCS events phase-locking to rhythms in wrist electrodermal activity (EDA), validating previous claims. Ambulatory sleep-wake cycles and EDA recorded by smart wristbands from more than 1,000 patients diagnosed with GTCS are analyzed. GTCS events are detected by an FDA-cleared algorithm on the wristband.
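As a sketch of the phase-locking analysis mentioned above, event times can be mapped to phases of an assumed rhythm and tested for non-uniformity with a Rayleigh test; the event times below are synthetic, not patient data.

import numpy as np

def rayleigh_test(phases):
    # Rayleigh test for non-uniformity of circular data (radians).
    n = len(phases)
    r = np.abs(np.exp(1j * phases).mean())  # mean resultant vector length
    z = n * r ** 2
    p = np.exp(-z) * (1 + (2 * z - z ** 2) / (4 * n))  # common approximation
    return r, p

period_h = 24.0  # circadian; a multi-day rhythm would use a longer period
event_times_h = np.array([2.1, 3.4, 26.0, 27.9, 50.2, 51.7, 74.5])  # synthetic
phases = 2 * np.pi * (event_times_h % period_h) / period_h
r, p = rayleigh_test(phases)
print(f"resultant length {r:.2f}, Rayleigh p = {p:.3g}")  # small p suggests phase-locking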
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Addressing Inefficiencies and Reflecting the Desires of Low-Income Housing Stakeholders: Recommendations to the Department of Housing and Urban Development to Deploy a Simplified, Developer-Driven Affirmative Fair Housing Marketing Plan Filing Process, as well as Review Proposals for Adaptive Policy Mechanisms</title>
<link href="https://hdl.handle.net/1721.1/152762" rel="alternate"/>
<author>
<name>Ananthabhotla, Bhavani</name>
</author>
<id>https://hdl.handle.net/1721.1/152762</id>
<updated>2023-11-03T03:51:28Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Addressing Inefficiencies and Reflecting the Desires of Low-Income Housing Stakeholders: Recommendations to the Department of Housing and Urban Development to Deploy a Simplified, Developer-Driven Affirmative Fair Housing Marketing Plan Filing Process, as well as Review Proposals for Adaptive Policy Mechanisms
Ananthabhotla, Bhavani
The Affirmative Fair Housing Marketing Plan (AFHMP) is a set of regulations passed by the U.S. Department of Housing and Urban Development (HUD) to govern the sharing of information about applications for low-income rental housing in accordance with the Fair Housing Act. Collaborators of this work at Camfield Estates, a low-income housing development in Boston, MA, communicated concerns over the regulations’ efficacy as well as desires for increased autonomy in the process of tenant selection and application marketing. The purpose of the research conducted was to describe, using social science and statistical methods, the limitations of the AFHMP regulations that are pertinent to low-income developments, to amplify any voiced concerns of Camfield Estates that may also help other low-income developments, and to offer suggestions for improving the AFHMP regulations so that they align with their original goal.&#13;
&#13;
Qualitative interviews of low-income housing developers, development residents and staff, and HUD New England Compliance staff were conducted to identify the following limitations of the AFHMP, which prevent effective enforcement of fair housing goals: (1) that there is significant administrative burden, for both filers and HUD staff, in maintaining and checking for policy compliance, (2) that guidelines for when to file updates were underdefined, (3) that guidelines for how to conduct the analysis to determine groups least likely to apply to a property were underdefined, and (4) that both stakeholders at low-income developments and HUD New England Compliance demonstrated interest in extending affirmative marketing to improve outcomes for those with intersectional identities, such as to address the difficulties of accessing housing while being single, male, and Black. A quantitative analysis of AFHMP, resident, and census data for Camfield Estates was conducted to study the first, second, and third concerns in context. &#13;
&#13;
Recommendations are made for immediate changes that would respect Camfield Estates’ concerns of autonomy and would not significantly increase the administrative burden for HUD, including: (1) that the AFHMP form be simplified to reduce administrative burden, to reduce room for error in the analysis of groups least likely to apply to the development, and to reduce barriers to updating marketing strategy more frequently if needed, (2) that greater flexibility should be allowed in determining affirmative marketing strategy, perhaps by allowing qualitative, free-form responses, and (3) that developers should themselves determine the groups least likely to apply to the development, and HUD should send out a memo banning other agents like housing authorities from limiting developers with pre-completed, read-only analysis on forms. A recommendation is also made for space to be allowed on the newest AFHMP forms for a link to a survey, so that further work can be conducted by approved researchers. To support a long-term feedback mechanism for policy relevance, an exploration of adaptive regulations to govern fair marketing is presented.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cybersecurity Risk Assessment Matrix (CRAM): A System-Theoretic Approach to Balancing Operational and Cybersecurity Risk in the Management of Transient Cyber Assets (TCA) in the Maintenance of Operational Technology (OT)</title>
<link href="https://hdl.handle.net/1721.1/152760" rel="alternate"/>
<author>
<name>Nurthen II, John Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/152760</id>
<updated>2023-11-03T04:02:07Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Cybersecurity Risk Assessment Matrix (CRAM): A System-Theoretic Approach to Balancing Operational and Cybersecurity Risk in the Management of Transient Cyber Assets (TCA) in the Maintenance of Operational Technology (OT)
Nurthen II, John Michael
Less than 10 years ago, cyber security of critical infrastructure was a topic of interest in various circles of focused technical subject matter expertise. Today, it has become a mainstream topic of discussion, all too often highlighted by large-scale incidents with global visibility and impact such as Stuxnet, Triton, the Colonial Pipeline attack, or the multiple Russian cyber-attacks on Ukraine. Through President Joe Biden’s Executive Order on Improving the Nation’s Cybersecurity, issued 12 May 2021, and the March 3, 2023 release of the new National Cybersecurity Strategy, deliberate action and improvement have been demanded at the highest levels of the Federal Government.&#13;
&#13;
Although the digital revolution has established its presence in the automation, oversight, and management of critical facilities and utility systems, the knowledge gap between the management of the mechanical and digital platforms remains significant.  This exposes a critical vulnerability in the oversight of electromechanical processes such as those used to control utility systems, machinery, and industrial processing, often referred to as Operational Technology (OT). OT, by way of delivering its fundamental value amongst the systems and environments in which it operates, demands both routine and non-routine maintenance and repair.  Increasingly often, the required maintenance/repair cannot proceed without the introduction and use of an electronic device (e.g., to run diagnostics, troubleshoot error codes, update OT firmware/software, or test and balance).  While not a ubiquitous term amongst all infrastructure industries, the North American Electric Reliability Corporation (NERC) defines the electronic device in this scenario as a Transient Cyber Asset (TCA).  &#13;
&#13;
The introduction of a TCA to the FRCS/OT ecosystem is a well-known and significant threat vector.  In this scenario, there are multiple actions that can be taken to mitigate the cybersecurity risk introduced by the TCA, but the solution is entirely dependent on the time, resources, and capabilities available in that specific location.  Increasingly often, the electronic device required for the maintenance/repair is untrusted and operated by a technician focused on the operational need of the maintenance/repair.  Notably, this scenario requires a field-level decision to be made by a non-IT professional (e.g., a Facility Manager) who must consider the tradeoff between the operational need of the maintenance/repair and the cybersecurity risk associated with the use of the untrusted device.&#13;
&#13;
Through literature review and subject matter expert interviews conducted in conjunction with the Department of Defense, MIT Lincoln Laboratory, Cybersecurity at MIT Sloan (CAMS), and private industry, this thesis proposes CRAM (Cybersecurity Risk Assessment Matrix), a repeatable, tailorable, risk-based decision framework that incorporates both the cybersecurity risk and the operational risk associated with a given maintenance/repair scenario. The aim is to give facility managers in the field a reliable tool for the timely assessment and mitigation of risk in day-to-day operations and maintenance conducted by outside contractors with untrusted electronics.&#13;
 &#13;
This thesis thus provides a rudimentary framework to aid in determining how much risk is acceptable in order to maintain operations, and how decision makers in this space can make sensible, informed, Cybersafe decisions on a routine basis.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fostering Well-Being: Designing Technology to Improve the Psychological Well-being of Foster-Involved Youth</title>
<link href="https://hdl.handle.net/1721.1/152759" rel="alternate"/>
<author>
<name>Kumar, Ila Krishna</name>
</author>
<id>https://hdl.handle.net/1721.1/152759</id>
<updated>2023-11-03T03:52:04Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Fostering Well-Being: Designing Technology to Improve the Psychological Well-being of Foster-Involved Youth
Kumar, Ila Krishna
Over 600,000 youth in the United States experience abuse or neglect each year. Youth who are deemed to be at risk of significant harm in their homes are often removed and placed in a temporary housing situation known as foster care. Despite this system’s goal of supporting youth, research suggests that foster care can negatively impact youths’ ability to heal and develop the skills they need to reach their goals and avoid future traumatic situations. Given that very little has been done to explore how technology might be able to help youth heal and learn coping skills, this project aimed to explore if and how internet-connected technologies (such as smartphones and computers) might be able to support the psychological well-being of youth in and transitioning out of the foster care system. We approached these questions in three phases. In Phase 1, we conducted broad, semi-structured interviews with 16 current and former foster-involved youth to understand their experience and explore if and how technology could promote psychological well-being for foster-involved youth. Through this phase, we learned that young people are especially concerned about the lack of social support youth have in foster care and see opportunities for peer-to-peer technology to fill this need. In Phase 2, we built off these findings by prototyping and testing multiple peer-to-peer support app designs with 24 current and former foster-involved youth. Through this iterative process, we identified that a community-based, reflective check-in system might allow youth to give and receive most types of social support in a safe and comfortable environment. Finally, in Phase 3, we tested this system through a two-week mixed-methods pilot study with 15 current and former foster-involved youth, collecting data to suggest that this type of interface can provide youth with multiple types of social support and thereby improve their psychological well-being.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>WACO: Learning workload-aware co-optimization of&#13;
the format and schedule of a sparse tensor program</title>
<link href="https://hdl.handle.net/1721.1/152757" rel="alternate"/>
<author>
<name>Won, Jaeyeon</name>
</author>
<id>https://hdl.handle.net/1721.1/152757</id>
<updated>2023-11-03T03:42:22Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">WACO: Learning workload-aware co-optimization of&#13;
the format and schedule of a sparse tensor program
Won, Jaeyeon
Leveraging the existence of the large number of zeros in sparse tensors offers a powerful way to solve complex problems efficiently in many applications. However, optimizing the performance of those applications poses a challenge. Sparse tensor programs must find the ideal balance between data format and implementation strategy to achieve optimal performance.&#13;
&#13;
This thesis presents WACO, a novel method of co-optimizing the format and schedule of a given sparsity pattern in a sparse tensor program. A core challenge in this thesis is the design of a lightweight cost model that accurately predicts the runtime of a sparse tensor program by considering the sparsity pattern, the format, and the schedule. The key idea in addressing this is exploiting a sparse convolutional network to learn meaningful features of the sparsity pattern and embedding a coupled behavior between the format and the schedule using a specially designed schedule template. In addition, within the enormous search space of co-optimization, our novel search strategy, an approximate nearest neighbor search, efficiently and accurately retrieves the best format and schedule for a given sparsity pattern.&#13;
&#13;
We evaluate WACO for four different algorithms (SpMV, SpMM, SDDMM, and MTTKRP) on a CPU using 726 different sparsity patterns. Our experimental results show that WACO outperformed four state-of-the-art baselines: Intel MKL, a format-only auto-tuner, TACO with a default schedule, and ASpT. Compared to the best of the four baselines, WACO achieved 1.43×, 1.18×, 1.14×, and 1.27× average speedups on SpMV, SpMM, SDDMM, and MTTKRP, respectively.
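A toy stand-in for the retrieval idea described above: embed the sparsity pattern, find its nearest previously tuned patterns, and reuse the fastest recorded (format, schedule) pair. Exact k-NN substitutes here for WACO's approximate nearest neighbor search, and every name is a placeholder.

import numpy as np

def retrieve_best_config(query_emb, corpus_embs, corpus_records, k=5):
    # query_emb: [D] embedding of the new sparsity pattern
    # corpus_embs: [M, D] embeddings of previously tuned patterns
    # corpus_records: list of M (format, schedule, runtime) tuples
    dists = np.linalg.norm(corpus_embs - query_emb, axis=1)
    nearest = np.argsort(dists)[:k]  # exact k-NN stands in for the ANN index
    best = min(nearest, key=lambda i: corpus_records[i][2])  # lowest runtime
    fmt, schedule, _ = corpus_records[best]
    return fmt, schedule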
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Urgency of Presence: Designing Healing Community Spaces After Displacement</title>
<link href="https://hdl.handle.net/1721.1/152756" rel="alternate"/>
<author>
<name>Teng, Melissa Q.</name>
</author>
<id>https://hdl.handle.net/1721.1/152756</id>
<updated>2023-11-03T04:06:03Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Urgency of Presence: Designing Healing Community Spaces After Displacement
Teng, Melissa Q.
Named for its proximity to the intersection of Massachusetts Avenue and Melnea Cass Boulevard, “Mass. and Cass” is an informal neighborhood in Boston that is often described in the news with disaster-tinged language like “epicenter” and “tent city”. After this neighborhood was declared a “public health crisis”, the City of Boston made major investments into constructing and bolstering permanent supportive housing and other much-needed services. But when we sat with its unhoused, drug-using, and outreach communities on the ground, they described parallel investments in militarized public spaces, an exclusionary neighborhood planning process, and stigmatizing media stories that overemphasize the neighborhood’s crime and violence. Most narratives about “Mass. and Cass” ignore these structural oppressions, exemplifying how current “solutions” to homelessness are less concerned with the well-being of unhoused people and more with their disappearance from public space. In response, our art collective See You In The Future has been working with community members of “Mass. and Cass” and poor people’s movements to research how histories of crisis and displacement connect with current anti-homeless policies, and to collectively imagine what healing community spaces might feel like. Centering the wisdom and lived experiences of residents and staff—and informed by liberatory and loving philosophies like harm reduction, disability justice, and abolition—we offer four spatial design values: belonging, care, hope, and growth. As our project is ongoing, this document shares our work thus far: our methods rooted in seeing and solidarity; research on the creative labor of maintaining community spaces despite policy interventions; practical notes on designing workshops and a mural; and finally reflections on presence and solidarity as outside artists and designers. Because we are focusing on community stories, which are in some sense infinite, I present our work as a series of essays to emphasize the indeterminate, character-led, and emotional nature of our methods and findings. My hope is this reads like a walk, where our feet stay planted on the ground and the humanity of community members never leaves our sight.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Age-Inclusive Design Framework for  On-Demand, Shared Autonomous Vehicles</title>
<link href="https://hdl.handle.net/1721.1/152755" rel="alternate"/>
<author>
<name>Hong, David</name>
</author>
<id>https://hdl.handle.net/1721.1/152755</id>
<updated>2023-11-03T04:10:20Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Age-Inclusive Design Framework for  On-Demand, Shared Autonomous Vehicles
Hong, David
The often repeated promise of autonomous vehicles is to make transportation safer, cleaner, more accessible, and more convenient, in particular for vulnerable and underserved groups such as older adults and people using mobility devices. This future, however, is far from guaranteed; rather, it must be paved by a number of stakeholders, including, at minimum, those who have traditionally been underserved, as well as designers, AV makers and operators, policy makers, and regulatory authorities. If we do not carefully study the mobility needs of users, young and old, and design to meet them, we stand to repeat the same inaccessibility in new mobility as occurred with transportation network companies (TNCs). For AVs, the time to think about age-inclusive design is now, and I make the case for this here. This thesis explores the following questions: ‘How can we imagine a fully autonomous future if we do not have a viable transportation pathway for younger children and older adults?’ ‘What challenges might users of mobility devices (e.g., rollators, baby strollers) face in using driverless vehicles with hitherto unseen form factors?’ ‘What spatial allowances and features should vehicle designers consider when re-imagining the interior space of autonomous vehicles?’ The study analyzes user needs, questions, and suggestions across ten (10) vehicle touchpoints, and presents a series of recommendations aimed at design, operation, policy, regulation, and institutional reform.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Municipal Bonds for Financing India's Urban Infrastructure: The Case of Indore</title>
<link href="https://hdl.handle.net/1721.1/152754" rel="alternate"/>
<author>
<name>Gangamreddypalli, Lakshmi</name>
</author>
<id>https://hdl.handle.net/1721.1/152754</id>
<updated>2023-11-03T03:42:07Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Municipal Bonds for Financing India's Urban Infrastructure:                                                      The Case of Indore
Gangamreddypalli, Lakshmi
To address the challenges arising from growing urbanization, local governments in India need to allocate significant funds to facilitate the development of urban infrastructure in the coming decades. The financial constraints experienced by governments at various levels, especially at the local level, underscore the need for alternative financing methods to bridge the substantial investment gap. Municipal bonds present a viable option for accessing the capital market for long-term debt to finance urban infrastructure.&#13;
&#13;
India’s history with municipal bonds dates back to the mid-1990s, yet its municipal bond market is shallow and urban local bodies remain highly underleveraged. Recent initiatives aimed at developing the municipal bond market have led to an increase in bond issues since 2017. However, this activity is very limited and few municipalities have been successful in issuing bonds. In this context, Indore’s relatively active participation in India’s municipal bond market, despite facing similar challenges as other municipalities, offers an interesting case study. This thesis analyzes Indore Municipal Corporation’s latest green bond issuance and situates it within the trajectory of municipal bond financing in India in order to understand the factors contributing to the city’s performance, and to reflect on the replicability and scalability of these factors to proximate contexts.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Facilitating Adoption of Continuous Manufacturing Platforms in the Pharmaceutical Industry</title>
<link href="https://hdl.handle.net/1721.1/152753" rel="alternate"/>
<author>
<name>Klukovich, Hope</name>
</author>
<id>https://hdl.handle.net/1721.1/152753</id>
<updated>2023-11-03T03:40:20Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Facilitating Adoption of Continuous Manufacturing Platforms in the Pharmaceutical Industry
Klukovich, Hope
Continuous manufacturing (CM) of pharmaceutical products has gained a great deal of interest over the past decade. CM promises multiple benefits to all pharmaceutical industry stakeholders; however, pharmaceutical manufacturers generally have been slow to invest in the technology and even slower to transition their manufacturing operations from batch, even when a CM process would make the most sense. This thesis aims to drive the implementation of CM to augment batch manufacturing, allowing for a wider array of manufacturing tools in the pharmaceutical manufacturing enterprise. It takes a system-focused approach to developing a change system for small-molecule pharmaceutical manufacturing, with the emergent property of an actionable framework, based on systems architectural design, that manufacturers can use. This research employs the Architecting Innovative Enterprise Strategy (ARIES) Framework to illustrate the current and future landscapes of the pharmaceutical manufacturing enterprise. First, the problem space, specifically the environment that impacts the pharmaceutical manufacturing enterprise, including stakeholders and governing agencies, is described. Second, the envisioned future for the drug manufacturing enterprise, in which the enterprise adopts CM as a dominant manufacturing process rather than relying solely on batch manufacturing, is examined. Finally, a framework is synthesized for the transition to CM in the pharmaceutical manufacturing enterprise, derived from the ARIES elements (strategy, process, organization, knowledge, products, services, information, infrastructure) nested in the previously described ecosystem and stakeholders. This framework is not prescriptive; rather, it is intended to be adapted to each company's unique business model and operational circumstances.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility of a Human Capital Digital Twin</title>
<link href="https://hdl.handle.net/1721.1/152752" rel="alternate"/>
<author>
<name>Lindstrom, Ethan</name>
</author>
<id>https://hdl.handle.net/1721.1/152752</id>
<updated>2023-11-03T04:04:40Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Feasibility of a Human Capital Digital Twin
Lindstrom, Ethan
CEOs often state that people are their most important asset, thus expressing that human capital is vital to achieving businesses’ strategic outcomes. Yet, very few would be likely to list their HR information system as a critical enabler of their business. Meanwhile, engineering disciplines have begun combining technological advances to create digital twins of important company assets, which provide unprecedented visibility into the state of these assets and can significantly improve the ability to manage them. This thesis asks if it is possible to apply those cutting-edge engineering tools (digital twins) to enhance the visibility and management of human capital. And if yes, what could such a system potentially look like?&#13;
&#13;
To get there, the thesis defines the scope and objectives of a potential human capital digital twin, analyzes existing HR systems to see if they are already digital twins, and then proposes a conceptual architecture for a skills-focused human capital digital twin. The risks (both technical and sociotechnical) of implementing such a system are then evaluated and discussed.&#13;
&#13;
The conclusion is that while creating a human capital digital twin appears to be possible, it is less clear whether it is advisable. ‘Technifying’ HR comes with significant risks that may outweigh the benefits. However, elements of the proposed system may still be worth adopting (or adapting), as there can be substantial benefits for employees and the business in taking a skills-based approach to managing human capital.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance characterization of functional fiber detectors: scintillating fibers with embedded photodiodes</title>
<link href="https://hdl.handle.net/1721.1/152751" rel="alternate"/>
<author>
<name>Ohstrom, E. V.</name>
</author>
<id>https://hdl.handle.net/1721.1/152751</id>
<updated>2023-11-03T03:24:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Performance characterization of functional fiber detectors: scintillating fibers with embedded photodiodes
Ohstrom, E. V.
In various operational scenarios, the effectiveness of existing radiation detection technologies is frequently hindered by limitations in terms of portability, adaptability, and cost-effectiveness. Bridging this critical gap necessitates innovative approaches, and thus, this thesis proposes a solution in the form of radiation-sensitive functional fabrics. This innovative concept involves the integration of avalanche photodiodes within scintillating fibers, thereby engendering a detector that is not only lightweight and flexible, but also remarkably affordable.&#13;
&#13;
The foundation of this methodology revolves around the utilization of a convergent thermal draw process, an intricate technique that yields millimeter-thick fibers encompassing all essential detector components. Through a series of iterative experiments, a limited number of fibers embedded with silicon photomultipliers (SiPMs) have been produced for study. The light attenuation length of each prototype functional fiber detector is measured. In addition, the SiPMs used in this work have been carefully calibrated to establish a precise correspondence between the measured energy and the number of photoelectrons detected by the SiPM. This calibration allows for determination of the detection threshold of the functional fiber detectors, underpinning their effectiveness in radiation detection.&#13;
&#13;
After obtaining a clear understanding of the performance, future plans include more complicated multi-fiber arrays and fabrics. The potential applications of functional fiber detectors include identification of unknown radioactive sources, wearable detectors for warfighters and first responders, and flexible detector arrays for arms control applications.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Architecting an Upstream Oil and Gas Enterprise for Innovation</title>
<link href="https://hdl.handle.net/1721.1/152750" rel="alternate"/>
<author>
<name>Dargis, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/152750</id>
<updated>2023-11-03T03:44:09Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Architecting an Upstream Oil and Gas Enterprise for Innovation
Dargis, Justin
The oil and gas industry’s rapidly evolving and dynamic nature has historically led to significant volatility within the energy sector. Upstream oil and gas enterprises, in particular, have lagged in adapting and adjusting their corporate strategies to embrace cleaner and more sustainable practices in oil and gas production. This highlights the critical need for enterprise transformation that fosters innovation and positions companies as leaders in the industry.&#13;
&#13;
Creating an innovative upstream oil and gas enterprise requires a fundamental shift in the fabric that has traditionally defined success in the industry. Unfortunately, the conventional work processes and procedures have proven to be ill-suited for adapting to the changing times and evolving societal pressures. The industry’s heavy reliance on external parties and their prescribed processes further contributes to the rigidity and impedes a focus on innovation.&#13;
&#13;
To address these challenges, this work applies the ARIES methodology, incorporating insights gathered from interviews, to lay the groundwork for designing a flexible enterprise that promotes collaboration and innovation. Different enterprise concepts and attributes were assessed with various evaluative strategies, including Multi-Attribute Utility, tradespace analysis, the Pugh Matrix, and weighted SWOT analysis.&#13;
&#13;
By embracing a more flexible and innovative approach, upstream oil and gas enterprises can break free from the constraints of traditional practices and position themselves at the forefront of industry transformation. This shift will enable them to navigate the ever-changing landscape more effectively and contribute to sustainable and responsible oil and gas production in alignment with societal and environmental expectations.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Coast Guard Aviation &amp; the Assignment Problem: &#13;
An Auction Model to Allocate the Future 'All-Jayhawk' Fleet</title>
<link href="https://hdl.handle.net/1721.1/152748" rel="alternate"/>
<author>
<name>Ensley, Kyle L.</name>
</author>
<id>https://hdl.handle.net/1721.1/152748</id>
<updated>2023-11-03T03:47:01Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Coast Guard Aviation &amp; the Assignment Problem: &#13;
An Auction Model to Allocate the Future 'All-Jayhawk' Fleet
Ensley, Kyle L.
As the US Coast Guard (CG) prepares to transition from a mixed rotary wing fleet of MH-65 Dolphins and MH-60 Jayhawks to an ‘All-Jayhawk’ fleet, an opportunity is presented to seek an optimized set of aircraft assignments prior to making capital facilities investments.  Through more optimized assignments, the CG can achieve better mission value for the cost.  The objective of this thesis is to build a model that aids the CG in making rotary wing aircraft basing and satellite unit organizational decisions as it transitions to an ‘All-Jayhawk’ fleet of 127 aircraft, one that can trade off between geographic coverage and cost.  The decision to assign Jayhawks to different aviation locations will be assessed under the auspices of the ‘Assignment Problem,’ the combinatorial optimization problem of assigning the elements of two sets to each other while optimizing an overall metric.  Optimization will be sought with an auction technique, one solution to the Assignment Problem.&#13;
&#13;
This thesis will begin with a historical review of the CG’s rotary wing fleet and aviation facilities since the CG first created an aviation program in 1916.  This review will showcase trends and possible correlation between increasing rotary wing aircraft ranges, reductions in full-service Air Stations, and growth in satellite aviation facilities used to forward deploy aircraft.  This thesis will then break down these different Aviation Support Constructs by Architectural Decisions and model them with Design Structure Matrices to better understand differences and cost drivers.  The Architectural Decisions will be used to build a model that estimates the total cost of the Jayhawk fleet’s global assignment to any mix of 39 locations under four Support Constructs.  Ten years of CG mission data and aircraft capability range rings will be overlaid in GIS software, to visualize and quantify where CG missions are required, and which air stations are most valuable.  Six Assignment Problem Auctions will then be conducted with differing objective criteria to seek a best identifiable set of global assignments for the Jayhawk fleet, with metrics including mission coverage percent and the Net Present Value cost of the assignment set over the fleet’s lifespan.  &#13;
&#13;
This analysis and the six auctions will show the relationship between geographic mission coverage and costs and will suggest a Pareto front to showcase a short list of sets of global Jayhawk assignments for consideration by the CG.  Auction B will be performed with the objective criteria to seek the lowest cost set of fleet assignments while still achieving the threshold mission coverage rate.  Auction B’s result will be proposed as the best-identifiable result, achieving the baseline mission coverage percent with only 14 aviation locations, 25 fewer than the status quo, and 36% less expensive than the CG’s notional plan.  Following demonstration of this technique, it will be proposed for use by the CG, to be adapted with refined objective criteria, to seek an optimal set of global assignments for the future All-Jayhawk fleet.
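For readers unfamiliar with the auction technique invoked above, a basic Bertsekas-style auction for the assignment problem looks roughly like the sketch below; the value matrix is a toy, not CG mission data.

import numpy as np

def auction_assignment(value, eps=0.01):
    # value[i, j]: value of assigning bidder i to object j (e.g., an
    # aircraft set to a location); returns an eps-optimal assignment.
    n = value.shape[0]
    prices = np.zeros(n)
    owner = -np.ones(n, dtype=int)     # owner[j]: bidder holding object j
    assigned = -np.ones(n, dtype=int)  # assigned[i]: object held by bidder i
    while (assigned < 0).any():
        i = int(np.where(assigned < 0)[0][0])  # any unassigned bidder
        net = value[i] - prices
        j = int(net.argmax())
        second = np.partition(net, -2)[-2]
        prices[j] += net[j] - second + eps  # bid up the best object's price
        if owner[j] >= 0:
            assigned[owner[j]] = -1  # previous holder is outbid
        owner[j], assigned[i] = i, j
    return assigned, prices

assignment, _ = auction_assignment(np.array([[10.0, 2.0], [8.0, 7.0]]))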
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Innovation in Technical Teams: A Study of Design Thinking and Systems Architecture Integration</title>
<link href="https://hdl.handle.net/1721.1/152747" rel="alternate"/>
<author>
<name>Anderson, Warren V.</name>
</author>
<id>https://hdl.handle.net/1721.1/152747</id>
<updated>2023-11-03T03:19:49Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Enhancing Innovation in Technical Teams: A Study of Design Thinking and Systems Architecture Integration
Anderson, Warren V.
With increasing discoveries in technology and new emerging markets, large enterprises and agencies see a rising demand for innovation. As a result, the roles and responsibilities of technical experts, including engineers and scientists, in these organizations are growing. Technical experts are being pushed to expand their capabilities beyond solution evaluation and into divergent concept exploration space. Additional tools and skills support are needed to assist these technical experts in this new approach.&#13;
&#13;
NASA's Aeronautics Research Mission Directorate (ARMD) is dedicated to transforming aviation to meet the nation's and the world's future needs. The Convergent Aeronautics Solution (CAS) project was developed to accelerate ARMD's innovation capabilities. The CAS project is designing a bespoke innovation framework that fits its culture and mission through human-centered design and leveraging tools from systems architecture to address complex societal problems through aviation. This thesis investigates how to influence the technical experts using the CAS project as a case study in addition to interviews conducted with team members. &#13;
&#13;
This real-world case study provided a unique opportunity to observe a large agency. This thesis discusses three insights that emerged from this research into how to support new technical teams during ideation. First, embrace the natural tendency of technical experts to generate concepts. While systems architecture and human-centered design prescribe exploring the problem before developing concepts, it is better to make some space for the technical experts to propose ideas. Second, concept generation and the ideation process can benefit from one or more experienced facilitators who help keep the team in a generative mindset. Teams new to the ideation process need assistance while they gain experiential learning of this new approach. Finally, this early lifecycle exploration of the problem and the stakeholder's needs can be ambiguous and challenging. The tools and methods of human-centered design and systems architecture can help structure the approach for problem formulation, interpreting the stakeholder's needs, and generating transformative solutions.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Industry Platforms: Case Studies to Measure Platform Capabilities for US Unicorns</title>
<link href="https://hdl.handle.net/1721.1/152746" rel="alternate"/>
<author>
<name>AlSadah, Yousif Fayez</name>
</author>
<id>https://hdl.handle.net/1721.1/152746</id>
<updated>2023-11-03T03:32:35Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Industry Platforms: Case Studies to Measure Platform Capabilities for US Unicorns
AlSadah, Yousif Fayez
Large-sample empirical research by Cusumano et al. found that US privately-held unicorns with platform capabilities command on average a 123% premium over non-platforms. However, measuring the extent to which a company is platform or non-platform based is a difficult problem given the complexities of business organizations and how their business activities interact with each other in non-linear ways.  &#13;
&#13;
This thesis attempts to address this by proposing a systems-thinking, case-based approach to evaluate the key business activities of a firm with potential platform capabilities, using the author’s proposed Platform Classification Matrix on five of the largest US privately-held firms: Epic Games, Databricks, Plaid Technologies, Stripe, and Instacart. Each business activity for a firm is classified as platform or non-platform, and if it is a platform then it is assessed based on its revenue contributions to the firm and three strength metrics: network effects, strength against multihoming, and new-entrant deterrence. This matrix generates a ‘platform strength’ metric and allows identification of the platform activity with the most potential toward a winner-take-all-or-most outcome.&#13;
&#13;
The author further proposes combining this matrix with a system dynamics approach to identify how differing business activities can boost or hinder the leading platform service, which allows decision makers to assess whether retaining or subsidizing seemingly low-performing business lines is strategic for their leading platform.&#13;
&#13;
The thesis concludes by advocating for using both methods as well as the generated metrics to perform a holistic analysis when evaluating firms with platform capabilities potential.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Achieving Robustness and Generalization in MARL&#13;
for Sequential Social Dilemmas through Bilinear&#13;
Value Networks</title>
<link href="https://hdl.handle.net/1721.1/152745" rel="alternate"/>
<author>
<name>Ma, Jeremy</name>
</author>
<id>https://hdl.handle.net/1721.1/152745</id>
<updated>2023-11-03T03:54:33Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Achieving Robustness and Generalization in MARL&#13;
for Sequential Social Dilemmas through Bilinear&#13;
Value Networks
Ma, Jeremy
This thesis presents a novel approach for training multi-agent reinforcement learning (MARL) agents that are robust to different unforeseen gameplay strategies in sequential social dilemma (SSD) games. Recent literature has demonstrated that reward shaping can not only be used to enable MARL agents to discover diverse, human-interpretable strategies with emergent qualities, but also help alleviate the issue in conventional actor-critic methods that tend to converge to suboptimal Nash equilibria in SSD games. However, agents trained through self-play typically converge and overfit to a singular Nash equilibrium. Consequently, these agents are limited to executing the specific strategy they have converged to during training, which renders them ineffective when faced with opponents employing commonly-used strategies such as tit-for-tat. This thesis proposes a method that employs a bilinear value critic that can learn an adaptive and robust strategy in SSD games through self-play with randomized reward sharing. We evaluate the efficacy of this approach on “prisoner’s buddy,” an iterated three-player variant of the prisoner’s dilemma game. Our results show that the bilinear value structure helps the critic generalize over the reward sharing manifold and leads to an adaptive agent with emergent qualities such as reputation. The results of this research highlight the ability of MARL agents to learn a general high-level policy that can effectively socialize with agents with different strategies in SSD games, despite being trained through self-play. The proposed method is scalable and has the potential to be applied to a wide range of multi-agent competitive-cooperative environments, providing insights into the design of MARL algorithms for solving social dilemmas.
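A minimal sketch of the bilinear structure named above: the critic scores a state embedding against an embedding of the reward-sharing parameters through a learned interaction matrix. Layer sizes and module names are illustrative assumptions, not the thesis's architecture.

import torch
import torch.nn as nn

class BilinearValueHead(nn.Module):
    # Sketch: V(s, w) = phi(s)^T M psi(w), where s is the observation and
    # w parameterizes the randomized reward sharing between agents.
    def __init__(self, obs_dim, share_dim, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.psi = nn.Sequential(nn.Linear(share_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.M = nn.Parameter(torch.eye(hidden))

    def forward(self, obs, share):
        # The factored form lets the critic generalize across the
        # reward-sharing manifold rather than memorize one equilibrium.
        return torch.einsum('bi,ij,bj->b', self.phi(obs), self.M, self.psi(share))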
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Adversarial Learned Soups: neural network averaging for joint clean and robust performance</title>
<link href="https://hdl.handle.net/1721.1/152744" rel="alternate"/>
<author>
<name>Huang, Brian</name>
</author>
<id>https://hdl.handle.net/1721.1/152744</id>
<updated>2023-11-03T03:31:14Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Adversarial Learned Soups: neural network averaging for joint clean and robust performance
Huang, Brian
To make computer vision models more adversarially robust, recent literature has made various additions to the adversarial training process, from alternative adversarial losses to data augmentations to the usage of large numbers of diffusion-generated synthetic samples. However, models trained for adversarial robustness often face an inherent tradeoff between performance on clean images and performance against adversarial attacks. Methods that primarily seek to boost adversarial robustness may not optimize for the best combined performance along the clean-vs.-adversarial tradeoff. We devise a method to finetune adversarially trained models for combined clean and robust performance, borrowing from the method of "model soups," where parameters within an ensemble of finetuned checkpoints are averaged to form new model weights. Such model soups have been shown to improve performance in transfer learning settings while maintaining or improving the original task performance; extending from this observation, we find that linear interpolation of adversarially robust ensemble parameters reaps similar benefits in the tradeoff between robustness and clean accuracy. Furthermore, we construct a wrapper architecture, or "learned soup," to adversarially train our interpolation coefficients for model soups, and find that, in some cases, directly training the souping coefficients leads to a more robust model than grid-searching for the coefficients. This method of adversarial learned soups can be applied in conjunction with existing methods for adversarial training, further bolstering the current arsenal of defenses against adversarial attacks.
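A condensed sketch of the souping ingredient described above: parameter-wise interpolation of finetuned checkpoints with softmax-normalized coefficients, which a "learned soup" would train rather than grid-search. It assumes floating-point parameters and identically shaped checkpoints.

import torch

def soup_state_dict(state_dicts, logits):
    # state_dicts: list of checkpoints with identical keys and shapes
    # logits: 1-D tensor; softmax turns them into convex mixing weights
    w = torch.softmax(logits, dim=0)
    return {key: sum(w[i] * sd[key] for i, sd in enumerate(state_dicts))
            for key in state_dicts[0]}

# In a learned soup, `logits` is trainable: evaluate the souped weights
# functionally and backpropagate an adversarial loss to the logits.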
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>LoopTree: Enabling Systematic and Flexible&#13;
Exploration of Fused-layer Dataflow Accelerators</title>
<link href="https://hdl.handle.net/1721.1/152743" rel="alternate"/>
<author>
<name>Gilbert, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/152743</id>
<updated>2023-11-03T04:05:13Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">LoopTree: Enabling Systematic and Flexible&#13;
Exploration of Fused-layer Dataflow Accelerators
Gilbert, Michael
Deep neural network (DNN) accelerators exploit data reuse to reduce memory traffic. Typically, DNN accelerators exploit data reuse within layers. However, there is also reuse between layers. To exploit this inter-layer reuse opportunity, fused-layer dataflow accelerators tile and buffer intermediate data between layers on-chip to benefit from inter-layer reuse while minimizing buffer size. To further reduce the buffer space requirement, some fused-layer dataflows also propose not buffering part of the tile, at the cost of recomputing the unbuffered data.&#13;
&#13;
The design space of fused-layer dataflow accelerators is large, but prior work only considers a subset of the design space. Prior works are limited in a number of ways: (1) tiling only in certain dimensions, leaving some designs unexplored; (2) limited choices of reuse/recompute which are applied uniformly to all layers, leading to increased recomputation; (3) not exploring the interaction of tiling and reuse/recompute choices; and (4) applying the same design choices for all layers in the DNN despite diverse layer shapes, which call for different choices.&#13;
&#13;
To address these limitations, we propose (1) a more extensive design space, (2) a taxonomy that introduces structure into the design space, and (3) a fast, flexible, analytical model, called LoopTree, to evaluate the latency, energy consumption, buffer space requirements, and bandwidth requirements of designs in this design space. Finally, we present case studies enabled by LoopTree that show how exploring this larger space results in designs that require less buffer space (e.g., up to 7.6× buffer space reduction for the same off-chip transfers).
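To make the buffer-vs-recompute choice concrete, here is a toy cost count for fusing two 1-D convolutions; the formulas are illustrative only and are not LoopTree's analytical model:

    def fused_tile_costs(tile, k, buffer_halo=True):
        # Fusing two 1-D convolutions with kernel size k: producing an
        # output tile of `tile` elements needs an intermediate tile of
        # tile + k - 1 elements.
        inter = tile + k - 1
        if buffer_halo:
            buffer_elems = inter      # keep the full intermediate tile on-chip
            recompute = 0
        else:
            buffer_elems = tile       # drop the halo...
            recompute = k - 1         # ...and recompute it for each tile
        return buffer_elems, recompute

    print(fused_tile_costs(64, 5, buffer_halo=True))   # (68, 0)
    print(fused_tile_costs(64, 5, buffer_halo=False))  # (64, 4)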
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Application of Graph of Convex Sets Trajectory Optimization to the Marine Robotics Domain</title>
<link href="https://hdl.handle.net/1721.1/152742" rel="alternate"/>
<author>
<name>Largaespada, Raul Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/152742</id>
<updated>2023-11-03T03:25:41Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">An Application of Graph of Convex Sets Trajectory Optimization to the Marine Robotics Domain
Largaespada, Raul Alexander
Autonomous unmanned surface vehicles (USVs) and unmanned underwater vehicles (UUVs) are becoming ubiquitous in applications exploring marine environments, and the design of path planning algorithms for these vehicles remains an open area of research. In marine environments, to save energy, a path between two points should minimize distance traveled while remaining smooth, reducing changes in speed and respecting the dynamic limits of the vehicle.&#13;
&#13;
The Graphs of Convex Sets (GCS) trajectory optimization motion planner from the MIT Robot Locomotion Group is a recently developed planner which has been demonstrated to return smooth and optimal paths navigating around complex environments filled with obstacles, but this planner has not been applied to marine environments. The early successes of the GCS planner and the smoothness of the trajectories returned suggest that GCS could be effectively applied to USV and UUV path planning.&#13;
&#13;
This project implemented the GCS planner as part of the MOOS-IvP software suite for autonomous marine robotics. The robustness of the trajectories returned from GCS was evaluated via Monte Carlo trials on a simulated USV traversing a field of randomized known and unknown obstacles. The performance of GCS was compared against alternate planners implementing the D* Lite algorithm or relying only on existing MOOS-IvP obstacle avoidance capabilities, running in the same simulation environment.&#13;
&#13;
In testing, the GCS planner was not as successful as the D* Lite planner in navigating dense obstacle fields, but it returned smoother and shorter paths than D* Lite that were easier for the vehicle to follow. Testing also suggested future modifications to the GCS planner that could further increase its robustness when applied to USVs operating in dense obstacle fields.&#13;
&#13;
All code developed for this project may be found at: https://github.com/rlargaespada/moos-ivp-monte-carlo.
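As an illustration of the kind of smoothness comparison such an evaluation could use, a hypothetical metric (a stand-in, not necessarily the one used in the thesis or repository):

    import math

    def path_smoothness(waypoints):
        # Total absolute heading change along a piecewise-linear path of
        # (x, y) waypoints; lower means smoother and easier to follow.
        headings = [math.atan2(y2 - y1, x2 - x1)
                    for (x1, y1), (x2, y2) in zip(waypoints, waypoints[1:])]
        total = 0.0
        for h1, h2 in zip(headings, headings[1:]):
            d = abs(h2 - h1)
            total += min(d, 2 * math.pi - d)   # wrap angle differences
        return total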
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sediment Erosion and Deposition Within Mangrove Forests</title>
<link href="https://hdl.handle.net/1721.1/152741" rel="alternate"/>
<author>
<name>Deitrick, Autumn Rose</name>
</author>
<id>https://hdl.handle.net/1721.1/152741</id>
<updated>2023-11-03T03:29:37Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Sediment Erosion and Deposition Within Mangrove Forests
Deitrick, Autumn Rose
Mangroves are highly productive ecosystems that sequester carbon both in their own biomass and by trapping carbon-rich sediment imported from outside the forest and deposited within it. Aboveground biomass, like mangrove pneumatophores (i.e., aerial roots), creates conditions that facilitate sediment deposition by enhancing drag and slowing currents near the bed. However, pneumatophores also generate turbulence that enhances turbulent kinetic energy (TKE), which can promote sediment resuspension. Two studies were conducted to better understand the impacts of pneumatophore-generated turbulence on sediment transport. The first study investigated whether pneumatophore-generated turbulence impacted the erosion threshold and rate of natural cohesive sediment collected from a black mangrove habitat. Sediment cores with intact belowground and aboveground biomass were placed in a recirculating channel. Pneumatophores were removed from one side of each core. Each side of the core, with and without pneumatophores, was separately exposed to the same sequence of channel velocities. Although the presence of pneumatophores significantly enhanced the turbulence in the channel, the bed stress, threshold for sediment resuspension, and rate of sediment erosion were similar for the bare and vegetated sides of each core. This result differs from non-cohesive sediments, for which pneumatophore-generated turbulence has been found to increase erosion rates. The second study considered deposition. Laboratory experiments measured TKE and net deposition of non-cohesive sediment in bare and vegetated channels. For the same velocity, as pneumatophore density increased, TKE increased and net deposition decreased. The impact of TKE on deposition was described in terms of a deposition probability model. This model was used to predict deposition over a range of typical mangrove field conditions, which indicated that pneumatophore-generated turbulence can facilitate the delivery of sediment farther into the mangrove forest. Understanding how pneumatophores impact the balance of the competing processes of deposition and erosion is critical for improving the assessment and modelling of sediment retention and carbon storage in mangrove forests.
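A loudly hypothetical sketch of what a TKE-dependent deposition probability model can look like; the logistic form, threshold, and parameters below are illustrative assumptions only, not the model fit in the thesis:

    import math

    def net_deposition(conc, settling_vel, tke, tke_crit=1e-4, k=2.0):
        # Settling flux w_s * C, scaled by a probability of deposition that
        # decays as turbulent kinetic energy rises past a critical value.
        p_dep = 1.0 / (1.0 + math.exp(k * (tke / tke_crit - 1.0)))
        return settling_vel * conc * p_dep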
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Assessing the technical feasibility of converting U.S. salt&#13;
caverns used for natural gas storage into hydrogen&#13;
storage facilities</title>
<link href="https://hdl.handle.net/1721.1/152740" rel="alternate"/>
<author>
<name>Paca, Edgar</name>
</author>
<id>https://hdl.handle.net/1721.1/152740</id>
<updated>2023-11-03T03:27:30Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Assessing the technical feasibility of converting U.S. salt&#13;
caverns used for natural gas storage into hydrogen&#13;
storage facilities
Paca, Edgar
The 2015 Paris Agreement laid the foundation for the current momentum in renewable energy, leading to a significant increase in availability. According to the International Energy Agency (IEA), by 2022, the world was on track to add nearly 2,400 GW of renewable energy in the next five years, equivalent to what was achieved in the past two decades. This investment in renewables aims to reduce global emissions and limit temperature rise to below 1.5 degrees Celsius by 2050.&#13;
&#13;
Hydrogen is emerging as a crucial energy carrier, particularly for hard-to-decarbonize sectors such as heavy-duty transportation, cement production, iron and steel manufacturing, chemicals, and building materials. While progress has been groundbreaking in wind and solar energy, the issue of large-scale energy storage remains persistent. The intermittent nature of wind and solar power requires a storage medium capable of handling seasonal variations, similar to underground salt caverns used as natural gas reservoirs since 1961.&#13;
&#13;
In light of these challenges, this thesis examines the possibility of repurposing existing U.S. natural gas storage salt caverns into hydrogen storage facilities. By exploring this approach, we can utilize the established infrastructure and leverage the extensive knowledge gained from decades of natural gas storage. This can potentially accelerate the adoption of hydrogen as a clean and sustainable energy alternative.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Advancing Industrial Decarbonization: Evaluating the State of Organic Membrane Technology for Class-Based Hydrocarbon Separation</title>
<link href="https://hdl.handle.net/1721.1/152739" rel="alternate"/>
<author>
<name>Cochran, Corinne S.</name>
</author>
<id>https://hdl.handle.net/1721.1/152739</id>
<updated>2023-11-03T03:55:13Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Advancing Industrial Decarbonization: Evaluating the State of Organic Membrane Technology for Class-Based Hydrocarbon Separation
Cochran, Corinne S.
Industrial organic chemical separations are major contributors to carbon dioxide (CO$_2$) emissions in the energy industry, contributing to global temperature rise. As membrane separations have successfully reduced energy requirements in water purification and desalination, their application in organic separations, a challenging area to decarbonize, is gaining attention. With gas phase organic membrane separations being installed at the industrial scale, liquid separations remain the next frontier for industrial decarbonization.&#13;
&#13;
This thesis begins with an exploration of the history and current developments in membrane technology, focusing on enhancing membrane applications for liquid organic hydrocarbon separations. The objective is to showcase technological advancements that address the existing limitations of semi-permeable membrane systems in organic liquid hydrocarbon separation processes.&#13;
&#13;
This work then presents a first-order thermodynamic screening method to determine the suitability of membrane separations for different liquid separation processes in a refinery. The method is specifically applied to a data set of gasoline and lighter hydrocarbon separations executed following a fluid catalytic cracking (FCC) operating unit.&#13;
&#13;
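One common first-order thermodynamic screen is the minimum reversible work of separation; a sketch, assuming ideal-solution behavior and complete separation into pure products (which may differ from the thesis's exact screening method):

    import math

    R = 8.314  # J/(mol K)

    def min_separation_work(x, T=298.15):
        # Minimum (reversible) work to fully separate an ideal liquid
        # mixture with mole fractions x, per mole of feed:
        # W_min = -R T sum(x_i ln x_i). Real membrane duty will be higher.
        return -R * T * sum(xi * math.log(xi) for xi in x if xi > 0)

    print(min_separation_work([0.5, 0.5]))  # ~1718 J/mol for an equimolar binary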
First, the findings highlight that non-polymer based membrane materials offer improved durability and performance. Second, preferential separations for liquid organic membrane applications involve feed compositions with a higher percentage of material above the intended molecular weight cut-off (MWCO) for separation. Third, combining membrane and traditional distillation separation methods in brownfield constructions is observed to mitigate distillation overhead limitations.&#13;
&#13;
Lastly, this work identifies areas for improvement and recommends technological advancements to further the industrial adoption of semi-permeable membrane installations, enhancing the potential for widespread implementation and significant environmental impact.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Play taxonomies: A toy index for product design</title>
<link href="https://hdl.handle.net/1721.1/152737" rel="alternate"/>
<author>
<name>Rossikopoulou Pappa, Styliani</name>
</author>
<id>https://hdl.handle.net/1721.1/152737</id>
<updated>2023-11-03T03:02:14Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Play taxonomies: A toy index for product design
Rossikopoulou Pappa, Styliani
This research delves into the diverse landscape of play categorizations, spanning historical foundations to contemporary perspectives, with a focus on its significance in toy design. Drawing insights from prominent scholars and classification frameworks, this study introduces an approach for use during the toy design process and for nurturing constructive design critique. &#13;
&#13;
Beginning with Johan Huizinga's foundational dual classification of play, the groundwork is laid for comprehending the contest and representation forms of play. Jean Piaget's developmental viewpoint is explored next, underscoring the progressive nature of play categories and their pivotal role in children's cognitive and social development. Roger Caillois's taxonomy illuminates the spectrum of play types, uncovering the intricate interplay between human behavior and culture. Sara Smilansky's observations in child development further shed light on how play influences cognitive, social, and emotional growth.&#13;
&#13;
A comprehensive toy product index emerges as a central outcome, offering a structured framework for evaluating and categorizing toy products, transcending traditional play value assessments. By encompassing attributes such as affect, miniaturization, assembly, simulation, craft, education, event-oriented toys, and collectibles, the index equips toy designers, educators, and users to explore, compare, and critique products. The study details a methodical approach to data collection, categorization, database construction, and validation, while acknowledging inherent limitations and envisioning future refinements. &#13;
&#13;
Ultimately, this study aims to bridge the gap between theoretical play classifications and their practical implications in design, to enhance the toy design process and foster a culture of informed design critique. By intertwining play categorizations with innovative design methodologies, this research aims to provide a deeper understanding of toys’ significance in material culture. The toy product index emerges as a useful tool, promoting informed exploration, collaborative ideation, and innovative thinking within the realm of toy design.&#13;
&#13;
Keywords: play categorizations, toy taxonomy, play attributes, affect, miniaturization, assembly, simulation, craft, education, event-oriented toys, collectibles, toy product index.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Acoustic Minimization of Ocean Twilight Zone Vehicle, Mesobot</title>
<link href="https://hdl.handle.net/1721.1/152736" rel="alternate"/>
<author>
<name>Davis, Cameron J.</name>
</author>
<id>https://hdl.handle.net/1721.1/152736</id>
<updated>2023-11-03T03:24:51Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Acoustic Minimization of Ocean Twilight Zone Vehicle, Mesobot
Davis, Cameron J.
The ocean’s twilight zone (OTZ) is one of the most unexplored regions of the Earth’s oceans. The OTZ is defined as the region of the water column between 200 and 1,000 meters in depth. It plays a vital role in the global carbon cycle, pushing carbon from the surface layer into the deep ocean. It has a very diverse population of fauna, known and unknown, that migrate up and down the water column to feed and reproduce. The migration pattern is driven by the amount of sunlight radiated into the water column. The mid-water column vehicle, Mesobot, was designed to mimic the migration patterns of mesopelagic organisms. Unmanned Underwater Vehicles (UUVs) have been a staple of ocean exploration for years, going where humans cannot. Although much quieter than noise from shipping traffic, the noise radiated from Mesobot could introduce error into observation, tracking, and sampling. In this thesis, I have analyzed the effect of commutation methods and propeller design on the acoustic noise radiated from a single BlueRobotics T200 thruster. The propeller design choices are a standard three-blade propeller and a three-blade toroidal propeller. The commutation methods analyzed are trapezoidal control and field-oriented control. After analyzing four different alternatives, quantitative evidence was found to recommend field-oriented control as the commutation scheme to minimize the radiated noise from the thrusters on Mesobot. The radiated noise from the thruster was dominated by motor noise, and no conclusive evidence was found to recommend the three-blade propeller over the toroidal propeller.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Queueing System Analysis in Oil and Gas Abandonment Operations</title>
<link href="https://hdl.handle.net/1721.1/152731" rel="alternate"/>
<author>
<name>Monnig, Jonathan R.</name>
</author>
<id>https://hdl.handle.net/1721.1/152731</id>
<updated>2023-11-03T03:24:08Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Queueing System Analysis in Oil and Gas Abandonment Operations
Monnig, Jonathan R.
Oil and Gas (O&amp;G) well abandonments are crucial in ensuring environmental and regulatory compliance in the industry. This thesis characterizes an O&amp;G well abandonment system and process using well-plugging regulations from six major hydrocarbon-producing states in the United States. A comprehensive process flow diagram depicting the O&amp;G well abandonment process is presented. The well abandonment process is further characterized as a queueing system and a queueing network model is developed. &#13;
&#13;
The study introduces four job prioritization classes to explore the impact of prioritization and priority queues on the defined system performance metrics. Due to the complexity of the system, and instances when the server capacity required exceeds the capacity available, traditional queueing equations are inadequate, necessitating the use of simulation. The simulation, implemented using Python and the SimPy library, assesses the system's behavior and efficiency. The functionality of the simulation is demonstrated through five insights that explore varying architectures of the developed queueing system, encompassing server prioritization schemes, service channel configurations, job priority compositions, review periods, and dynamic server counts.&#13;
&#13;
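A minimal SimPy sketch of a priority queue of abandonment jobs contending for plugging crews; the rates, priorities, and structure are illustrative placeholders rather than the thesis's model:

    import random
    import simpy

    def well_job(env, name, priority, crews):
        arrive = env.now
        with crews.request(priority=priority) as req:  # lower value = higher priority
            yield req
            print(f"{name} (priority {priority}) waited {env.now - arrive:.1f} days")
            yield env.timeout(random.expovariate(1 / 20.0))  # plugging service time

    random.seed(0)
    env = simpy.Environment()
    crews = simpy.PriorityResource(env, capacity=2)  # two plugging crews
    for i in range(10):  # for brevity, all wells are queued at time zero
        env.process(well_job(env, f"well-{i}", random.choice([0, 1, 2, 3]), crews))
    env.run()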
This analysis employs queueing theory to model the stochastic behavior of the O&amp;G well abandonment process, emphasizing the need for simulation. The resultant model identifies dominant queueing system architectures, including combined service channel configurations, priority queues, and review periods that reduce average throughput times and variability of prioritized jobs in the well abandonment process.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Direct Air Capture as a Carbon Removal Solution: Analyzing Scale-Up, Cost Reduction, and Pathways for Acceleration</title>
<link href="https://hdl.handle.net/1721.1/152729" rel="alternate"/>
<author>
<name>DiMartino, Brooke B.</name>
</author>
<id>https://hdl.handle.net/1721.1/152729</id>
<updated>2023-11-03T03:37:18Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Direct Air Capture as a Carbon Removal Solution: Analyzing Scale-Up, Cost Reduction, and Pathways for Acceleration
DiMartino, Brooke B.
In addition to drastic reductions in global carbon dioxide emissions, the Intergovernmental Panel on Climate Change has stated with high confidence that carbon dioxide removal will be needed to meet the Paris Agreement temperature goals. Direct air capture is a novel carbon removal technique that is gaining attention for its potential contribution to the portfolio of carbon removal solutions. As its primary barrier to deployment is high costs, there is a focus on understanding how this technology could reach lower costs by mid-century.&#13;
&#13;
This thesis uses technological change theory to investigate potential scale-up and cost reduction forecasts for existing direct air capture methods. The literature review provides context for carbon dioxide removal, direct air capture, and technological change theory. Analogous technologies are reviewed for cost-reduction drivers and compared to the common direct air capture methods. This comparison is used for learning and improvement rate analysis to estimate cost reduction forecasts for mature direct air capture methods, then used to identify levers that direct air capture stakeholders can deploy to accelerate scale-up and cost reductions.&#13;
&#13;
The results suggest solid sorbent direct air capture (S-DAC) could achieve costs of $100-$400/tonCO2 by 2050, while liquid solvent direct air capture (L-DAC) may reach $100-$220/tonCO2 in the same period. For the base assumptions investigated, S-DAC reaches the 45Q U.S. tax credit threshold in 2041 using a single-factor improvement rate analysis and in 2040 using a component-based one. L-DAC reaches the threshold in 2034 for single-factor and in 2037 for component-based improvement rates. Neither method reaches the threshold using a single-factor or component-based learning rate analysis under base assumptions.&#13;
&#13;
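The single-factor analysis rests on the standard experience-curve relation; a sketch with illustrative numbers (the actual base costs, learning rates, and capacities in the thesis differ):

    import math

    def experience_curve_cost(c0, q0, q, learning_rate):
        # Cost falls by learning_rate (e.g. 0.15 for 15%) with every
        # doubling of cumulative deployed capacity q.
        b = -math.log(1 - learning_rate, 2)
        return c0 * (q / q0) ** (-b)

    # e.g. $600/tCO2 at 0.01 MtCO2/yr deployed, projected to 1 MtCO2/yr:
    print(experience_curve_cost(600.0, 0.01, 1.0, 0.15))  # ~ $204/tCO2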
The analog analysis emphasizes the importance of a variety of direct air capture stakeholders in accelerating the technology’s scale-up and cost reductions. Policymakers can develop standards for measurement, reporting, and verification of carbon dioxide removal. The private sector can set clear requirements for carbon removal purchases focusing on proven, durable, measurable methods with clear paths for cost reductions. Direct air capture providers can focus on early design choices that enable cost reductions and work to build economies of scale in manufacturing. The findings indicate that the technology may reach cost-competitive thresholds by mid-century and that stakeholders across the direct air capture ecosystem have opportunities to accelerate this transition.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Self-Supervised Learning through Transformations in Higher Activation Space</title>
<link href="https://hdl.handle.net/1721.1/152728" rel="alternate"/>
<author>
<name>Gabrielsson, Rickard Brüel</name>
</author>
<id>https://hdl.handle.net/1721.1/152728</id>
<updated>2023-11-03T03:50:49Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Enhancing Self-Supervised Learning through Transformations in Higher Activation Space
Gabrielsson, Rickard Brüel
We introduce Deep Augmentation, an approach to data augmentation using dropout to dynamically transform a targeted layer within a neural network, with the option to use the stop-gradient operation, offering significant improvements in model performance and generalization. We demonstrate the efficacy of Deep Augmentation through extensive experiments on contrastive learning tasks in computer vision and NLP domains, where we observe substantial performance gains with ResNets and Transformers as the underlying models. Our experimentation reveals that targeting deeper layers with Deep Augmentation outperforms augmenting the input data, and the simple network- and data-agnostic nature of this approach enables its seamless integration into computer vision and NLP pipelines.
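A minimal sketch of one reading of this idea, assuming a backbone split around the targeted layer; the stop-gradient placement and plain dropout choice are assumptions, not the exact published recipe:

    import torch
    import torch.nn as nn

    class DeepAugmentation(nn.Module):
        # Wraps a backbone split into `pre` and `post` and applies dropout
        # to the activations at the targeted depth, optionally behind a
        # stop-gradient.
        def __init__(self, pre, post, p=0.5, stop_gradient=False):
            super().__init__()
            self.pre, self.post = pre, post
            self.drop = nn.Dropout(p)
            self.stop_gradient = stop_gradient

        def forward(self, x):
            h = self.pre(x)            # activations at the targeted layer
            h_aug = self.drop(h)       # transformation in activation space
            if self.stop_gradient:
                h_aug = h + (h_aug - h).detach()  # block gradients through the perturbation
            return self.post(h_aug)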
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Anomaly Detection in Collider Physics via Factorized Observables</title>
<link href="https://hdl.handle.net/1721.1/152725" rel="alternate"/>
<author>
<name>Wynne, Raymond</name>
</author>
<id>https://hdl.handle.net/1721.1/152725</id>
<updated>2023-11-03T03:16:32Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Anomaly Detection in Collider Physics via Factorized Observables
Wynne, Raymond
To maximize the discovery potential of high-energy colliders, experimental searches should be sensitive to unforeseen new physics scenarios. This goal has motivated the use of machine learning for unsupervised anomaly detection. In this paper, we introduce a new anomaly detection strategy called FORCE: factorized observables for regressing conditional expectations. Our approach is based on the inductive bias of factorization, which is the idea that the physics governing different energy scales can be treated as approximately independent. Assuming factorization holds separately for signal and background processes, the appearance of non-trivial correlations between low- and high-energy observables is a robust indicator of new physics. Under the most restrictive form of factorization, a machine-learned model trained to identify such correlations will in fact converge to the optimal new physics classifier. We test FORCE on a benchmark anomaly detection task for the Large Hadron Collider involving collimated sprays of particles called jets. By teasing out correlations between the kinematics and substructure of jets, FORCE can reliably extract sub-percent signal fractions. This strategy for uncovering new physics adds to the growing toolbox of anomaly detection methods for collider physics with a complementary set of assumptions.
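A cartoon of the factorization test, with synthetic data and a generic regressor standing in for the actual FORCE training setup:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    low = rng.uniform(0, 1, size=(50000, 1))   # low-energy observable (e.g. kinematics)
    high = rng.normal(0, 1, size=50000)        # high-energy observable (e.g. substructure)

    # If signal and background each factorize, the conditional expectation
    # of high given low is flat on a background-dominated sample; learned
    # structure flags new physics.
    model = GradientBoostingRegressor().fit(low, high)
    explained = np.var(model.predict(low)) / np.var(high)
    print(explained)   # near 0 for this factorized (pure-noise) example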
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design for Cost Methodology Applied to High Temperature Gas-Cooled Reactors</title>
<link href="https://hdl.handle.net/1721.1/152723" rel="alternate"/>
<author>
<name>Venneri, Lorenzo</name>
</author>
<id>https://hdl.handle.net/1721.1/152723</id>
<updated>2023-11-03T03:46:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design for Cost Methodology Applied to High Temperature Gas-Cooled Reactors
Venneri, Lorenzo
SpaceX’s Falcon 9 is like a Toyota Corolla – an order of magnitude cheaper than competitors’ “high performance” rocket systems, the Ferraris, but achieving the same basic transport requirements with greater reliability and safety. Before Falcon, space launch was a Ferrari-like industry, with handmade, highly specialized, extremely expensive vehicles targeting government customers and fully complicit in the inefficiencies of government contracting. Similarly, the nuclear industry produces and still designs Ferrari-like fission reactors, with high performance metrics in terms of power density and unit power, at a megaproject scale, but with high system and operational complexity, extreme development cost, numerous part counts, and very low production and deployment rates that still require human-machine interfaces to meet societal safety objectives. The demand for nuclear Ferraris in the U.S., particularly within non-traditional energy utilities, is very low, as few competent utilities want unique reactors with such high capital costs, running at such high power that low probability accidents can have offsite consequences. Where is the nuclear Corolla?&#13;
&#13;
In the pursuit of energy systems that are cost effective and widely deployable, this thesis specifies a nuclear reactor architecture called the Class A HTGR (CA-HTGR), where Class A refers to the passive safety class during decay heat cooling. The architecture is used in a coupled design and cost approach to search for Levelized Cost of Electricity (LCOE) minimizing designs. The feasibility and utility of this design for cost (DFC) methodology, also termed economics by design, are shown through assessment of advanced manufacturing opportunities and LCOE minimizing designs.&#13;
&#13;
Section 1 introduces the history and status quo of fission energy, providing a perspective on the stalled industry and possible paths forward, motivated by the rapid expansion and success of the space launch industry. A comparison is made between nuclear and natural gas, suggesting possible cost reductions in a rebooted nuclear industry. Because of the unusual, black swan risk associated with nuclear, the starting point for massive cost reductions and widescale deployment is a new safety paradigm that reduces risk through consequence reduction. Through a technology description and down selection in Section 2, the CA-HTGR is shown to be the most effective architecture available for reducing consequence and mitigating hazards pertinent to nuclear fission reactors.&#13;
&#13;
Nuclear reactor design has historically been a painful, one-off process with limited opportunity for optimization, iteration, and design exploration. Connections between design parameters and value functions like LCOE are often unclear or missing altogether. The wide-ranging disciplines, the timelines and development costs involved, and the barriers to change combine to form a complex design process that often leads to siloed subsystem teams, leaves little room for optimization, iteration, or integrated design, and favors design by regulation, tradition, and sunk cost.&#13;
&#13;
As an alternative to the traditional nuclear design approach, Section 2 introduces DFC methods made up of design, cost, and search codes. Instead of one-off labor-intensive estimates, DFC aims to automate estimates over a wide range of the design space with the end goal of LCOE minimization. Section 3 presents the design code and describes the models and assumptions used to specify an HTGR concept design, including models for core energy content, power rating, reactor vessel geometry, and balance of plant. Section 4 presents the cost code which includes estimates for CAPEX, OPEX, and project LCOE. Section 5 describes model uncertainty and design rankings, discussing the utility of each and possible methods for their estimation.&#13;
&#13;
Advanced manufacturing (AM) and its potential use cases for nuclear fission are introduced in Section 6. DFC methods are used to evaluate the cost effects on an HTGR baseline. Rather than attempt detailed and high uncertainty cost estimation of advanced manufacturing methods, ranges of costs and performance factors were reported together with dependent LCOE changes. The results suggest various opportunities for AM and the utility of coupled design and cost estimation for evaluating the potential impacts of AM opportunities.&#13;
&#13;
Finally, Section 7 presents the use of DFC methods to examine the design and cost space. A wide range of cost outcomes were found through random sampling of the design space. Genetic algorithms were used to search the design space for LCOE minimizing designs, establishing the feasibility of DFC methods for HTGRs.&#13;
&#13;
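As a toy illustration of the search step, random sampling over a two-parameter design space against a placeholder cost function; the thesis's design and cost codes and genetic-algorithm search are far richer than this:

    import random

    def lcoe(design):
        # Placeholder stand-in for the coupled design and cost codes, which
        # estimate CAPEX, OPEX, and LCOE from a CA-HTGR design.
        power_mwe, vessel_diam_m = design
        return 40.0 + 2000.0 / power_mwe + 15.0 * vessel_diam_m

    random.seed(0)
    samples = ((random.uniform(10, 300), random.uniform(2, 6))
               for _ in range(10000))
    best_cost, best_design = min((lcoe(d), d) for d in samples)
    print(best_cost, best_design)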
The DFC methods developed and utilized for this thesis can be used to improve the delivery of cost competitive nuclear fission reactors for planet-wide deployment. DFC methods provide a system-focused approach that considers design interdependencies, allowing for optimization. The main shortcomings of the reported DFC methods include low fidelity design and cost approximations that may not match the reality of an HTGR. Optimizing on a simplified model can be useful because financial commitments are often made using similar or even simpler models. DFC methods could be used to quickly produce cost minimizing designs for a given population of end users and projects. In the future, nuclear projects can be accelerated by using DFC methods in conjunction with nuclear analysis codes, templating codes, and language models to automatically produce design and licensing documentation.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Planning for the Margins: Mapping Conceptual Implications of Profound Intellectual Disability and Informality through Slovo Park, Johannesburg</title>
<link href="https://hdl.handle.net/1721.1/152721" rel="alternate"/>
<author>
<name>Ansari, Natasha</name>
</author>
<id>https://hdl.handle.net/1721.1/152721</id>
<updated>2023-11-03T03:37:01Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Planning for the Margins: Mapping Conceptual Implications of Profound Intellectual Disability and Informality through Slovo Park, Johannesburg
Ansari, Natasha
Disability remains one of the most marginalized considerations within urban planning and social justice research and practice. Disability affords planning the critical conceptual lens of interdependence, moving beyond ideas of individualized independence. Interdependence is an especially salient provocation for how we live in today’s world, shaped by COVID-19 and global crises brought on by climate change, meaningful work and livable wages, generative AI, and future pandemics. This thesis focuses on the challenges of urban planning for and with people with profound intellectual disabilities in informal and impoverished Global South contexts as an acute, but nonetheless pervasive, example of the need for and precarity of interdependence. Drawing primarily from fieldwork in the informal settlement of Slovo Park, Johannesburg, this thesis aims to calibrate what it means to “plan for the margins” in situations of compounded vulnerability and resource scarcity. In doing so, it documents vitally important kin and care networks existentially challenged by neoliberal market forces. It argues that profound disability ought to be a central planning concern, informing how we transform social relations and build infrastructures of care that center deep vulnerability.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Collective Speculators’ Playbook Subversive Market Forms for Common Wealth</title>
<link href="https://hdl.handle.net/1721.1/152720" rel="alternate"/>
<author>
<name>Ofer, Tamar M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152720</id>
<updated>2023-11-03T03:44:58Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Collective Speculators’ Playbook Subversive Market Forms for Common Wealth
Ofer, Tamar M.
In 2016, a remarkable collaboration between the State of New York and the city’s largest landlord redirected a staggering $1.6 billion from neighborhoods hosting the highest unemployment rates along the Harlem River to Midtown's Hudson Yards, marking the costliest privatized development in American history. Through a dizzying array of financial tactics, regulatory loopholes, and spatial manipulations used to fund this unprecedented development, the axis stretching between Hudson Yards and Harlem River Yards exposes a dominant state-sponsored speculative urban development model in New York - one often operating above the designer’s head.&#13;
&#13;
This thesis draws the practices, policies, and protocols of the speculative urban development model across two urban projects. It identifies two key actors: the developer, who speculates on financial returns, and the designer, who speculates on spatial forms. The gap between them can then be termed the ‘speculative spectrum’. In it, the developer, adept at capturing value but often lacking the socio-spatial literacy to create it in situ, contrasts with the designer, proficient in creating value but lacking the mechanisms to capture it. This dichotomy draws the required leverage points to intervene in an existing system design that underpins prevalent forms of urban speculation, paving the way for a local cross-sector partnership in the Mott Haven-Port Morris neighborhood of Harlem River Yards.&#13;
&#13;
In collaboration with a civic coalition, uncovered and invented speculations are mobilized and reconfigured to assemble a collective ‘playbook’ for co-disciplinary forms of urban speculation. It compiles a repository of ‘plays’ into a design portfolio of spatial, economic, and political plays that re-imagine Harlem River Yards as a collective waterfront rather than an industrial wasteland. In doing so, it posits that design must not only actively reengage with current forms of real estate speculation but must strategically reposition its practices as a design project in and of itself.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Drawing as Programming Language</title>
<link href="https://hdl.handle.net/1721.1/152719" rel="alternate"/>
<author>
<name>Huang, Lingdong</name>
</author>
<id>https://hdl.handle.net/1721.1/152719</id>
<updated>2023-11-03T03:59:22Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Drawing as Programming Language
Huang, Lingdong
Drawing has always been a powerful tool for humans to communicate information and express themselves. While numerous programming languages have previously been designed to break from the traditional text-based linear approach, the idea of using drawings as a means of computation still presents many exciting and novel opportunities. This thesis explores some of these possibilities by presenting a series of novel programming languages: λ-2D, a two-dimensional grid-based lambda calculus derivative that fuses diagrams with free-hand drawings; Nor-wires, a minimalistic, symbol-less language based on NOR gates whose semantics are inferred solely from the topology of the drawn lines; The Languages of Primitives, where the spatial relationships and inherent properties of fundamental shapes come into play to build programming constructs; as well as other experiments on form and animation that relate drawings to computation. The goal of these experiments is to create unusual, playful, and interesting interactions, to blur the boundaries between art and code, to open up possibilities and to inspire future programming language design, as well as to make computation more accessible. Could writing a computer program be as simple as making a drawing?
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microfabrication and characterization of a new box-shaped high frequency (7.5 MHz) low aperture 2D phased ultrasound transducer array</title>
<link href="https://hdl.handle.net/1721.1/152718" rel="alternate"/>
<author>
<name>Shuvo, Ikra Iftekhar</name>
</author>
<id>https://hdl.handle.net/1721.1/152718</id>
<updated>2023-11-03T03:19:16Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Microfabrication and characterization of a new box-shaped high frequency (7.5 MHz) low aperture 2D phased ultrasound transducer array
Shuvo, Ikra Iftekhar
Miniature electronics in wearables inspired this study, which presents a new box-shaped, high-frequency (7.5 MHz), low-aperture 2D phased sparse-array ultrasonic transducer that was designed, built, and characterized. The capacity of matrix or 2D phased arrays to generate ultrasound beams without requiring any form of motion or mechanical steering holds potential value in the biomedical sonographic domain. However, these systems need a large number of piezoelectric elements to sample the active aperture at an inter-element spacing below λ/2, necessitating a sizable transducer. To the best of our knowledge, this is the first endeavor to design and microfabricate a 7.5 MHz transducer array, based on commercial PZT-5H polycrystalline materials, as tiny as 70x70 µm per transducer with a pitch of 102 µm to maintain an inter-element separation below 50% of the wavelength. The study employs a square box-shaped structure that houses the transmitters and receivers perpendicular to each other, resulting in a reduced aperture and compact design compared to different commercial designs. This transducer not only provides a satisfactory longitudinal k33 coefficient (0.45-0.5), acoustic pressure (2.1 kPa), sound pressure level (180 dB), low Q-factor (1.19), thermal stability, and high bandwidth (5.6 MHz, 73.41%), while minimizing cross-talk (&lt;-50 dB), but also reduces the overall transducer area due to its unique sparse array configuration, resulting in a diminutive size (3.3 mm x 3.3 mm).
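The pitch claim can be sanity-checked with a two-line calculation (assuming a sound speed of about 1540 m/s in soft tissue):

    c, f = 1540.0, 7.5e6           # assumed sound speed (m/s), center frequency (Hz)
    lam = c / f                    # wavelength: ~205 um
    print(lam * 1e6, 102e-6 / lam) # ~205.3 um; pitch fraction ~0.497, just under 0.5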
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Current State of the Commercial Real Estate Office Sector</title>
<link href="https://hdl.handle.net/1721.1/152716" rel="alternate"/>
<author>
<name>Dessalines, Nick</name>
</author>
<id>https://hdl.handle.net/1721.1/152716</id>
<updated>2023-11-03T03:05:16Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Current State of the Commercial Real Estate Office Sector
Dessalines, Nick
In January 2023, approximately 50% of Manhattan office workers were in the office on an average weekday; roughly 10% of the local workforce was fully remote; and only 9% of employees were in the office five days a week. These city-level trends are also reflected at the submarket, market, and national levels. As of 2023, 13% of full-time U.S. employees work entirely from home, while 28% work a hybrid model. The Covid-19 pandemic’s impact on commercial real estate, particularly the office sector, is still being felt three years later. Due to the 2020 outbreak of the coronavirus, regulators worldwide implemented lockdowns, forcing employees to work remotely indefinitely. And to the surprise of many, this trend has continued unabated.&#13;
&#13;
The adoption of the work-from-home model by a myriad of office real estate tenants caused a significant decline in office space demand. According to commercial real estate services firm CBRE, the U.S. national office market reported 16.5 million sq. ft. of negative net absorption in Q1 2023 (the weakest quarter for office demand in two years), bringing overall vacancy up to 17.8%. The concept of remote working has long been criticized and rejected. The prevailing belief was that employees are simply not as motivated nor productive working from home as opposed to the office. Additionally, critics further argue that it is impossible to build and maintain a company office culture if your employees are not physically present in the office. Simply put: the remote work model was widely regarded and portrayed as a productivity and culture “killer”. The temporary lockdowns in 2020, however, presented a unique (and forced) opportunity for those theories to be tested. Three years later, it’s safe to say that the paradigm of traditional workspaces has undergone a seismic shift thanks to the Covid-19 pandemic. The remote-work model's benefits and limitations have largely come to light, prompting employers and employees to respond accordingly.&#13;
&#13;
With an increasing number of companies cutting down their real estate footprints, rising vacancy rates, and plummeting valuations, what exactly does the future hold for the office sector? How are investors, landlords, and tenants affected? These are some of the questions that I look to address throughout this paper, for which I’ve interviewed three highly regarded and respected industry experts.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Workforce Practices &amp; Organizational Performance in Nursing Homes: Implications for Resident Health and COVID-19 Containment</title>
<link href="https://hdl.handle.net/1721.1/152715" rel="alternate"/>
<author>
<name>Scott, K. MacKenzie</name>
</author>
<id>https://hdl.handle.net/1721.1/152715</id>
<updated>2023-11-03T03:05:07Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Workforce Practices &amp; Organizational Performance in Nursing Homes: Implications for Resident Health and COVID-19 Containment
Scott, K. MacKenzie
One in three COVID-19 deaths in the United States occurred in a nursing home, raising questions about how nursing home facilities might improve organizational performance on resident health outcomes. Though researchers have linked workforce practices to organizational performance on patient health, it is less clear whether the predictors of organizational performance look different for pandemic infection, relative to other health conditions. To address this gap, this paper links workforce practices with both pre-pandemic resident health conditions and with COVID-19 outcomes. The analysis relies on multivariate and logistic regressions using two novel datasets that link multiple administrative sources before and during the pandemic. It evaluates how workforce practices such as pay, staff hours per resident, outsourcing, and overtime relate to resident health in both contexts. Whereas estimates show that workforce practices for Registered Nurses are the primary driver of resident health before the pandemic, outsourcing is more important to predicting COVID-19 infections and mortality. Specifically, outsourcing care work before the pandemic is associated with a one percentage point decrease in COVID-19 mortality during the crisis, conditional on at least one positive case in the facility. The findings call into question widely made extrapolations from pre-pandemic research on how workforce practices may help predict pandemic spread. By evaluating multiple workforce practices in one model, the findings inform nursing home management decisions in the interest of resident health.
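The kind of specification described might be sketched as follows; the file name and variable names are hypothetical placeholders, not the thesis's actual linked administrative data:

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("nursing_home_panel.csv")   # hypothetical linked dataset
    model = smf.logit(
        "covid_death ~ outsourced_care + rn_hours_per_resident + pay + overtime",
        data=df,
    ).fit()
    print(model.summary())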
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tuning Into The Planet: Scientists are collecting and archiving soundscapes before they disappear</title>
<link href="https://hdl.handle.net/1721.1/152713" rel="alternate"/>
<author>
<name>Gamillo, Elizabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/152713</id>
<updated>2023-11-03T03:36:25Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Tuning Into The Planet: Scientists are collecting and archiving soundscapes before they disappear
Gamillo, Elizabeth
Soundscapes of the world can reveal the status of an ecosystem to an ecologist, much like how a cardiologist can distinguish abnormal heart murmurs with an electrocardiogram. The effort to use the aural landscape to assess the recovery of fragile ecosystems and directly assist in those recoveries is becoming a movement within ecology. The audio recorder has become an increasingly powerful tool at a time when catastrophic heatwaves, wildfires, floods, and extreme weather increase in severity and occurrence. The soul-stirring calls of animals and the mechanical hum or roar of cars and planes, all engulfed by the swift and rhythmic sounds of wind and water, generate a distinctive score that researchers collect to note the rhythms and patterns in the cacophony of these landscapes. &#13;
 &#13;
Altered soundscapes are often the first detectable changes in an ecosystem facing threats. By strapping recorders onto trees or tripods, scientists can also track how ecosystems change in response to human disruptions, like air traffic and logging, or track biodiversity and shifts brought on by climate change. Collecting and archiving the baseline data of sounds that can be visited and studied, much like preserved specimens in a natural history museum, is crucial before they disappear or change forever. Together, these scientists are creating a record for the future. It's the sound of this moment, frozen forever as audio files. And they hope that someone can use it to travel back to the world as it was in this instant—and help preserve the ecosystems they love.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Consumer of Humans</title>
<link href="https://hdl.handle.net/1721.1/152712" rel="alternate"/>
<author>
<name>Tsann, Abdullahi</name>
</author>
<id>https://hdl.handle.net/1721.1/152712</id>
<updated>2023-11-03T03:04:51Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Consumer of Humans
Tsann, Abdullahi
For decades, a dearth of scientific research and inadequate treatment and diagnostic tools have slowed progress in the fight to control tuberculosis globally. Scientists have developed important drugs, such as isoniazid, rifampin, and pyrazinamide, against the disease. But these drugs must be taken for several months, are sometimes ineffective, and can cause debilitating side effects. What’s more, if people don’t finish their treatments, it can lead to multidrug-resistant tuberculosis (MDR-TB), a form of the disease that is resistant to two of the four common drugs against TB, or, even more worryingly, extensively drug-resistant tuberculosis (XDR-TB), a form of the disease against which broader anti-TB drugs are powerless. Now, advances in immunology, chemistry, and biomolecular engineering are helping scientists to gain better insight into the complex cellular processes of Mycobacterium tuberculosis and the disease it causes. This could pave the way for the development of innovative diagnostics, vaccines, and new treatments for these tuberculosis superbugs. This thesis examines why tuberculosis still kills millions of people to this day and why scientists’ best efforts alone can’t win the war.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Explaining Middle Power Military Intervention: Australia’s Use of Force in Maritime Southeast Asia and the South Pacific</title>
<link href="https://hdl.handle.net/1721.1/152711" rel="alternate"/>
<author>
<name>Ackert, Nicholas Wolf</name>
</author>
<id>https://hdl.handle.net/1721.1/152711</id>
<updated>2023-11-03T04:08:18Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Explaining Middle Power Military Intervention: Australia’s Use of Force in Maritime Southeast Asia and the South Pacific
Ackert, Nicholas Wolf
Why do middle powers use military force to intervene in external conflicts despite the costs and risks? While there is no shortage of literature on the causes of military intervention, most theories were derived from – and seek to explain – great power behavior. Given differences between great power and middle power material capabilities and interests, there are good reasons to anticipate that the logics of great power and middle power military intervention may not be the same. &#13;
&#13;
Understanding why middle powers use force to intervene in some external conflicts but not others is important. While the majority of military interventions are led by great powers, middle powers like Australia and South Africa have led a number of large and costly operations. Moreover, throughout the past decade, middle powers have exercised increasingly assertive and consequential efforts to either challenge or strengthen the normative, economic, and alliance-related elements of the current international order. Therefore, it behooves academics and policymakers to evaluate how middle powers frame their interests and to better understand the conditions under which they implement costly and risky policies – including the use of force – to pursue them. &#13;
&#13;
This thesis tests four competing theories of military intervention across a complete universe of post-Cold War cases associated with Australia, a state that most closely resembles the middle power ideal. Those theories are: (1) Military Intervention as Threat Response, (2) Military Intervention as a Socialized Behavior, (3) Military Intervention as Greed, and (4) Military Intervention as the Foreign Imposition of Domestic Institutions. I conclude that the theory of Military Intervention as a Socialized Behavior – which emphasizes the role of ideational incentives and defensive intentions – explains the greatest amount of variation in Australia’s behavior.&#13;
&#13;
I find that Australia intervened primarily to protect its self-image and status as a guarantor of regional security, which it had been socialized into adopting through over sixty years of security cooperation with the United States and its Pacific Island neighbors. Notably, Canberra intervened in East Timor, Papua New Guinea, and the Solomon Islands, where the outbreaks of violence were perceived as a direct consequence of Australia’s colonial and neocolonial behaviors. However, it did not intervene in Fiji, where there was no expectation that Canberra would act because the country was not labeled as a failed state and Australia had not been blamed for the unrest there. Other goals, such as deterring foreign interference and preventing the externalities of adjacent state collapse, had less influence than presumed. &#13;
&#13;
In the bigger picture, this thesis offers several contributions to the empirical and theoretical literature on military intervention and foreign policy. First, I develop an original framework for categorizing existing explanations about military intervention which facilitates easier – and replicable – comparison and testing of extant theories. Second, I demonstrate that, based on Australia's experiences, middle powers use force for reasons that differ from their great power counterparts. Thus, this project is a rejoinder to those who claim that middle powers are not differentiable from other non-great power states. Finally, I illustrate the inherent fragility of the middle power identity and reveal how easily it can be threatened by external shocks. &#13;
&#13;
Three implications, which are based largely on the Australian experience, follow. First, we should question the mainstream argument that military intervention cannot be explained by ideas and images. As states weigh the costs and benefits of intervening, potential gains and losses can refer to intersubjectively understood social facts – such as self-image, status, and credibility – as much as wealth and physical safety. Second, middle powers may be more likely to take costlier and riskier actions when their self-image and status are at stake. Finally, middle powers may find themselves caught in self-defeating cycles of intervention. The more a middle power intervenes to protect its self-image and status as a purveyor of regional security, the more that identity will solidify in its own mind and in the minds of other states, encouraging future interventions.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparison of Natural Language Processing Models for Depression Detection in Chatbot Dialogues</title>
<link href="https://hdl.handle.net/1721.1/152710" rel="alternate"/>
<author>
<name>Belser, Christian Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/152710</id>
<updated>2023-11-03T03:31:24Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Comparison of Natural Language Processing Models for Depression Detection in Chatbot Dialogues
Belser, Christian Alexander
Depression is an important challenge in the world today and a large source of disability. In the US, a recent study showed that approximately 36 million adults had at least one major depressive episode, including some with severe impairment [1]. However, approximately two-thirds of all depression cases are never diagnosed [2], largely due to a shortage of trained mental health professionals as well as a lingering cultural stigma that often prevents afflicted people from seeking professional care. In order to address this need, there is an emerging interest in using computer algorithms to automatically screen for depression, which offers the potential to be widely deployed to the public via clinical websites and mobile apps. Within this field, Dr. Fletcher’s group at MIT develops mobile platforms that are used to support mental health wellness and psychotherapy, including tools to screen for mental health disorders and refer people to treatment. As part of this work, this thesis compares three distinct Natural Language Processing (NLP) models used to screen for depression. I have revised and updated three state-of-the-art models: (1) Bi-directional gated recurrent unit (BGRU) models, (2) Hierarchical attention networks (HAN), and (3) Long-sequence Transformer models to accurately screen for depression in individuals. The models were all trained and tested on a common standard clinical dataset (DAIC-WOZ) that is derived from clinical patient interviews. After optimization, and exploring several variants of each type of model, the following results were found: BGRU (accuracy=0.71, precision=0.65, recall=0.63, F1-score=0.64, MCC=0.20); HAN (accuracy=0.77, precision=0.76, recall=0.77, F1-score=0.76, MCC=0.46); Transformer (accuracy=0.77, precision=0.76, recall=0.77, F1-score=0.76, MCC=0.43). In addition to model performance, I also compare the different categories of models based on computational resources and input token size. I also discuss the future evolution of these models and provide recommendations for specific use cases.
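A minimal sketch of the BGRU family of models, assuming pre-computed utterance embeddings as input; the dimensions and head design are illustrative, not the tuned configurations reported above:

    import torch
    import torch.nn as nn

    class BGRUClassifier(nn.Module):
        # Bi-directional GRU over a sequence of utterance embeddings,
        # followed by a binary depression-screening head.
        def __init__(self, embed_dim=300, hidden=128):
            super().__init__()
            self.gru = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, 1)

        def forward(self, x):              # x: (batch, seq_len, embed_dim)
            _, h = self.gru(x)             # h: (2, batch, hidden)
            h = torch.cat([h[0], h[1]], dim=1)
            return self.head(h)            # logit for "screens positive"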
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing the quality and breadth of carbon reduction project ideation and roadmapping for corporations</title>
<link href="https://hdl.handle.net/1721.1/152709" rel="alternate"/>
<author>
<name>Tainter, Stephen M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152709</id>
<updated>2023-11-03T03:53:17Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Enhancing the quality and breadth of carbon reduction project ideation and roadmapping for corporations
Tainter, Stephen M.
This thesis seeks to demonstrate the benefits of a systems approach to the ideation and complexity analysis of carbon reduction projects (CRPs) for heavy-industry corporations (HICs), which is critical for HICs to adapt to a lower-carbon world. The thesis uses a systems approach to improve the breadth of ideation and the robustness of complexity analysis such that sustainability teams (STs) and technical framing teams (TFTs) have a more robust queue of CRPs and options for decarbonization pathways. A steamflood operation is used as an example to demonstrate the application of system architecting tools and is decomposed into its formal and functional components, which are then recombined to generate an operand-process diagram (OPD). System boundaries are drawn within and around the OPD to ideate unique CRPs to reduce the emissions intensity of a steamflood operation, ranging from tactical solutions to alternative recovery mechanisms. The range of solution-neutral concepts (SNCs) improves an HIC's ability to brainstorm more "disruptive" architectures that will reduce the emissions intensity of its operations. The CRPs are then translated into a design structure matrix (DSM) and input into a change-propagation model to forecast how complexity differences enhance or hinder a system's ability to adapt to future technological changes. The tools demonstrated in this thesis equip STs and TFTs with insights and comparative analysis to develop near-term solutions and redesign operations for the future. Overall, this research contributes to enhancing CRP ideation and the operability of complex industrial systems by applying systems architecting tools and principles.
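
As a toy illustration of the DSM and change-propagation step described above (the components and probabilities are invented, not the thesis's model):

    import numpy as np

    # dsm[i][j]: assumed probability that a change in component j propagates
    # to component i; names and values are hypothetical placeholders.
    components = ["boiler", "steam line", "injector", "separator"]
    dsm = np.array([
        [0.0, 0.3, 0.0, 0.1],
        [0.5, 0.0, 0.2, 0.0],
        [0.0, 0.4, 0.0, 0.0],
        [0.2, 0.0, 0.3, 0.0],
    ])

    # Crude saturating propagation of a change seeded at the boiler.
    change = np.array([1.0, 0.0, 0.0, 0.0])
    for _ in range(3):
        change = np.clip(change + dsm @ change, 0.0, 1.0)
    for name, p in zip(components, change):
        print(f"{name}: reach {p:.2f}")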
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Usability Study of Nomon: A Flexible Interface for Single-Switch Users</title>
<link href="https://hdl.handle.net/1721.1/152707" rel="alternate"/>
<author>
<name>Bonaker, Nicholas Ryan</name>
</author>
<id>https://hdl.handle.net/1721.1/152707</id>
<updated>2023-11-03T03:37:36Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A Usability Study of Nomon: A Flexible Interface for Single-Switch Users
Bonaker, Nicholas Ryan
Many individuals with severe motor impairments communicate via a single switch—which might be activated by a blink, facial movement, or puff of air. These switches are commonly used as input to scanning systems that allow selection from a 2D grid of options. Nomon is an alternative interface that provides a more flexible layout, not confined to a grid. Previous work suggests that, even when options appear in a grid, Nomon may be faster and easier to use than scanning systems. However, previous work primarily tested Nomon with non–motor-impaired individuals, and evaluation with potential end-users was limited to a single motor-impaired participant. We provide a usability study following seven participants with motor impairments and compare their performance with Nomon against a row-column scanning system. Most participants were faster with Nomon in a picture selection task, while entry rates varied more in a text-entry task. However, we found participants had to click more times per selection using Nomon, motivating future research into mitigating this increased click load. All but one participant preferred using Nomon; most reported it felt faster and had better predictive text.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Identification of Atomic Propositions in English Instructions for Flexible Translation to Robot Planning Representations</title>
<link href="https://hdl.handle.net/1721.1/152706" rel="alternate"/>
<author>
<name>Gandhi, Rujul</name>
</author>
<id>https://hdl.handle.net/1721.1/152706</id>
<updated>2023-11-03T03:00:58Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Identification of Atomic Propositions in English Instructions for Flexible Translation to Robot Planning Representations
Gandhi, Rujul
Creating human-interactive problem-solving robots involves translating natural-language instructions into formal representations. This formal representation should contain all the verifiable constituent units (ideally atomic propositions) that are present in the natural language instruction. However, the format and vocabulary of atomic propositions may vary substantially across formal representations and their application domains. Hence, extracting the correct atomic propositions from natural language has been a bottleneck in converting language to formal representations. In this thesis, we propose and implement a two-step method for identifying atomic propositions in a representation-agnostic way. Given an instruction in natural English, we first identify the spans of that instruction that may potentially be atomic propositions, and then carry out a finer-grained translation into the chosen formalization language. In evaluating this approach, we demonstrate the ability of the span identification method to generalize to two common domains of robot planning tasks, navigation and manipulation, as well as three additional domains of household robot tasks. Finally, we discuss, implement, and evaluate methods to incorporate span identification into the process of parsing English into three formal representations: Temporal Logic, PDDL, and a custom style of atomic propositions. Using pretrained language models and naturalistic parallel data, we build a system that enables flexible formalization of natural language across chosen intermediate representations.
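
A toy example of the two-step pipeline (the instruction, spans, grounding, and formula are invented for illustration and are not drawn from the thesis's data):

    # Step 1: identify spans that may be atomic propositions.
    # Step 2: ground and translate them into a chosen formalism (here, LTL).
    instruction = "go to the kitchen and then pick up the cup"
    spans = ["go to the kitchen", "pick up the cup"]
    grounding = {"p": "robot_at(kitchen)", "q": "holding(cup)"}
    ltl = "F (p & F q)"  # eventually p, and after that eventually q
    print(instruction, spans, grounding, ltl, sep="\n")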
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis, Design, and Evaluation of Hierarchical Switched-Capacitor Cell Voltage Balancers</title>
<link href="https://hdl.handle.net/1721.1/152705" rel="alternate"/>
<author>
<name>Negm, Ahmad H.</name>
</author>
<id>https://hdl.handle.net/1721.1/152705</id>
<updated>2023-11-03T03:57:10Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Analysis, Design, and Evaluation of Hierarchical Switched-Capacitor Cell Voltage Balancers
Negm, Ahmad H.
With the increased utilization of and reliance on battery-powered and battery-storage systems, battery cell voltage balancers are crucial for extracting additional capacity and lifespan from battery packs. This thesis explores the analysis, design, and evaluation of a hierarchical switched-capacitor cell voltage balancer topology that employs canonical charge pump inverters as fundamental building blocks. The charge pump inverter implementation was first designed using a fully N-channel switch configuration and a mixed N- and P-channel switch configuration. This was tested at the 2S, 4S, 8S, and 32S battery configuration voltage levels for combined cell stack voltages up to 100V. Subsequently, a complete 4S multi-level implementation of the hierarchical topology was designed around distinct N- and P-channel switch configurations at the 2S and 4S levels. The control circuitry ran off a single external dual-supply by implementing discrete charge pump circuits as floating supplies for the gate drivers. Testing on emulated cells with 2.5Ah capacity, 0mΩ internal resistance, and 0.4V imbalance yielded typical balance times under 20 min. Although the topology scales moderately poorly with respect to component count and stress, it excelled at edge-to-edge cell balancing. Overall, the work in this thesis demonstrates the proposed hierarchical balancer topology at peak cell balance currents from tens of amps up to 33.56A, output powers from tens of watts up to 1.67kW, and typically &gt;90% efficiency.
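
A toy model of the basic switched-capacitor balancing mechanism that such designs build on (the component values are invented, not the thesis's circuit):

    # Each switching cycle, a flying capacitor ferries charge from the higher
    # cell to the lower one, so the imbalance decays roughly exponentially.
    C_cell, C_fly = 100.0, 1e-3   # farads: emulated cell vs. flying capacitor
    v_high, v_low = 3.9, 3.5      # volts: 0.4V initial imbalance

    for _ in range(200_000):
        q = C_fly * (v_high - v_low)  # charge moved in one switching cycle
        v_high -= q / C_cell
        v_low += q / C_cell

    print(f"residual imbalance: {v_high - v_low:.4f} V")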
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of Throughput in Sheet Metal Manufacturing by Tuning the Sheet Metal Nesting Strategy Based on Sheet Utilization and Downstream Part Handling Costs</title>
<link href="https://hdl.handle.net/1721.1/152702" rel="alternate"/>
<author>
<name>Gowra, Vineeth</name>
</author>
<id>https://hdl.handle.net/1721.1/152702</id>
<updated>2023-11-03T04:07:02Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Optimization of Throughput in Sheet Metal Manufacturing by Tuning the Sheet Metal Nesting Strategy Based on Sheet Utilization and Downstream Part Handling Costs
Gowra, Vineeth
Sheet metal fabrication has become a fundamental process in modern engineering due to its versatility and is used across a wide range of industries. Nesting a given set of sheet metal blanks onto raw material sheets is a major cost driver, as it determines the amount of usable metal; the rest of the sheet is discarded as scrap. Nesting algorithms are very effective at identifying the most efficient layout of a given set of parts to maximize sheet utilization. Hence, material utilization of the sheet is mainly determined by the number of parts being nested and their geometries. On one hand, nesting algorithms prefer a large number of grouped parts, which allows more efficient sheet metal nests due to more possible combinations of parts on a given sheet. On the other hand, the downstream sorting process, which sends the parts to their respective further processing stations, prefers a smaller number of grouped parts, since randomly nested parts increase the time spent on this non-value-add activity. Therefore, an effective nesting strategy between the two extremes is necessary to balance sheet utilization against the intensive sorting requirements, making the process cost-effective and meeting the required throughput. In this thesis, a sheet metal nesting strategy is identified for a manufacturing operation with a wide variety of products and plant locations across the globe. Cost and throughput models are produced which inform the selection of a globally optimized nesting strategy. Regional differences in cost drivers, such as varying labor rates and raw material costs, are considered, and an optimized nesting strategy is validated for deployment across global plant locations. This work provides a detailed approach to optimizing sheet utilization in sheet metal manufacturing through selection of an optimized nesting strategy.
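
A toy version of the trade-off such a cost model balances (all coefficients are invented for illustration, not taken from the thesis's data):

    # Larger nesting groups improve sheet utilization (less scrap) but slow
    # downstream sorting; the cost optimum sits between the two extremes.
    def total_cost_per_part(group_size,
                            material_cost=10.0,    # metal cost per sheet
                            base_util=0.60,        # utilization, tiny groups
                            max_util=0.85,         # asymptotic utilization
                            sort_cost_rate=0.02):  # sorting cost per unit group size
        utilization = max_util - (max_util - base_util) / group_size
        scrap_cost = material_cost * (1.0 - utilization)
        sorting_cost = sort_cost_rate * group_size
        return scrap_cost + sorting_cost

    best = min(range(1, 101), key=total_cost_per_part)
    print("cost-minimizing group size:", best)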
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Organizational Data Journey</title>
<link href="https://hdl.handle.net/1721.1/152701" rel="alternate"/>
<author>
<name>Papenfuss, Tanner</name>
</author>
<id>https://hdl.handle.net/1721.1/152701</id>
<updated>2023-11-03T04:03:21Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Organizational Data Journey
Papenfuss, Tanner
This thesis presents a comprehensive analysis of an organization's data journey, exploring its various stages and the critical aspects of each step. It serves as a tool for organizations embarking on their data journey, helping them understand where they are and how to transition to the next stage. It examines tools such as Excel and the transition to cloud-based solutions, and walks through how data-centric organizations can evolve into data-driven enterprises. Each section identifies key actions and capabilities at each stage, guiding organizations in preparing themselves before transitioning to the next phase.&#13;
&#13;
The thesis culminates in a data journey workshop to jump-start organizations' transformation. A detailed plan for conducting the workshop is presented, including securing leadership commitment and outlining the workshop agenda. Additionally, a tactical 30-60-90 day plan is proposed, providing participants and leadership with actionable steps to drive data initiatives effectively. This plan acts as a compass, guiding organizations toward their data-driven objectives while fostering a culture of continuous improvement and innovation.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Machine Learning Methods for Automated Macro-Inspection and&#13;
Improved Defect Identification in Semiconductor Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/152700" rel="alternate"/>
<author>
<name>Cheung, Sophia</name>
</author>
<id>https://hdl.handle.net/1721.1/152700</id>
<updated>2023-11-03T03:33:34Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Machine Learning Methods for Automated Macro-Inspection and&#13;
Improved Defect Identification in Semiconductor Manufacturing
Cheung, Sophia
This thesis proposes four methods to improve macro-inspection capability of defects on wafers at a semiconductor wafer fab. First, an investigation into the performance of current inspection tools is conducted, revealing results that are neither reliable nor reproducible. Tool maintenance procedures and specification adjustments are recommended. Second, a software upgrade to the current inspection software is developed, including enhanced features that address pain points of reviewing wafer images. Image processing and loading time is reduced by over 50%. Third, three binary classification machine learning models are trained to isolate spin-on-glass defects, edge-type defects, and center defects. Each of the models exhibits an area under the curve (AUC) of over 0.90 on out-of-distribution test sets. Finally, a proof-of-concept for an in-line inspection system is designed and tested on the fab floor. New images from this system appear to be of sufficient quality for inspection. The results of each part of this study can be used to inform investment decisions required to move towards a more automated process.&#13;
&#13;
Relevant to the machine learning community are the methods developed to address class imbalance in neural network training. Methods for preparing data for meaningful training, such as splitting, transforming, and creating synthetic data, are proposed. Generating data in this fashion is shown to have a positive effect, increasing the AUC of the specified model by up to 65%.
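
One generic way to create such synthetic minority-class data is interpolation between random sample pairs; a SMOTE-like sketch (the thesis's exact augmentation scheme is not reproduced here):

    import numpy as np

    rng = np.random.default_rng(0)
    minority = rng.normal(size=(20, 8))  # e.g., 20 defect-class feature vectors

    def synthesize(samples, n_new):
        # New points on random chords between pairs of minority samples.
        i = rng.integers(0, len(samples), size=n_new)
        j = rng.integers(0, len(samples), size=n_new)
        lam = rng.uniform(size=(n_new, 1))
        return samples[i] + lam * (samples[j] - samples[i])

    balanced = np.vstack([minority, synthesize(minority, 80)])
    print(balanced.shape)  # (100, 8): now comparable to a 100-sample majority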
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Stochastic Sea Ice Modeling with the Dynamically Orthogonal Equations</title>
<link href="https://hdl.handle.net/1721.1/152699" rel="alternate"/>
<author>
<name>Suresh Babu, Anantha Narayanan</name>
</author>
<id>https://hdl.handle.net/1721.1/152699</id>
<updated>2023-11-03T03:56:06Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Stochastic Sea Ice Modeling with the Dynamically Orthogonal Equations
Suresh Babu, Anantha Narayanan
Accurate numerical models are essential to predict the complex evolution of rapidly changing sea ice conditions and study impacts on climate and navigation. However, sea ice models contain uncertainties associated with initial conditions and forcing (wind, ocean), as well as with parameter values, functional forms of the constitutive relations, and state variables themselves, all of which limit predictive capabilities. Due to the multiple types and scales of sea ice and the complex nonlinear mechanics and high dimensionality of differential equations, efficient ocean and sea ice probabilistic modeling, Bayesian inversion, and machine learning are challenging. In this work, we implement a deterministic 2D viscoplastic sea ice solver and derive and implement new sea ice probabilistic models based on the dynamically orthogonal (DO) equations.&#13;
&#13;
We focus on the stochastic two-dimensional sea ice momentum equations with a nonlinear viscoplastic constitutive law. We first implement and verify a deterministic 2D viscoplastic sea ice solver. Next, we derive the new stochastic Sea Ice Dynamically Orthogonal equations and develop numerical schemes for their solution. These equations and schemes preserve nonlinearities in the underlying spatiotemporal dynamics and evolve the non-Gaussianity of the statistics. We evaluate and illustrate the new stochastic sea ice modeling and schemes using idealized stochastic test cases. We employ two stochastic test cases with different types of sea ice: ice sheets and frozen ice cover with uncertain initial velocities. We showcase the ability to evolve non-Gaussian statistics and capture complex nonlinear dynamics efficiently. We study convergence with respect to the physical discretization, and stochastic convergence with respect to the stochastic subspace size and the number of coefficient samples. Finally, we assess and show significant computational and memory efficiency compared to the direct Monte Carlo method.
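
For context, the DO methodology referenced here represents the stochastic field by a decomposition of the following generic form (standard in the DO literature, not quoted from the thesis):

    u(\mathbf{x}, t; \omega) = \bar{u}(\mathbf{x}, t)
        + \sum_{i=1}^{s} Y_i(t; \omega)\, \tilde{u}_i(\mathbf{x}, t),
    \qquad \left\langle \partial_t \tilde{u}_i, \tilde{u}_j \right\rangle = 0,

where the dynamically orthogonal condition on the modes closes the coupled evolution equations for the mean, the modes, and the stochastic coefficients Y_i.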
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Methodology for Pyrolysis-induced Thermal Runaway&#13;
Analysis in Li-ion Batteries</title>
<link href="https://hdl.handle.net/1721.1/152698" rel="alternate"/>
<author>
<name>Ramadan, Mahmoud M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152698</id>
<updated>2023-11-03T03:51:23Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Methodology for Pyrolysis-induced Thermal Runaway&#13;
Analysis in Li-ion Batteries
Ramadan, Mahmoud M.
As the adoption of lithium-ion batteries (LIBs) grows due to the demand for high-energy-density storage solutions, ensuring their safety becomes paramount. Thermogravimetric Analysis (TGA) and Differential Scanning Calorimetry (DSC), which have been tools in polymer thermal analysis since the 1950s, have seen increasing use in LIB thermal research in recent decades. However, applying these techniques to LIBs poses challenges due to the multifaceted composition of LIBs and their sensitivity to environmental conditions. This research aims to overcome the inherent limitations of TGA and DSC when applied to LIBs by introducing a robust, standardized experimental protocol to ensure accuracy and consistency. Employing TGA and DSC concurrently and using sealed crucibles with pinholes, we present a comprehensive thermal profile of next-generation LiFSI-based electrolytes, revealing behaviors that differ based on solvent choice. Our analysis discerned distinct thermal properties between LiFSI-carbonate and LiFSI-ether electrolytes. Specifically, carbonate-based electrolytes displayed a pronounced exothermic peak at 350°C, indicative of significant decomposition reactions. In contrast, the LiFSI-ether electrolyte exhibited an exothermic reaction at 210°C, followed by an endothermic event near 300°C. Such variances in thermal behavior emphasize the profound influence of solvent selection on the thermal profiles of electrolyte solutions. A techno-economic assessment of sodium-ion batteries is also presented.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Triad Interactions among Surface Waves Propagating through an Ice Sheet</title>
<link href="https://hdl.handle.net/1721.1/152697" rel="alternate"/>
<author>
<name>Pierce, Max W.</name>
</author>
<id>https://hdl.handle.net/1721.1/152697</id>
<updated>2023-11-03T03:40:25Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Triad Interactions among Surface Waves Propagating through an Ice Sheet
Pierce, Max W.
We study nonlinear resonant wave-wave interactions which occur when ocean waves propagate into a thin floating ice sheet. Using multiple-scale perturbation analysis verified against regular perturbation for short distances past the ice edge, we obtain theoretical predictions of the wave amplitude evolution as a function of distance travelled past the ice edge for a semi-infinite ice sheet. We relate the amplitude evolution to ice bending strain, which is linked to ice breakup. We show that, due to sum-frequency interactions, the maximum strain in the ice sheet can be more than twice that predicted by linearized theory. We further demonstrate that difference-frequency interactions can also result in a moderate strain increase compared to the linear result, despite transferring energy to longer wave components. This work has implications for understanding the occurrence of ice breakup and the resulting ice floe size distribution.
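
The sum- and difference-frequency interactions above satisfy the standard triad resonance conditions; in the usual notation, and neglecting ice inertia (standard theory, not quoted from the thesis):

    k_1 \pm k_2 = k_3, \qquad \omega_1 \pm \omega_2 = \omega_3,
    \qquad \omega^2 = \left( g k + \frac{D k^5}{\rho} \right) \tanh(k H),
    \qquad D = \frac{E h^3}{12 (1 - \nu^2)},

where each component satisfies the flexural-gravity dispersion relation, with D the flexural rigidity of the ice sheet, h the ice thickness, ρ the water density, and H the water depth.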
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fugitive Spaces For Cultivating Creativity: A Framework For Value-Centered Learning Environments</title>
<link href="https://hdl.handle.net/1721.1/152690" rel="alternate"/>
<author>
<name>Sadler, Cecilé</name>
</author>
<id>https://hdl.handle.net/1721.1/152690</id>
<updated>2023-11-03T03:42:18Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Fugitive Spaces For Cultivating Creativity: A Framework For Value-Centered Learning Environments
Sadler, Cecilé
In today's world, there is a growing recognition of the importance of creativity and curiosity as essential 21st-century skills. Efforts to expand access to technology-mediated creative learning experiences have become widespread. However, oppressive structures in education systems hinder and stifle the participation of all young people in these opportunities for self-expression and agency, particularly Black youth who are negatively impacted by the pervasiveness of anti-Blackness.&#13;
&#13;
This thesis proposes a framework for value-centered learning environments. Through the application of BlackCrit’s theory of anti-Blackness as a lens for analyzing experiences, the holistic design of these learning environments aims to cultivate a sense of fugitive space – purposefully constructed out-of-school spaces for practicing radical imagination. These spaces offer transformative creative learning experiences with computing, challenging anti-Blackness in education and embracing liberatory fantasy. The learning environment encompasses the tools and materials, physical space, pedagogy, and community culture. The core values – accountability, authenticity, awareness, and adaptability – are put into practice by doing the work, showing up, checking in, and embodying change.&#13;
&#13;
The exploration is conducted in the context of a local community-based grassroots organization, blackyard, that centers Black youth and their families by offering after-school programming. Through design-based approaches and critical inquiry, creative learning workshops, interviews, and immersion are employed to explore the values that underpin ways of being, knowing, and acting, fostering creative and playful learning experiences for Black youth. By centering Black youth and their communities, this thesis opens a dialogue, explores tensions, and encourages persistent dreaming about opportunities that center the humanity and dignity of the learner, celebrate curiosity and imagination, and challenge oppressive narratives in education about who is worthy and capable of the rights to creativity and play.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Peripheral: Re-Thinking the Organized Industrial Zone</title>
<link href="https://hdl.handle.net/1721.1/152688" rel="alternate"/>
<author>
<name>Sahin, Selin</name>
</author>
<id>https://hdl.handle.net/1721.1/152688</id>
<updated>2023-11-03T03:08:21Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Non-Peripheral: Re-Thinking the Organized Industrial Zone
Sahin, Selin
The Organized Industrial Zone (OIZ) in Turkey emerged in the 1960s as a primary model for industrial development. Combining base infrastructure, specific land use rules, financial incentives such as tax exemptions, and partial self-governance, the OIZ is essentially a cluster of light to medium industries. It was meant to facilitate capital flows and later became seen as a tool to drive urbanization. This thesis examines the development, consequences, and future prospects of OIZs. It traces their origins, from a Checci and Company report based on industrial estates, commissioned through an agreement between the governments of Turkey and the United States, to their rapid proliferation in the 21st century.&#13;
&#13;
Today, there are over three hundred individual sites scattered across the country. Varying in size and complexity, these zones are born from policies and regulations that have barely changed since the late 1960s. As the OIZ model stands today, both at the policy level and in practice, there are a multitude of issues related to the zones' internal organization, urban planning, environmental impacts, regional disparities, and social equity. In exploring the evolving relationships between OIZs and the urban texture, accelerated by expanding boundaries and changing paradigms in the industry, I submit that the design of OIZs should not be peripheral in our thinking.&#13;
&#13;
Selecting a particular site that is exemplary of the spatial conditions of many OIZs, I propose design interventions to address current problems and speculate on the future of these zones. The components of the proposal take the surrounding city into account, informed by an awareness of material ecologies. Through these proposals, this thesis aims to spatialize some of the hopes and narratives about these zones in the political consciousness and offer new urban visions.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Developer and Lending Risk Associated with Offsite Construction</title>
<link href="https://hdl.handle.net/1721.1/152687" rel="alternate"/>
<author>
<name>Coen Jr., William</name>
</author>
<id>https://hdl.handle.net/1721.1/152687</id>
<updated>2023-11-03T03:43:05Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Understanding Developer and Lending Risk Associated with Offsite Construction
Coen Jr., William
This thesis investigates the potential of offsite construction as an effective alternative to traditional onsite methods in the construction industry. Targeting real estate professionals, financiers, developers, construction contractors, and architects, the research aims to foster confidence and awareness in offsite techniques, specifically among project lenders. Through a combination of a literature review, interviews, workshop attendance, and site visits, the study addresses three critical research questions. First, it quantifies the project finance risk profile of offsite construction compared to traditional methods. Second, it identifies the qualitative determinants that influence lending decisions for offsite projects. Finally, it explores the data and education required for the finance industry to gain confidence in offsite construction's risk profile. The findings highlight the importance of incorporating modular offsite methods into educational curricula to create a cultural shift among industry professionals. This cultural shift can dispel misconceptions about offsite construction's quality, durability, and visual appearance, ultimately encouraging wider adoption. Moreover, lenders must conduct thorough personal due diligence when financing offsite projects, as manufacturing requires significant capital early in the timeline. Understanding the financial wherewithal of offsite manufacturers and assessing their experience in completing similar projects is crucial for mitigating risks. To facilitate offsite construction financing, industry leaders should explore innovative contractual, legal, and financial instruments. Implementing recourse provisions and enabling working capital financing for offsite manufacturers can alleviate the financial burden on developers. The Uniform Commercial Code approach could also make offsite projects more appealing to traditional lenders, enhancing their security interests during fabrication. Integrating these solutions can support and facilitate financing for offsite projects, driving increased efficiency, sustainability, and effectiveness in building practices. Overall, this thesis provides valuable insights into offsite construction, offering a comprehensive understanding of its benefits and challenges. By disseminating these findings to the target audience, the research aims to promote the widespread adoption of offsite construction and pave the way for a more innovative and sustainable future in the construction industry.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Acquirer Abnormal Returns in Listed European Real Estate M&amp;A Transactions</title>
<link href="https://hdl.handle.net/1721.1/152685" rel="alternate"/>
<author>
<name>Reimer, Clemens</name>
</author>
<id>https://hdl.handle.net/1721.1/152685</id>
<updated>2023-11-03T03:45:03Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Analysis of Acquirer Abnormal Returns in Listed European Real Estate M&amp;A Transactions
Reimer, Clemens
This thesis analyzes a sample of 70 listed European real estate M&amp;A transactions between June 2013 and June 2023. The analysis is based on three filters: target country, real estate subsegment, and payment structure. The findings reveal significant discrepancies in bid premiums compared to NAV across subsegments, with industrial segment transactions exhibiting a significant average premium of 46% and retail segment transactions occurring at an average discount of 13% to NAV. Additionally, the study finds that cash offers in the sample have higher bid premiums on average than share offers, albeit lower than the premiums in mixed payment offers. By using event study methodology, a sub-sample of 27 transactions is examined to analyze acquirer abnormal returns across multiple event windows. Consistent with prior research, the study demonstrates minor and statistically insignificant impacts on bidders’ shareholder returns. Notably, an intriguing pattern emerged when grouping the sub-sample by payment method. For the [-5/+5] and [-10/+10] event windows, transactions financed with all-cash exhibited higher cumulative average abnormal returns (CAARs) compared to all-share transactions. However, for the [-1/+1] event window, the difference between all-share and all-cash offers was relatively narrow, with slightly higher returns observed for share offers. An additional finding was that for the [-10/+10] event window, combination offers, involving both cash and shares, experienced significantly greater abnormal returns than other offer types.
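
A minimal sketch of the event-study arithmetic behind the reported CAARs (the returns below are simulated placeholders, not the thesis sample):

    import numpy as np

    rng = np.random.default_rng(1)
    n_deals, window = 27, 21                 # e.g., the [-10/+10] event window
    actual = rng.normal(0.0, 0.02, (n_deals, window))    # realized returns
    expected = rng.normal(0.0, 0.01, (n_deals, window))  # e.g., market-model fit

    abnormal = actual - expected             # AR for deal i, day t
    car = abnormal.sum(axis=1)               # CAR per acquirer over the window
    caar = car.mean()                        # cumulative average abnormal return
    t_stat = caar / (car.std(ddof=1) / np.sqrt(n_deals))
    print(f"CAAR={caar:.4f}, t={t_stat:.2f}")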
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visible-Light Integrated Photonics for 3D-Printing and Trapped-Ion Systems</title>
<link href="https://hdl.handle.net/1721.1/152677" rel="alternate"/>
<author>
<name>Corsetti, Sabrina M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152677</id>
<updated>2023-11-03T04:09:47Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Visible-Light Integrated Photonics for 3D-Printing and Trapped-Ion Systems
Corsetti, Sabrina M.
Silicon photonics has enabled next-generation optical technologies that have facilitated revolutionary advances for numerous fields spanning science and engineering, including computing, communications, sensing, and quantum engineering. In recent years, the advent of visible-light integrated photonics platforms has opened up the potential for further diverse applications. This thesis builds upon these recent technologies to demonstrate novel applications of visible-light integrated photonics.&#13;
&#13;
First, we combine the fields of silicon photonics and photochemistry to propose the first chip-based 3D printer, consisting of only a single millimeter-scale photonic chip without any moving parts that emits reconfigurable visible-light holograms up into a simple stationary resin well to enable non-mechanical volumetric 3D printing. This work presents a highly-compact, portable, and low-cost solution for the next generation of 3D printers.&#13;
&#13;
Next, we propose integrated-photonics-based system architectures and the design of key integrated-photonics components for both polarization-gradient and electromagnetically-induced-transparency cooling of trapped ions. Further, we experimentally demonstrate a pair of polarization-diverse gratings and design the first integrated polarization rotators and splitters at blue wavelengths, representing a fundamental stepping stone on the path to advanced operations for integrated-photonics-based trapped-ion quantum systems involving multiple polarizations.&#13;
&#13;
Finally, we demonstrate optical trapping and tweezing of microspheres and cancer cells using an integrated optical phased array for the first time, representing a two-orders-of-magnitude increase in the standoff distance of integrated optical tweezers and the first cell experiments using single-beam integrated optical tweezers.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Data Attribution: From Classifiers to Generative Models</title>
<link href="https://hdl.handle.net/1721.1/152676" rel="alternate"/>
<author>
<name>Georgiev, Kristian</name>
</author>
<id>https://hdl.handle.net/1721.1/152676</id>
<updated>2023-11-03T04:08:26Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Data Attribution: From Classifiers to Generative Models
Georgiev, Kristian
The goal of data attribution is to trace model predictions back to training data. Despite a long line of work towards this goal, existing approaches to data attribution tend to force users to choose between computational tractability and efficacy. That is, computationally tractable methods can struggle with accurately attributing model predictions in non-convex settings (e.g., in the context of deep neural networks), while methods that are effective in such regimes require training thousands of models, which makes them impractical for large models or datasets. Moreover, existing methods are often tailored to the supervised learning setting, and are not well-defined for generative models.&#13;
&#13;
In this thesis, we introduce TRAK (Tracing with the Randomly-projected After Kernel), a data attribution method that is both effective and computationally tractable for large-scale, differentiable models. In particular, by leveraging only a handful of trained models, TRAK can match the performance of attribution methods that require training thousands of models. We first demonstrate the utility of TRAK across various modalities and scales in the supervised setting: image classifiers trained on ImageNet, vision-language models (CLIP), and language models (BERT and mT5). Then, we extend TRAK to the generative setting, and show that it can be used to attribute different classes of diffusion models (DDPMs and LDMs).
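
A minimal sketch of the gradient-sketching idea at TRAK's core, comparing examples through randomly projected per-example gradients (illustrative only; this is not the actual TRAK implementation or its API):

    import numpy as np

    rng = np.random.default_rng(0)
    n_params, proj_dim = 20_000, 256
    # Fixed random projection shared across all examples.
    P = rng.normal(size=(n_params, proj_dim)) / np.sqrt(proj_dim)

    def sketch(grad):
        # grad: flattened per-example gradient of the model output.
        return grad @ P

    g_train = sketch(rng.normal(size=n_params))  # placeholder gradients
    g_test = sketch(rng.normal(size=n_params))
    print(float(g_test @ g_train))  # kernel-style attribution score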
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Hidden Network: addressing digital equity through meaningful connectivity in urban India</title>
<link href="https://hdl.handle.net/1721.1/152674" rel="alternate"/>
<author>
<name>Agrawal, Surbhi</name>
</author>
<id>https://hdl.handle.net/1721.1/152674</id>
<updated>2023-11-03T03:30:11Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Hidden Network: addressing digital equity through meaningful connectivity in urban India
Agrawal, Surbhi
This research explores the transformative impact of digital technologies, particularly mobile internet access, on digital equity in informal settlements in urban India. It investigates the nationwide expansion of 4G LTE infrastructure, driven by Jio's cost-effective high-speed data telecom revolution, which led to a shift towards smartphone-first internet access across diverse socio-economic classes. A market analysis demonstrates the rise of digital applications, enabling financial transactions, e-commerce, and service deliveries. Additionally, the study investigates internet activity patterns in New Delhi, revealing that infrastructure and connectivity are more significant predictors of digital equity than literacy rates. Notably, the research highlights the pivotal role played by Civil Society Organizations (CSOs) in promoting digital equity through initiatives in these urban informal settlements, emphasizing the significance of community engagement and technology-awareness efforts. It centers on the human aspect of technology, utilizing a smartphone-friendly website to communicate research findings in an accessible format. The research seeks to empower residents, enhance digital inclusion, and bridge the digital divide through community-centric interventions. The central research question guiding this work is to identify key determinants and barriers in achieving digital equity for marginalized communities in urban informal settlements and explore effective strategies to bridge the digital divide for their empowerment and socioeconomic upliftment.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Power Failure Cascade Prediction using Machine Learning</title>
<link href="https://hdl.handle.net/1721.1/152673" rel="alternate"/>
<author>
<name>Chadaga, Sathwik P.</name>
</author>
<id>https://hdl.handle.net/1721.1/152673</id>
<updated>2023-11-03T03:01:03Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Power Failure Cascade Prediction using Machine Learning
Chadaga, Sathwik P.
We consider the problem of predicting power failure cascades due to branch failures. We propose several flow-free models using machine learning techniques like support vector machines, naive Bayes classifiers, and logistic regression. These models predict the grid states at every generation of a cascade process given the initial contingency. Further, we also propose a model based on graph neural networks (GNNs) that predicts cascades from the initial contingency and power injection values. We train the proposed models using a cascade sequence data pool generated from simulations. We then evaluate our models at various levels of granularity. We present several error metrics that gauge the models’ ability to predict the failure size, the final grid state, and the failure time steps of each branch within the cascade. We benchmark the proposed models against the influence model proposed in the literature. We show that the proposed machine learning models outperform the influence models under every metric. We also show that the graph neural network model, in addition to being generic over randomly scaled power injection values, outperforms multiple influence models that are built specifically for their corresponding loading profiles. Finally, we show that the proposed models reduce the computational time by almost two orders of magnitude.
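
A minimal flow-free baseline in the spirit described above: one logistic-regression classifier per branch, mapping the current binary grid state to that branch's state at the next cascade generation (the data here are random placeholders, not simulation output):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_samples, n_branches = 500, 30
    state_t = rng.integers(0, 2, (n_samples, n_branches))     # 1 = branch failed
    state_next = rng.integers(0, 2, (n_samples, n_branches))  # next generation

    models = [LogisticRegression(max_iter=200).fit(state_t, state_next[:, b])
              for b in range(n_branches)]
    pred = np.column_stack([m.predict(state_t) for m in models])
    print("per-branch accuracy:", (pred == state_next).mean())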
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tracking Sargassum in the Caribbean: The Design, Deployment, and Validation of a Low-Cost Surface Drifter</title>
<link href="https://hdl.handle.net/1721.1/152672" rel="alternate"/>
<author>
<name>Pixa, Chase R.</name>
</author>
<id>https://hdl.handle.net/1721.1/152672</id>
<updated>2023-11-03T03:31:18Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Tracking Sargassum in the Caribbean: The Design, Deployment, and Validation of a Low-Cost Surface Drifter
Pixa, Chase R.
This thesis presents the development of a low-cost surface drifter designed to track and monitor the abundant Sargassum seaweed in the Caribbean. The phenomenon of the Great Atlantic Sargassum Belt (GASB), inundating coastlines in the northern equatorial Atlantic and Gulf of Mexico, has raised concerns due to its negative impacts on marine ecosystems, coastal communities, and tourism. The introduction section provides background information on the arrival of Sargassum in the Caribbean and its ecological significance.&#13;
&#13;
One of the key motivations behind the drifter's development is the potential use of Sargassum as a feedstock for biofuel production. A comprehensive literature review assesses the feasibility of utilizing Sargassum for biofuels, taking into account infrastructure, economics, and scientific challenges. Although Sargassum holds promise as a renewable biomass source, several hurdles must be addressed, including achieving consistent biomass production, maturing processing techniques, and the lack of industrial-scale biofuel plants using macroalgae.&#13;
&#13;
The core of the thesis is dedicated to the surface drifter development and field trials. Iterative trials are conducted to design a drifter that entangles with Sargassum, providing in situ movement data to complement remote sensing and modeling efforts. The drifter's design is optimized to mimic Sargassum rafts, and successful deployments off the coast of Puerto Rico demonstrate the potential for effective tracking. The drifter's association with Sargassum rafts is validated through satellite imagery and wind and current data.&#13;
&#13;
In parallel, a low-cost chemical sensing drifter is introduced in the thesis. This advanced drifter iteration incorporates self-validation mechanisms for Sargassum entanglement and enables the measurement of dissolved gases. The chemical sensing capabilities enhance the understanding of Sargassum rafts' dynamics and their environmental impact.&#13;
&#13;
The thesis concludes by summarizing the key findings and implications of the research. The low-cost surface drifters have shown promising potential for tracking Sargassum and studying its movement patterns within the GASB. The drifter's effectiveness in entangling with Sargassum provides valuable insights into the seaweed's behavior and could help improve existing remote sensing and modeling techniques.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling and Optimization of Tunable Insulated Hybrid Cooling to Extend Food Shelf-life Using Scalable and Affordable Materials</title>
<link href="https://hdl.handle.net/1721.1/152669" rel="alternate"/>
<author>
<name>Ko, Young</name>
</author>
<id>https://hdl.handle.net/1721.1/152669</id>
<updated>2023-11-03T03:45:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Modeling and Optimization of Tunable Insulated Hybrid Cooling to Extend Food Shelf-life Using Scalable and Affordable Materials
Ko, Young
Cooling is a pivotal technology for tackling the global food crisis by extending food shelf-life. More than 15% of food loss in developing countries is due to improper food storage temperatures. Every 10℃ temperature drop can increase food shelf-life by 2-3 times. However, conventional electrically powered cooling technology, such as vapor-compression refrigeration, is largely unavailable in developing countries. To this end, passive cooling, including radiative and evaporative cooling, is a promising solution that enables daytime sub-ambient temperatures without any power requirement. However, radiative and evaporative cooling are constrained by low energy density, climate conditions, and significant water consumption.&#13;
&#13;
In this work, we combine evaporative and radiative cooling into tunable insulated hybrid cooling (TIHC) that can deliver low-cost, high-performance passive cooling for post-harvest foods. TIHC comprises three functional layers: a hydrophobic porous membrane for radiative cooling, a polyacrylamide hydrogel for evaporative cooling, and an aluminum sheet as a substrate. Instead of state-of-the-art radiative cooling materials that require complex and expensive fabrication, TIHC leverages commercially mass-produced hydrophobic porous membranes to achieve optical selectivity. In addition, an air gap between the membrane and the hydrogel insulates the cooling structure to reduce environmental heat gain. Concurrently, the air gap provides tunability to optimize the cooling performance by modifying the insulation thickness.&#13;
&#13;
Based on the TIHC concept, we implement a one-dimensional heat and mass transfer model to predict the cooling performance. We accelerate the computation by a physics-based approximation of radiative heat transfer, which decouples it from conductive heat transfer. As a result, the model can simulate cooling performance for diverse design parameters, including the optical properties of the hydrophobic porous membrane, air gap, polyacrylamide hydrogel, and storage free-space thickness. Next, we fabricate a surface-level TIHC cooler and characterize its cooling performance through outdoor cooling experiments. Cooling power and stagnation temperature difference for three different hydrophobic porous membranes (polyethylene film, polypropylene film, and polytetrafluoroethylene film) are measured and compared to the model predictions. We obtain good agreement between the experimental data and the model predictions. However, the cooling performance of all tested hydrophobic porous membranes is inferior to pure evaporative cooling due to insufficient solar reflectance. Nevertheless, the TIHC cooler achieved 81.6% of the temperature drop of the pure evaporative cooler with 48.8% less water consumption throughout the day.&#13;
&#13;
Finally, we propose several optimization strategies to improve the TIHC cooling performance. We identify the optical properties the hydrophobic porous membrane needs in order to outperform pure evaporative cooling. We also discover a shift in the trend of cooling power with air gap thickness depending on the food temperature, from which optimal air gap thicknesses that maximize cooling power are determined. Eventually, we simulate the transient food temperature profile and quantitatively predict the food shelf-life under real-time weather conditions. The simple design of TIHC food storage is expected to improve the shelf-life of red tomatoes by up to 231.7% in Nairobi, Kenya. Given its compact structure, low-cost scalable materials, and tunable cooling performance, we expect the successful deployment of TIHC food storage to bring promising benefits to farmers, businesses, and households in developing countries.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Computational Methods for Biomedical Imaging</title>
<link href="https://hdl.handle.net/1721.1/152668" rel="alternate"/>
<author>
<name>Gerlach, Connor Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/152668</id>
<updated>2023-11-03T03:08:04Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Computational Methods for Biomedical Imaging
Gerlach, Connor Michael
This work aims to survey and advance the state of the art in methods for biomedical imaging and disease diagnosis. We demonstrate the generation of non-diffracting beams using Lee holography, and argue that the rich depth information made possible by these beams is well-suited for machine learning applications, where 2D images can contain 3D contextual information without the added computational overhead of performing 3D convolutions. We begin with a review of important non-diffracting beams in the existing literature, and proceed to discuss the experimental design necessary for their generation. We then demonstrate the experimental generation of these beams, including the novel generation of a rotating beam and a needle beam via Lee holography. This is followed by the presentation and analysis of a particular semi-supervised machine learning method, contrastive learning, and a novel demonstration of how transfer learning can further improve the representations made by contrastive learning.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>RL-SAR: A Robotic System for Fine-Grained RFID&#13;
Localization using RL-based Synthetic Aperture&#13;
Radar</title>
<link href="https://hdl.handle.net/1721.1/152667" rel="alternate"/>
<author>
<name>Chen, Weitung</name>
</author>
<id>https://hdl.handle.net/1721.1/152667</id>
<updated>2023-11-03T03:18:29Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">RL-SAR: A Robotic System for Fine-Grained RFID&#13;
Localization using RL-based Synthetic Aperture&#13;
Radar
Chen, Weitung
Efficient localization of RFID-tagged items is crucial in scenarios that require tracking and managing a large inventory. Current systems for fine-grained RFID localization have shown limitations, since they only collect measurements on a pre-defined trajectory or optimize measurement locations for a single tag. Thus, there is a need for an RFID localization system that can autonomously optimize for multiple tags and adaptively relocalize tags with lower confidence to achieve more precise and efficient localization.&#13;
&#13;
We introduce RL-SAR, an end-to-end autonomous Synthetic Aperture Radar (SAR) based RFID localization system that uses a Reinforcement Learning (RL) algorithm to determine the optimal trajectory for localizing multiple tags. We implemented this system with an antenna moving on a ceiling-mounted 2D track. The core of the system is an RL-based trajectory optimization algorithm for collecting RF measurements. Based on these RF measurements, we developed a data processing pipeline to compute the estimated tag locations along with their confidence metrics, derived from the RF SAR hologram. The RL algorithm leverages the confidence metrics associated with the tags and is capable of learning a strategy that minimizes the antenna’s traveled distance while enhancing localization accuracy.&#13;
&#13;
We built and evaluated a proof-of-concept prototype of RL-SAR. Experimental evaluation demonstrates a mean 3D localization accuracy of 0.244m and the capability to locate 15 tags within an average scanning distance of 19.14m. We compared our algorithm to naive baselines and show that the baselines require an 86% longer trajectory than RL-SAR. Our results show the potential for achieving robust and efficient localization to enhance current inventory processes across the manufacturing, retail, and logistics sectors.
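
A toy version of the SAR imaging step at the heart of such systems: phase-compensate every antenna measurement for each candidate location and sum coherently, then read the estimate off the hologram peak (geometry and signals are invented for illustration):

    import numpy as np

    c, f = 3e8, 915e6                        # UHF RFID carrier
    wavelen = c / f
    ant_xy = np.stack([np.linspace(0, 2, 50), np.zeros(50)], axis=1)
    true_xy = np.array([1.2, 1.5])           # hypothetical tag location

    d = np.linalg.norm(ant_xy - true_xy, axis=1)
    meas = np.exp(-1j * 4 * np.pi * d / wavelen)  # ideal round-trip phases

    xs, ys = np.linspace(0, 2, 81), np.linspace(0.5, 2.5, 81)
    holo = np.zeros((len(ys), len(xs)))
    for iy, y in enumerate(ys):
        for ix, x in enumerate(xs):
            dd = np.linalg.norm(ant_xy - np.array([x, y]), axis=1)
            holo[iy, ix] = abs(np.sum(meas * np.exp(1j * 4 * np.pi * dd / wavelen)))

    iy, ix = np.unravel_index(holo.argmax(), holo.shape)
    print("estimated tag location:", xs[ix], ys[iy])  # near (1.2, 1.5)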
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Chemistry in a new key: Surplus, soy, and the history of sustainable enterprise in the United States, 1934-1950</title>
<link href="https://hdl.handle.net/1721.1/152666" rel="alternate"/>
<author>
<name>La Rock, Zachary</name>
</author>
<id>https://hdl.handle.net/1721.1/152666</id>
<updated>2023-11-03T03:22:06Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Chemistry in a new key: Surplus, soy, and the history of sustainable enterprise in the United States, 1934-1950
La Rock, Zachary
In the mid-1930s, a prominent group of industrialists, politicians, and farmers in the United States rallied around chemurgy, an emergent field of applied chemistry that sought to transform post-World War I agricultural surplus into industrial commodities. Ephemeral but wide-ranging in its scope, the “chemical revolution” that chemurgy’s proponents envisioned was a promise that the ends of agriculture and those of industry might be hybridized if the former dedicated itself to the cultivation of plant-based chemical compounds for the latter’s manipulation. In so doing, chemurgy became, in the eyes of its advocates, something of a panacea: for raw material scarcity, for Dust Bowl land degradation, and for underemployment caused by the Great Depression and racial segregation after Civil War Reconstruction. Under the banner of this hard-to-pronounce neologism, automaker Henry Ford and soil scientist George Washington Carver united in unlikely friendship and a quest to find new industrial applications for already existing plants, especially the soybean.&#13;
&#13;
Historicizing the futures that chemurgy’s allies, especially Ford and Carver, advocated, two distinct versions of the field emerge. Ford’s chemurgy entailed autarchic, unregulated mass production of single crops that linked farms, factories, and a white American workforce ever more closely as they worked to harvest profits for captains of industry. That of Carver, meanwhile, privileged the diversification of arable land and self-maintenance of a black base of growers in a context marked by land dispossession and accumulation under racial capitalism. Almost a century since chemurgy was coined, it is worth revisiting this long-forgotten movement as a progenitor of contemporary calls that processes of industrial production be low-waste, renewable, even “green.” The tensions internal to this modernist doctrine of scientific praxis, which anchored innovation firmly in the soil, situate a North American genealogy of the logics by which today’s industries of sustainable enterprise replicate ecological and economic inequities of the past.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling, Analysis, and Design of&#13;
Switched-Capacitor Battery Cell Balancers</title>
<link href="https://hdl.handle.net/1721.1/152665" rel="alternate"/>
<author>
<name>Lopez, Mario A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152665</id>
<updated>2023-11-03T03:02:56Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Modeling, Analysis, and Design of&#13;
Switched-Capacitor Battery Cell Balancers
Lopez, Mario A.
Battery systems have become crucial components in many modern technological solutions. Battery balancers are among the most important parts of these systems because they play a significant role in the battery’s lifespan and performance. A novel capacitive-based balancer was designed and tested for two-cell and four-cell batteries. The key parameters that were optimized are efficiency, balancing time, volume, and cost. A theoretical model of the circuit was derived to guide design optimization. Additionally, simulations were created to predict performance. Custom printed circuit boards were developed and tested.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Patches As Agents</title>
<link href="https://hdl.handle.net/1721.1/152663" rel="alternate"/>
<author>
<name>Zheng, Winnie</name>
</author>
<id>https://hdl.handle.net/1721.1/152663</id>
<updated>2023-11-03T03:03:32Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Patches As Agents
Zheng, Winnie
StarLogo Nova is a powerful agent-based game and simulation programming environment designed for classroom learning. This modeling tool allows students to apply their creativity and replicate real-world phenomena through game-like coding and scientific knowledge. However, the current agent-to-environment interaction is limited: the environment (terrain) can be affected by an agent’s actions but is unable to initiate actions autonomously. This thesis proposes an innovative approach to StarLogo Nova by transforming the static environment into a dynamic structure made of thousands of patch agents. In this new framework, the traditional static terrain is composed of a collection of independent patch agents, each with its own behaviors and traits. This transformation enables the patch agents to actively interact with each other and with surrounding agents, expanding the potential for more complex and responsive simulations.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Reinforcement-Learning-based Robot Navigation with 3D Scene Graphs</title>
<link href="https://hdl.handle.net/1721.1/152662" rel="alternate"/>
<author>
<name>Muriga, Veronica</name>
</author>
<id>https://hdl.handle.net/1721.1/152662</id>
<updated>2023-11-03T03:21:27Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Towards Reinforcement-Learning-based Robot Navigation with 3D Scene Graphs
Muriga, Veronica
Applying Reinforcement Learning (RL) for autonomous navigation has enormous potential in several robotics applications, including search and rescue operations. RL circumvents the need to manually specify a control policy for navigation and allows capturing aspects that are difficult to describe without relying on learning, e.g., that survivors or objects of interest are more likely to be found in specific regions of the environment. This is relevant for navigation policies guiding autonomous exploration and object search. To improve the performance of RL models guiding autonomous agents, we use 3D Scene Graphs (3DSGs) as a map representation. Previous work has shown that RL policies based on offline 3DSGs produce promising results in simulation, and this work takes initial steps towards extending these findings to 3DSGs produced online by Hydra, a new spatial perception system that builds 3DSGs in real-time. The work also provides an initial integration of the RL policies previously trained and evaluated in simulation [1] on a Unitree A1 quadruped robot. While the results are too preliminary to be conclusive, the thesis takes several integration steps towards deploying scene-graph-based RL policies for navigation on real robots.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Storytelling Entrepreneur Has No Clothes: Risks and Rewards of Narrative Pitching</title>
<link href="https://hdl.handle.net/1721.1/152659" rel="alternate"/>
<author>
<name>Turner, Bradley</name>
</author>
<id>https://hdl.handle.net/1721.1/152659</id>
<updated>2023-11-03T03:03:39Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">The Storytelling Entrepreneur Has No Clothes: Risks and Rewards of Narrative Pitching
Turner, Bradley
Scholars have shown that individuals and organizations can tell resonant stories, or narratives, to persuade audiences to evaluate them more highly. Yet other research has cast doubt on the persuasive power of storytelling, particularly for professional audiences like investors. To explore the conditions of narrative persuasion, I qualitatively study the practice of storytelling by entrepreneurs and responses by angel investors. By coding a representative sample of 330 pitches on the reality show Shark Tank, I produce the first catalog of startup stories in pitches, focusing on character, tropes, temporalities, and shapes. This catalog illustrates isomorphism in pitching and the targets of narrative claims (e.g., the entrepreneur more than the market opportunity). While all Shark Tank entrepreneurs narrate answers to investors’ questions, only a third tell a story in the elevator pitch. Even when investors acknowledge such stories as high-quality, coherent, and resonant, they may still discount, dismiss, or counter them, instead demanding data, fact, and “substance.” I hypothesize that these limits to narrative persuasion derive not only from a competing institutional logic, but also from narrative’s malleability and conventional usage. I contribute the first catalog of startup stories and a novel theory of narrative backlash to entrepreneurship research, strategic communication, and institutional theory.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empirical Bayes via ERM and Rademacher complexities: the Poisson model</title>
<link href="https://hdl.handle.net/1721.1/152656" rel="alternate"/>
<author>
<name>Teh, Anzo Zhao Yang</name>
</author>
<id>https://hdl.handle.net/1721.1/152656</id>
<updated>2023-11-03T03:04:16Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Empirical Bayes via ERM and Rademacher complexities: the Poisson model
Teh, Anzo Zhao Yang
We consider the problem of empirical Bayes estimation for (multivariate) Poisson means. Existing solutions that have been shown theoretically optimal for minimizing the regret (excess risk over the Bayesian oracle that knows the prior) have several shortcomings. For example, the classical Robbins estimator does not retain the monotonicity property of the Bayes estimator and performs poorly under moderate sample sizes. Estimators based on the minimum-distance and non-parametric maximum likelihood (NPMLE) methods correct these issues, but are computationally expensive, with complexity growing exponentially with dimension. Extending the approach of Barbehenn and Zhao (2022), in this work we construct monotone estimators based on empirical risk minimization (ERM) that retain similar theoretical guarantees and can be computed much more efficiently. Adapting the idea of offset Rademacher complexity of Liang et al. (2015) to the non-standard loss and function class in empirical Bayes, we show that the shape-constrained ERM estimator attains the minimax regret within constant factors in one dimension and within logarithmic factors in multiple dimensions.
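For orientation, the standard objects involved can be written out (textbook definitions, not new results from this thesis). The Bayes estimator for the Poisson model and its Robbins plug-in are

\[
f^*(y) = \mathbb{E}[\theta \mid Y = y] = (y+1)\,\frac{m(y+1)}{m(y)},
\qquad
\hat{f}_{\mathrm{Robbins}}(y) = (y+1)\,\frac{N(y+1)}{N(y)},
\]

where \(m\) is the marginal pmf of \(Y\) under the prior and \(N(y)\) counts training observations equal to \(y\); the regret of an estimator is its excess mean-squared error over \(f^*\). The ratio of noisy empirical counts is exactly what breaks monotonicity in \(y\) for Robbins, the defect the shape-constrained ERM estimator is designed to repair.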
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Spatial Accelerator Generation and Optimization for Tensor Applications</title>
<link href="https://hdl.handle.net/1721.1/152655" rel="alternate"/>
<author>
<name>Zhang, Zhekai</name>
</author>
<id>https://hdl.handle.net/1721.1/152655</id>
<updated>2023-11-03T03:03:07Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Spatial Accelerator Generation and Optimization for Tensor Applications
Zhang, Zhekai
Modern foundation models and generative AI applications require multiple input modalities (both vision and language), which increases the demand for flexible accelerator architectures.&#13;
&#13;
Existing frameworks suffer from a trade-off between design flexibility and RTL-generation productivity: they are either limited to very few hand-written templates or cannot generate the RTL automatically.&#13;
&#13;
To address this challenge, we propose the LEGO framework, which automatically generates and optimizes spatial architecture designs in the front end and outputs synthesizable RTL code in the back end without RTL templates. The LEGO front end finds all possible interconnections between function units and determines the memory-system shape by solving integer linear equations, then establishes the connections with a minimum-spanning-tree-based algorithm and a breadth-first-search-based heuristic for merging different spatial dataflow designs. The LEGO back end then translates the hardware into a primitive-level graph to perform lower-level optimizations, and applies a set of linear-programming algorithms to optimally insert pipeline registers and reduce the overhead of unused logic when switching spatial dataflows.&#13;
&#13;
Our evaluation demonstrates that LEGO achieves 3.2× speedup and 2.4× energy efficiency compared to the prior accelerator generator Gemmini, and can generate a single architecture for diverse modern foundation models in generative AI applications.
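As a hedged, toy-scale illustration of one ingredient only (a minimum spanning tree over candidate interconnections; the graph, weights, and node names below are invented, and LEGO's real algorithm operates on much richer function-unit graphs with additional heuristics):

# Toy MST over candidate function-unit interconnections (illustrative only).
import networkx as nx

# Nodes stand in for function units; edge weights approximate the wire
# cost of connections that some dataflow requires (assumed numbers).
g = nx.Graph()
g.add_weighted_edges_from([
    ("pe0", "pe1", 1.0), ("pe1", "pe2", 1.0),
    ("pe0", "pe2", 1.8), ("pe2", "buf", 0.5),
    ("pe0", "buf", 2.2),
])
mst = nx.minimum_spanning_tree(g)
print(sorted(mst.edges(data="weight")))   # the cheapest connecting subset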
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving Patient Access and Comprehension of Clinical Notes: Leveraging Large Language Models to Enhance Readability and Understanding</title>
<link href="https://hdl.handle.net/1721.1/152654" rel="alternate"/>
<author>
<name>Mannhardt, Niklas</name>
</author>
<id>https://hdl.handle.net/1721.1/152654</id>
<updated>2023-11-03T03:57:24Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Improving Patient Access and Comprehension of Clinical Notes: Leveraging Large Language Models to Enhance Readability and Understanding
Mannhardt, Niklas
Patient access to clinical notes has demonstrated numerous benefits, including an increased sense of control over their condition, enhanced engagement, improved medication adherence, and greater clinician accountability. However, the presence of medical jargon, abbreviations, and complex medical concepts within clinical notes hinders patient comprehension, thus diminishing the positive effects of note accessibility. These notes, primarily intended for clinicians, often appear disorganized and contain an abundance of technical terms. Breast cancer patients, in particular, face information overload and experience taxing symptoms related to their treatment, exacerbating this issue. Although some clinicians are adapting their writing style to meet patients’ needs, time constraints limit the feasibility of comprehensive note-taking. We propose the development of a patient-facing tool, in the form of a web application, to make information contained in clinical notes more accessible by leveraging machine learning models to simplify, summarize, extract information from, and add context to clinical notes. Through a series of user studies, we demonstrate that our proposed augmentations to clinical notes significantly improve comprehension and enhance patients’ reading experience.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Polyethylene-Based Multifunctional Composite Material for Radiation Shielding, Passive Thermoregulation, and In-Situ Fabrication for Space Exploration</title>
<link href="https://hdl.handle.net/1721.1/152651" rel="alternate"/>
<author>
<name>Xu, Duo</name>
</author>
<id>https://hdl.handle.net/1721.1/152651</id>
<updated>2023-11-03T03:02:09Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Polyethylene-Based Multifunctional Composite Material for Radiation Shielding, Passive Thermoregulation, and In-Situ Fabrication for Space Exploration
Xu, Duo
Ionizing radiation sources like energetic electrons, protons, gamma rays, and secondary particles, such as thermal neutrons, are abundant in low Earth orbit (LEO) and deep space environments like lunar and Martian surfaces, where space missions are often conducted. Such sources of space radiation can induce severe damage to human tissue and critical electronic components, thereby posing significant risks to space exploration. Moreover, the absence of convection in space confines heat removal exclusively to thermal radiation, which negatively impacts the performance of electronic components, unless they are designed to function at elevated temperatures. Due to the considerably high cost of transporting materials and equipment to space, a cost-effective solution for mitigating ionizing radiation and overheating risks is crucial, especially for extended, deep-space missions. This thesis presents a polyethylene-based multifunctional composite material aiming to achieve simultaneous space radiation attenuation, passive thermoregulation, and in-situ fabrication.&#13;
&#13;
The first component of the thesis explores radiation shielding. While polyethylene is recognized as one of the best candidates for primary radiation shielding against Galactic Cosmic Rays (GCRs) and Solar Particle Events (SPEs), it does not adequately mitigate secondary particles. The proposed composite material, comprising polyethylene and boron-rich fillers, aims to match polyethylene's GCR and SPE performance while enhancing attenuation of thermal neutrons, the predominant secondary particle. The radiation shielding performance against GCRs and SPEs on the Martian surface is illustrated with the deterministic radiation transport tool OLTARIS, and thermal neutron attenuation is demonstrated with the Monte Carlo particle transport tool PHITS and confirmed by EQ-SANS measurements.&#13;
&#13;
The second component of the thesis addresses the feasibility of additive in-situ fabrication via fused deposition modeling (FDM). Due to the difficulties of FDM printing polyethylene, the development of reliable printing approaches has been shown to be a challenging task. In the thesis, a reliable printing process is reported for an optimized blend of different polyethylene resins, with or without various nanoparticles as dopants.&#13;
&#13;
The third component focuses on passive thermoregulation performance. The optimized FDM printing process enables the fabrication of polyethylene with various fillers, hence providing design flexibility in filler selection, including compounds with boron and even other materials. With this flexibility, a coupled optics-heat transfer model has been developed to select materials providing both shielding and passive thermal regulation properties. This model accounts for the penetration of solar irradiation, power generation from inside, and temperature gradient across the layer, establishing the relationship between the optical properties of each material component and the temperature of the inner layer of the multifunctional material.&#13;
&#13;
Although this thesis primarily targets extraterrestrial applications, the techniques developed in each component have broader applicability. For instance, the radiation transport simulation could be employed in other radiation environments such as nuclear reactors; the optimized FDM printing process allows for the additive manufacture of polyethylene, a versatile and affordable thermoplastic material not previously reliably fabricated; and the coupled optics-heat transfer model has broader applications such as thermal transport in multilayer systems characterized by significant temperature gradients and solar irradiation penetration.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Information-theoretic Algorithms for Model-free Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/152649" rel="alternate"/>
<author>
<name>Wu, Farrell Eldrian S.</name>
</author>
<id>https://hdl.handle.net/1721.1/152649</id>
<updated>2023-11-03T03:36:33Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Information-theoretic Algorithms for Model-free Reinforcement Learning
Wu, Farrell Eldrian S.
In this work, we propose a model-free reinforcement learning algorithm for infinite-horizon, average-reward decision processes where the transition function has a finite yet unknown dependence on history, and where the induced Markov Decision Process is assumed to be weakly communicating. The algorithm combines the Lempel-Ziv (LZ) parsing tree structure for states introduced in [4] with the optimistic Q-learning approach of [9]. We mathematically analyze the algorithm towards showing sublinear regret, providing major steps towards such a proof. In doing so, we reduce the proof to showing sub-linearity of a key quantity related to the sum of an uncertainty metric at each step. Simulations of the algorithm are left to future work.
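For flavor, a hedged sketch of generic count-based optimistic Q-learning follows (a discounted stand-in with an exploration bonus; the average-reward variant analyzed here and the LZ parsing-tree state construction of [4] are not reproduced, and the constants are assumptions):

# Generic optimistic Q-learning sketch (illustrative, not the thesis's algorithm).
import collections, math

Q = collections.defaultdict(float)     # value estimates
N = collections.defaultdict(int)       # visit counts
GAMMA, C = 0.99, 1.0                   # discount and bonus scale (assumed)

def act(state, actions):
    # Optimism: prefer actions whose value plus exploration bonus is highest.
    def ucb(a):
        return Q[(state, a)] + C / math.sqrt(N[(state, a)] + 1)
    return max(actions, key=ucb)

def update(s, a, r, s2, actions):
    # Standard Q-learning update with a count-decayed learning rate.
    N[(s, a)] += 1
    alpha = 1.0 / N[(s, a)]
    target = r + GAMMA * max(Q[(s2, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])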
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Balancing Memory Efficiency and Accuracy in Spectral-Based Graph Transformers</title>
<link href="https://hdl.handle.net/1721.1/152648" rel="alternate"/>
<author>
<name>Ho, Kelly</name>
</author>
<id>https://hdl.handle.net/1721.1/152648</id>
<updated>2023-11-03T03:32:42Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Balancing Memory Efficiency and Accuracy in Spectral-Based Graph Transformers
Ho, Kelly
The transformer architecture has been a significant driving force behind advancements in deep learning, yet transformer-based models for graph representation learning have not caught up to mainstream Graph Neural Network (GNN) variants. A major limitation is the large O(n²) memory consumption of graph transformers, where n is the number of nodes. Therefore, we develop a memory-efficient graph transformer for node classification, capable of handling graphs with thousands of nodes while maintaining accuracy. Specifically, we reduce the memory use in the attention mechanism and add a random-walk positional encoding to improve upon the SAN graph transformer architecture. We evaluate our model on standard node classification benchmarks: Cora, Citeseer, and Chameleon. Unlike SAN, which runs out of memory, our memory-efficient graph transformer can be run on these benchmarks. Compared with landmark GNN models GCN and GAT, our graph transformer requires 27.92% less memory while being competitive in accuracy.
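One standard way to cut attention memory, shown here only as a hedged sketch (row-chunked attention in NumPy; the mechanism developed in this thesis also modifies SAN's spectral attention, which is not reproduced):

# Row-chunked attention: O(n * block) live memory instead of an n x n matrix.
import numpy as np

def chunked_attention(Q, K, V, block=256):
    n, d = Q.shape
    out = np.empty_like(V)
    for start in range(0, n, block):
        stop = min(start + block, n)
        scores = Q[start:stop] @ K.T / np.sqrt(d)     # only (block, n) at a time
        scores -= scores.max(axis=1, keepdims=True)   # numerically stable softmax
        w = np.exp(scores)
        w /= w.sum(axis=1, keepdims=True)
        out[start:stop] = w @ V
    return out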
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Low-Cost Fiber Extrusion Device for Educational Purposes: Redesign, Manufacture, and Computer Vision Integration</title>
<link href="https://hdl.handle.net/1721.1/152647" rel="alternate"/>
<author>
<name>Sefah, Gary</name>
</author>
<id>https://hdl.handle.net/1721.1/152647</id>
<updated>2023-11-03T03:41:19Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Low-Cost Fiber Extrusion Device for Educational Purposes: Redesign, Manufacture, and Computer Vision Integration
Sefah, Gary
The Fiber Extrusion Device (FrED) serves as a hands-on learning tool and laboratory experience, simulating the continuous fiber draw process to provide insights into data acquisition, control systems, and smart manufacturing. This system enables learners to conduct experiments, manipulate manufacturing parameters and control systems, gather data, and conduct analyses. While successful classroom activities have been conducted using FrED, the preceding model's cost precludes widespread distribution for remote learning, a growing trend in education.&#13;
&#13;
This thesis encompasses a series of enhancements to FrED aimed at refining its stability, cooling mechanisms, modularity, noise reduction, size, and overall functionality. Pulley variations were introduced to enhance fiber stability. Cooling strategies and the pulley system’s flexibility were optimized for fiber stability, and noise reduction measures focused on the gear system. The camera system was significantly redesigned, enabling more precise fiber diameter measurement. In addition, a shift from a Teensy to a Raspberry Pi improved system integration. Code for the extrusion and gear motors, heater, and thermistor was rewritten, alongside redesigns of the extrusion system, PCB, and camera module.&#13;
&#13;
The final FrED design achieved a 42% cost reduction ($159) and a 25% weight reduction (1.7 kg) with optimal fiber cooling and stability. Seamless integration of computer vision for diameter measurement and data collection was also achieved, enabling application in PID control and enhancing the teaching of machine learning principles.
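To illustrate the PID application mentioned above, a minimal control-loop sketch follows (the gains, setpoint, and spool-speed actuation model are toy assumptions, not FrED's actual tuning):

# Minimal PID loop for regulating fiber diameter via draw-spool speed.
KP, KI, KD = 0.8, 0.2, 0.05   # toy gains (assumed)
TARGET_UM = 500.0             # target fiber diameter, microns (assumed)
DT = 0.05                     # control period, s (assumed)

integral = 0.0
prev_err = 0.0

def pid_step(measured_um):
    """Return a spool-speed adjustment from one vision diameter reading."""
    global integral, prev_err
    err = TARGET_UM - measured_um
    integral += err * DT
    deriv = (err - prev_err) / DT
    prev_err = err
    return KP * err + KI * integral + KD * deriv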
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Cost Optimization in Sheet Metal Manufacturing by Tuning the Sheet Metal Nesting Strategy Based on Sheet Utilization and Downstream Part Handling Costs</title>
<link href="https://hdl.handle.net/1721.1/152646" rel="alternate"/>
<author>
<name>Liggett, J. Chandler</name>
</author>
<id>https://hdl.handle.net/1721.1/152646</id>
<updated>2023-11-03T03:34:55Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Cost Optimization in Sheet Metal Manufacturing by Tuning the Sheet Metal Nesting Strategy Based on Sheet Utilization and Downstream Part Handling Costs
Liggett, J. Chandler
The cutting of sheet metal blanks from raw sheet stock is a crucial process in the sheet metal fabrication industry. One of the primary cost drivers for this process is sheet utilization, which is the amount of raw material processed into a usable blank compared to the total raw material processed. Nesting is a method that efficiently packs blanks onto raw sheets with the aim of reducing scrap generation by improving material utilization. Modern nesting algorithms are quite successful at maximizing sheet utilization given an explicit set of available raw sheets and a set of blanks defined as candidates for nesting. Because of this, nesting efficiency and thus sheet utilization are primarily determined by the characteristics of the candidate blanks and the number of candidate blanks that can be nested together. Nesting strategies may be chosen to include the maximum number of possible candidate blanks for maximized efficiency. On the other hand, nesting strategies may instead restrict the available part candidates for the purpose of reducing sorting and handling complexities downstream of the cutting operation. In between these two extremes, it is hypothesized that there exists an optimum nesting strategy that balances improved sheet utilization with the negative cost effect of more intensive handling requirements. In this work, the effect of varying nesting strategies on sheet utilization is studied in the context of a sheet metal manufacturing operation with plant locations across the globe. Cost models are produced that inform the selection of a globally optimized nesting strategy, and throughput models are considered which inform the validity of cost-optimized strategies. Additionally, regional differences in cost drivers are studied, and an optimized nesting strategy is validated for deployment across global plant locations. This work provides a detailed approach to optimizing sheet utilization in sheet metal manufacturing through selection of an optimized nesting strategy.
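The hypothesized interior optimum is easy to visualize with a toy cost model (a hedged sketch; the functional forms and constants below are invented for illustration and are not the calibrated cost models of this work):

# Toy trade-off: utilization gains saturate with more candidate blanks,
# while downstream handling cost grows with the candidate count.
import math

def total_cost(n_candidates, material=100.0, handling_per_part=0.8):
    utilization = 0.55 + 0.35 * (1 - math.exp(-n_candidates / 20))
    return material / utilization + handling_per_part * n_candidates

best = min(range(1, 200), key=total_cost)
print(best, round(total_cost(best), 2))   # an interior optimum, not an extreme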
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigations into Ultra-Low-Power Underwater Imaging</title>
<link href="https://hdl.handle.net/1721.1/152645" rel="alternate"/>
<author>
<name>Naeem, Nazish</name>
</author>
<id>https://hdl.handle.net/1721.1/152645</id>
<updated>2023-11-03T03:21:43Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Investigations into Ultra-Low-Power Underwater Imaging
Naeem, Nazish
Imaging underwater environments is crucial to advancing our understanding of marine organisms, climate change, marine geology, aquaculture farming, and underwater archaeology. Despite significant advances in underwater imaging, scalable and long-term imaging of underwater environments is still an open problem. One of the main challenges in scalably imaging the ocean is that existing underwater cameras are too power-hungry for long-term observations. Recent work on ultra-low-power underwater imaging has shown that in-situ wireless underwater imaging is possible using fully submerged battery-free cameras and acoustic backscatter. Even though this is a promising advance, enabling truly useful ultra-low-power underwater imaging remains difficult due to many challenges and constraints, including poor image quality (due to marine snow, hazing, and lighting conditions), limited energy, limited memory and computational power, and the low bandwidth of the acoustic channel.&#13;
&#13;
This thesis investigates the various challenges that efficient and ultra-low-power underwater imaging faces and offers directions for solving them. In particular, we first survey the various challenges of ultra-low-power underwater imaging. Subsequently, we offer three solutions for addressing these challenges. First, we propose a simple denoising/desnowing method for ultra-low-power underwater imaging that shows ~2 dB improvement in the quality of the images while reducing the memory consumption by ~17x when compared to the state-of-the-art systems. Second, we perform ultra-low-power underwater edge inference that is ~19x more memory efficient when compared to the baseline model with comparable accuracies. Then, we propose a solution for enabling ultra-low-power color imaging that is ~10x less power-hungry than the state-of-the-art battery-free underwater imaging system. We conclude by offering a path to integrating these solutions into future end-to-end ultra-low-power underwater imaging systems.
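As a generic point of reference only (median filtering is a classic speck-removal baseline; the memory-efficient desnowing method above differs and is not reproduced here):

# Classic marine-snow suppression baseline via median filtering.
import cv2

def desnow(gray_frame, ksize=5):
    # Median filtering removes small bright specks (snow particles) while
    # preserving edges; gray_frame is a uint8 single-channel image.
    return cv2.medianBlur(gray_frame, ksize)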
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optically Controlled Vertical GaN finFET for Power Applications</title>
<link href="https://hdl.handle.net/1721.1/152641" rel="alternate"/>
<author>
<name>Hsia, Jung-Han</name>
</author>
<id>https://hdl.handle.net/1721.1/152641</id>
<updated>2023-11-03T03:40:16Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Optically Controlled Vertical GaN finFET for Power Applications
Hsia, Jung-Han
With the increasing demand for electricity, efficient power electronics with high voltage and current capabilities are becoming crucial in many applications. However, current power devices are mostly electrically triggered. Multilevel converters made of such devices often require complicated gate-driving circuits and are susceptible to electromagnetic interference (EMI). Optically triggered power devices can significantly reduce circuit complexity and EMI susceptibility while improving system reliability.&#13;
&#13;
This thesis presents the first demonstration of an optically controlled vertical GaN finFET. The first part of the thesis describes the physics and design of the device assisted by simulation, followed by fabrication using a Design-Technology Co-Optimization (DTCO) approach. Finally, device measurements are presented. Our devices have shown a maximum current density of J_DS &gt; 90 A/cm² at V_DS = 3 V, triggered by a low-power 365 nm LED, which translates into an optical responsivity greater than 10⁵ A/W. These preliminary results show promising aspects of our devices for enabling future high-voltage, high-current power systems.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design of a Mass-Manufacturable Globally Distributable Passive Prosthetic Foot</title>
<link href="https://hdl.handle.net/1721.1/152640" rel="alternate"/>
<author>
<name>Irani, Urvaksh</name>
</author>
<id>https://hdl.handle.net/1721.1/152640</id>
<updated>2023-11-03T03:18:33Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Design of a Mass-Manufacturable Globally Distributable Passive Prosthetic Foot
Irani, Urvaksh
A lack of affordable energy storage and return (ESR) prosthetic feet compels amputees in low- and middle-income countries (LMIC) to adopt feet that do not match the performance of ESR feet distributed in high-income countries. The GEAR Lab at MIT developed the LLTE design framework, which systematically alters the geometry and stiffness of a foot to design ESR feet from a low-cost material (Nylon 6/6) that enables close replication of able-bodied gait. LLTE-optimized foot prototypes have been tested in a long-term field trial in India and in gait labs in the United States. The feet demonstrated robustness to use in activities of daily living in India, and as good as or better biomechanical performance and user satisfaction than commercial carbon fiber feet sold in the United States. However, these prototypes were not designed to be commercial products, but rather to demonstrate the viability of the LLTE design framework. The prototypes were CNC machined, resulting in a cost of &gt;$200 per foot (a significant expense for many individuals in LMIC), and were only compatible with a single attachment system, thus limiting the potential for adoption by LMIC distributors, each with their own unique attachment system. This thesis aims to translate these proof-of-concept prototypes into commercial products by making the foot mass-manufacturable and easily adoptable by major distribution networks, and by incorporating a few upgrades: improved aesthetics, coronal compliance, and a sandal toe.&#13;
&#13;
The upgraded foot described in this thesis is composed of a mass-manufacturable keel encased in a polyurethane foam overmold resembling a biological foot, with a ruggedized sole and two swappable attachment modules. The swappable attachment modules can be easily fastened to the foot to facilitate dissemination through the major distribution networks in LMIC. The first module ensures compatibility with the Bhagwan Mahavir Viklang Sahayta Samiti (BMVSS) attachment system, while the second module makes the foot compatible with both the ICRC attachment system and a pyramid adaptor. An upgraded architecture with a c-channel cross-section (to facilitate injection molding) was incorporated into the LLTE design framework, and an optimization for a 60 kg person with a size 7 foot was run. The resulting optimized design has an LLTE value of ~0.1 and is thus expected to retain the high performance of previously tested LLTE prototypes.&#13;
&#13;
The mass-manufacturable keel was mechanically tested to validate that it behaved as predicted, and over-molded by Vibram to produce a final prototype. The prototypes will be ISO tested and then used in a field trial to compare their performance to existing LMIC feet. Following the field trial, a sizing system for a product line (with a finite number of feet) will be developed such that a large percentage of the population can be prescribed a foot that is either optimal or close to optimal for them. Commercialization of this upgraded foot would offer amputees an affordable ESR option that can readily be adopted by major distribution networks in LMIC.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Real-Time Motion Prediction for Efficient Human-Robot Collaboration</title>
<link href="https://hdl.handle.net/1721.1/152639" rel="alternate"/>
<author>
<name>Kothari, Aadi</name>
</author>
<id>https://hdl.handle.net/1721.1/152639</id>
<updated>2023-11-03T03:56:19Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Real-Time Motion Prediction for Efficient Human-Robot Collaboration
Kothari, Aadi
Human motion prediction is an essential step for efficient and safe human-robot collaboration. Current methods either rely purely on representing the human joints in some form of neural-network architecture or fit hyper-parameters offline with regression models in the hope of capturing a model that encompasses human motion. While these methods provide good initial results, they miss the opportunity to leverage well-studied human body kinematic models, as well as body and scene constraints, which can boost the efficacy of these prediction frameworks while explicitly avoiding implausible human joint configurations. We propose a novel human motion prediction framework that incorporates human joint constraints and scene constraints in a Gaussian Process Regression (GPR) model to predict human motion over a set time horizon. This formulation is combined with an online context-aware constraints model to leverage task-dependent motions. It is tested on a human arm kinematic model and implemented on a human-robot collaborative setup with a UR5 robot arm to demonstrate the real-time capability of our approach. Simulations were also performed on datasets such as HA4M and ANDY. The simulation and experimental results demonstrate considerable improvements in a Gaussian Process framework when these constraints are explicitly considered.
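A hedged sketch of the basic ingredient follows: GPR extrapolation of a joint trajectory with a crude post-hoc joint-limit projection (scikit-learn; the kernel choice, toy data, and limits are assumptions, and the constraint handling in this work is richer than a projection step):

# GPR over a toy joint-angle trajectory, then clip to assumed joint limits.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

t = np.linspace(0, 1, 30)[:, None]          # observation times
theta = np.sin(3 * t).ravel()               # observed joint angle (toy signal)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(t, theta)

t_future = np.linspace(1, 1.3, 10)[:, None] # prediction horizon
pred, std = gpr.predict(t_future, return_std=True)
pred = np.clip(pred, -1.9, 1.9)             # enforce joint limits (assumed values)
# std quantifies predictive uncertainty over the horizon.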
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Modular Visual Data Manipulation Framework for Data Exploration in the Consumer Packaged Goods Industry</title>
<link href="https://hdl.handle.net/1721.1/152637" rel="alternate"/>
<author>
<name>Huang, Allen</name>
</author>
<id>https://hdl.handle.net/1721.1/152637</id>
<updated>2023-11-03T03:01:56Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Developing a Modular Visual Data Manipulation Framework for Data Exploration in the Consumer Packaged Goods Industry
Huang, Allen
The rapidly increasing reliance on data analytics to drive strategic decision-making in today’s digital economy means that efficient and user-friendly data analysis tools are becoming increasingly important. Even as understanding and manipulating data becomes more critical, the technical complexity of traditional query languages like SQL often poses a substantial barrier to non-technical users.&#13;
&#13;
In this thesis, we present a fully visual analytics framework that can be integrated with arbitrary relational data stored in an analytics platform. We describe the design and implementation of a frontend client by which non-technical users can construct rich queries involving relational operations such as aggregations and filters on promotional data and view their outputs in tabular or graphical form. We also describe a protocol for uniquely and unambiguously describing these queries, and the design and implementation of an engine by which these queries are efficiently executed.
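To suggest what an unambiguous query description might look like (a hedged sketch only; the actual protocol and its field names are not spelled out here, so these dataclasses are assumptions):

# Illustrative, declarative query description for a visual query builder.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Filter:
    column: str
    op: str          # e.g. "eq", "gt"
    value: object

@dataclass
class Query:
    table: str
    group_by: List[str] = field(default_factory=list)
    aggregations: List[str] = field(default_factory=list)  # e.g. "sum(spend)"
    filters: List[Filter] = field(default_factory=list)

q = Query(table="promotions",
          group_by=["region"],
          aggregations=["sum(spend)"],
          filters=[Filter("year", "eq", 2023)])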
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Neuro-Symbolic Skills for Bilevel Planning</title>
<link href="https://hdl.handle.net/1721.1/152636" rel="alternate"/>
<author>
<name>Athalye, Ashay</name>
</author>
<id>https://hdl.handle.net/1721.1/152636</id>
<updated>2023-11-03T04:06:14Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Learning Neuro-Symbolic Skills for Bilevel Planning
Athalye, Ashay
It is challenging for robots to solve tasks in environments with continuous state and action spaces, long horizons, and sparse feedback. Hierarchical approaches such as task and motion planning (TAMP) address this challenge, enabling efficient problem solving by decomposing decision-making into two or more levels of abstraction. In a setting where expert demonstrations, symbolic predicates for state abstraction, and manually designed parameterized policies are given, prior work has shown how to learn symbolic operators and neural samplers for TAMP. But manually designing parameterized policies can be difficult and impractical, so we would instead like our agent to learn them. In this work, we develop a method for learning parameterized policies in combination with operators and samplers from demonstrations. These components are packaged into modular neuro-symbolic skills and sequenced together with search-then-sample TAMP to solve new tasks. In experiments across four robotics domains, we show that our approach – bilevel planning with neuro-symbolic skills – can solve a wide range of tasks with varying initial states, objects, and goals, outperforming six baselines and ablations.
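The packaging described above might be pictured as follows (a conceptual, hedged sketch; the interface names are invented and the operator, sampler, and policy types used in this work are not reproduced):

# Conceptual shape of a neuro-symbolic skill (illustrative only).
from dataclasses import dataclass
from typing import Callable, FrozenSet

@dataclass
class Skill:
    preconditions: FrozenSet[str]          # symbolic predicates required to apply
    effects: FrozenSet[str]                # predicates made true on completion
    sampler: Callable[[object], object]    # proposes continuous parameters
    policy: Callable[[object, object], object]  # maps (state, params) to an action

    def applicable(self, abstract_state):
        # Symbolic level: the skill is a candidate when its preconditions hold.
        return self.preconditions.issubset(abstract_state)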
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>CS2 Student Programming Performance Prediction and Intervention</title>
<link href="https://hdl.handle.net/1721.1/152634" rel="alternate"/>
<author>
<name>Dargan, Hope</name>
</author>
<id>https://hdl.handle.net/1721.1/152634</id>
<updated>2023-11-03T03:14:01Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">CS2 Student Programming Performance Prediction and Intervention
Dargan, Hope
As the number of students in a course grows, it becomes increasingly difficult for instructors to identify and help students who are struggling to develop a good understanding of the material. This study investigates scalable prediction and intervention methods in the context of 6.101, an intermediate programming course at MIT. First, a broad investigation was conducted into early predictive factors for students who earn a C, D, F, or Withdraw (CDFW) from 6.101. Results suggested that limited prior programming experience was associated with higher CDFW rates, as were other factors such as heavy early office-hour usage and lower grades in certain prerequisites. Prediction efforts focused on students of interest (SOI) who initially committed to the course but were likely to earn a C, D, F, or Later Withdraw (CDFLW) from 6.101. A hand-tuned model that combined various predictive factors identified SOI with 75 percent accuracy (13 percent sensitivity, 90 percent specificity) three weeks into the semester. To help SOI develop their programming skills, encourage independent problem solving, and increase feelings of belonging and community within the CS department, a series of optional weekly programming practice sessions was developed and implemented. While the results of the intervention are inconclusive due to the small number of students who attended sessions and responded to post-semester surveys, the available data from two semesters suggest that the intervention had limited impact in all three design areas. Overall, SOI had lower exam scores, received more help with assignments, and reported lower ratings of belonging and community at the end of the semester compared to non-SOI. These findings have potential broader implications for how “at-risk” students are defined, how predictive models are created and used, and how interventions are designed.
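For reference, the reported accuracy, sensitivity, and specificity are standard confusion-matrix quantities (sketched generically below; the function is illustrative and the counts are not the study's data):

# Standard confusion-matrix rates over students of interest (SOI).
def rates(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # fraction of true SOI the model flags
    specificity = tn / (tn + fp)   # fraction of non-SOI correctly passed over
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy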
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A mechanically-derived contact model for adhesive elastic-perfectly plastic particles</title>
<link href="https://hdl.handle.net/1721.1/152633" rel="alternate"/>
<author>
<name>Zunker, William R.</name>
</author>
<id>https://hdl.handle.net/1721.1/152633</id>
<updated>2023-11-03T03:03:12Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">A mechanically-derived contact model for adhesive elastic-perfectly plastic particles
Zunker, William R.
A contact model able to capture the response of interacting adhesive elastic-perfectly plastic particles under a variety of loadings is presented. The contact model is valid through each of the three major contact regimes: elastic, fully-plastic, and bulk elastic---all with and without adhesion. &#13;
&#13;
In the elastic through fully-plastic contact regimes, the model is built upon the Method of Dimensionality Reduction, which allows the problem of a 3D axisymmetric contact to be mapped to a semi-equivalent 1D problem of a rigid indenter penetrating a bed of independent Hookean springs. Plasticity is accounted for by continuously varying the 1D indenter profile subject to a constraint on the contact pressure. Unloading falls out naturally, simply requiring lifting the 1D indenter out of the springs and tracking the force. Notably, by accounting for the incompressible nature of this plastic deformation, the contact model is able to detect and evolve, with good precision, secondary contacts caused by outward displacement of the free surface. JKR-type adhesion is recovered seamlessly by simply allowing the springs to 'stick' to the 1D indenter's surface.&#13;
&#13;
To complete the contact model an additional treatment for the bulk elastic contact regime, characterized by a rapid stiffening in the force-displacement curve, is proposed. A simple formulation is presented for an additional bulk elastic force related to the particle's mean surface displacement, contact areas, particle volume, and bulk modulus. A novel criterion for triggering this force (i.e. detecting the bulk elastic regime) related to the remaining free surface area of the particle is also given. This bulk elastic force is then superimposed with the force response given by the Method of Dimensionality Reduction to achieve a contact model capable of capturing a variety of complex loadings. In this way, the methodology for treating the bulk elastic regime presented here stands independent and could be appended to any contact model. &#13;
&#13;
Direct comparisons of all elements of the contact model are made to finite element simulations, revealing the accurate predictive capabilities of the contact model.
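The elastic core of the Method of Dimensionality Reduction is compact enough to sketch (a standard textbook construction, not the full model of this thesis; plasticity, adhesion, and the bulk elastic term are omitted, and the material constants are assumptions):

# MDR for a rigid sphere: the transformed 1D profile g(x) = x^2 / R pressed
# into a spring bed of stiffness E* dx reproduces the Hertzian force.
import numpy as np

E_STAR = 1.0e9      # contact modulus, Pa (assumed)
R = 1.0e-3          # sphere radius, m (assumed)
d = 1.0e-5          # indentation depth, m (assumed)

x = np.linspace(-np.sqrt(R * d), np.sqrt(R * d), 20001)
g = x**2 / R                           # MDR-transformed 1D indenter profile
force = E_STAR * np.trapz(np.clip(d - g, 0.0, None), x)

hertz = (4.0 / 3.0) * E_STAR * np.sqrt(R) * d**1.5
print(force, hertz)                    # the two agree

With this profile, the 1D spring-bed integral yields F = (4/3)E*√R d^{3/2} exactly, which is the sanity check the printout confirms.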
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Opportunities for System Dynamics Research in Operations Management for Public Policy</title>
<link href="https://hdl.handle.net/1721.1/152631" rel="alternate"/>
<author>
<name>Lopez, Jose</name>
</author>
<id>https://hdl.handle.net/1721.1/152631</id>
<updated>2023-11-03T03:28:08Z</updated>
<published>2023-09-01T00:00:00Z</published>
<summary type="text">Opportunities for System Dynamics Research in Operations Management for Public Policy
Lopez, Jose
Operations management in the public policy context is extremely complex, with many mutually interacting factors characterized by feedback loops, delays, and nonlinearities, as well as multiple stakeholders pursuing divergent objectives. Prior researchers have called for a systems approach in these contexts, arguing that standard OM methodologies such as mathematical programming and queuing theory often cannot fully address these problems. In this work, we create a roadmap for researchers—both those who are familiar with system dynamics and those who are not—for the expanded use of system dynamics in studying public policy-related OM problems. We review and organize relevant system dynamics literature in both traditional operations management venues and public policy venues unfamiliar to OM audiences. We then identify a set of interesting open questions and, by topic, potential system dynamics building blocks for answering them. Leveraging this review, we describe under what conditions system dynamics is most appropriate. We then identify several overarching methodological and domain gaps for future research. Finally, we propose a process for using system dynamics with traditional operations management methodologies.
</summary>
<dc:date>2023-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Evaluation of Ion Transfer Capillary Geometry on Sensitivity of a Desorption Electrospray Ionization and Mass Spectrometry System</title>
<link href="https://hdl.handle.net/1721.1/152612" rel="alternate"/>
<author>
<name>Vinakollu, Nagashumrith Venkata</name>
</author>
<id>https://hdl.handle.net/1721.1/152612</id>
<updated>2023-11-01T04:08:19Z</updated>
<published>2021-02-01T00:00:00Z</published>
<summary type="text">Evaluation of Ion Transfer Capillary Geometry on Sensitivity of a Desorption Electrospray Ionization and Mass Spectrometry System
Vinakollu, Nagashumrith Venkata
This research examines the effect of ion transfer capillary geometry on the sensitivity of the Desorption Electrospray Ionization – Mass Spectrometry (DESI-MS) process. Previous work has shown that heating the ion transfer capillary to 450°C improves the resolution of images taken by the DESI-MS owing to increased ion desolvation. This thesis studies how changing the inner diameter and cross-sectional profile of the capillary can improve ion desolvation through increased heat transfer to the center of the flow. Increasing heat-transfer efficiency can obviate such high temperatures and will increase flexibility in the design of the capillary heater.&#13;
&#13;
The setup of this experiment involved modifying existing components to allow for rapid testing of many ion transfer capillary geometries. Mass spectrum data for sample sections of pig liver were collected, as these biological samples are acceptably homogeneous with a known mass-to-charge (m/z) ratio of 885. Signal intensity within the 880–890 m/z range is analyzed to reveal the impact of capillary geometry. Sources of variation, such as within-sample variation and sample-to-sample variation, are characterized to reveal the true impact of the variables.&#13;
&#13;
The results show that decreasing the maximum particle distance from a wall can increase the sensitivity of the ion flow to heating. The best capillary cross-section provides nearly a 4x increase in sensitivity when compared to a circular capillary with a similar flow area. Pursuing these capillary designs will improve not only sample resolution and imaging time but also client satisfaction with the Waters DESI-MS system.
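The wall-distance argument can be made concrete with a toy comparison (a hedged sketch; the dimensions below are invented and are not the tested geometries):

# For equal flow area, a thin annular gap puts every particle far closer
# to a heated wall than a circular bore does.
import math

area = math.pi * (0.25e-3) ** 2           # flow area of a 0.5 mm ID bore
r_circle = math.sqrt(area / math.pi)
max_dist_circle = r_circle                # centerline particle: 0.25 mm

r_outer = 1.0e-3                          # annulus outer radius (assumed)
# Solve pi*(ro^2 - ri^2) = area for the inner radius.
r_inner = math.sqrt(r_outer**2 - area / math.pi)
max_dist_annulus = (r_outer - r_inner) / 2
print(max_dist_circle, max_dist_annulus)  # the annulus distance is ~15x smaller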
</summary>
<dc:date>2021-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An empirical test of the Modigliani-Miller model of market valuation of growth firms</title>
<link href="https://hdl.handle.net/1721.1/152607" rel="alternate"/>
<author>
<name>Lewis, William Stewart.</name>
</author>
<id>https://hdl.handle.net/1721.1/152607</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1964-01-01T00:00:00Z</published>
<summary type="text">An empirical test of the Modigliani-Miller model of market valuation of growth firms
Lewis, William Stewart.
Thesis: M.S., Massachusetts Institute of Technology, School of Industrial Management, 1964; Appendix contains numerous pamphlets.; Includes bibliographical references (leaf 76).
</summary>
<dc:date>1964-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simulation and comparison of a permanent magnet DC brushless motor, induction motor, and variable reluctance motor</title>
<link href="https://hdl.handle.net/1721.1/152604" rel="alternate"/>
<author>
<name>Grunden, Joanne B.
            (Joanne Barbara)</name>
</author>
<id>https://hdl.handle.net/1721.1/152604</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1992-01-01T00:00:00Z</published>
<summary type="text">Simulation and comparison of a permanent magnet DC brushless motor, induction motor, and variable reluctance motor
Grunden, Joanne B.
            (Joanne Barbara)
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1992; Title as it appears in the M.I.T. Graduate List, June 1992: Modeling, simulation, and comparison of a permanent magnet DC brushless motor, induction motor, and variable reluctance motor.; Includes bibliographical references (leaf 188).
</summary>
<dc:date>1992-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigation on elastic anisotropy on x-ray stress measurement.</title>
<link href="https://hdl.handle.net/1721.1/152600" rel="alternate"/>
<author>
<name>Li, Fook-Kow.</name>
</author>
<id>https://hdl.handle.net/1721.1/152600</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Investigation on elastic anisotropy on x-ray stress measurement.
Li, Fook-Kow.
Thesis: M.S., Massachusetts Institute of Technology, Department of Metallurgy, 1945; Bibliography: leaves 78-80.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An investigation on constant-pressure combustion turbine cycles with water injection.</title>
<link href="https://hdl.handle.net/1721.1/152599" rel="alternate"/>
<author>
<name>Hu, Hesheng,
            1928-</name>
</author>
<id>https://hdl.handle.net/1721.1/152599</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">An investigation on constant-pressure combustion turbine cycles with water injection.
Hu, Hesheng,
            1928-
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1945; Bibliography: leaf 13.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Copolymerization of styrene and ethyl maleate.</title>
<link href="https://hdl.handle.net/1721.1/152598" rel="alternate"/>
<author>
<name>Leff, Miriam W.</name>
</author>
<id>https://hdl.handle.net/1721.1/152598</id>
<updated>2025-10-30T17:51:29Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Copolymerization of styrene and ethyl maleate.
Leff, Miriam W.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemistry, 1945; Bibliography; leaf 27.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Action of inorganic dehydrating agents on ethyl lactate.</title>
<link href="https://hdl.handle.net/1721.1/152597" rel="alternate"/>
<author>
<name>Hidalgo, Fausto Gaston.</name>
</author>
<id>https://hdl.handle.net/1721.1/152597</id>
<updated>2025-10-30T17:03:42Z</updated>
<published>1945-01-01T00:00:00Z</published>
<summary type="text">Action of inorganic dehydrating agents on ethyl lactate.
Hidalgo, Fausto Gaston.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1945; Bibliography: leaves 32-33.
</summary>
<dc:date>1945-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A theoretical investigation of the excitation of interstellar formaldehyde.</title>
<link href="https://hdl.handle.net/1721.1/152593" rel="alternate"/>
<author>
<name>Halket, Thomas Daniel.</name>
</author>
<id>https://hdl.handle.net/1721.1/152593</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1971-01-01T00:00:00Z</published>
<summary type="text">A theoretical investigation of the excitation of interstellar formaldehyde.
Halket, Thomas Daniel.
Thesis: M.S., Massachusetts Institute of Technology, Department of Physics, 1971; Vita.; Bibliography: leaves 62-63.
</summary>
<dc:date>1971-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An analysis of motivational patterns of managers and technical professionals in a manufacturing and development installation</title>
<link href="https://hdl.handle.net/1721.1/152588" rel="alternate"/>
<author>
<name>Rogers, James L.</name>
</author>
<id>https://hdl.handle.net/1721.1/152588</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1986-01-01T00:00:00Z</published>
<summary type="text">An analysis of motivational patterns of managers and technical professionals in a manufacturing and development installation
Rogers, James L.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1986; Bibliography: leaves 117-119.
</summary>
<dc:date>1986-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design for More Equitable Neighborhood Adaptation: Climate Resiliency and Public Space Planning in U.S. Border Colonias</title>
<link href="https://hdl.handle.net/1721.1/152510" rel="alternate"/>
<author>
<name>Strech, Mikaela</name>
</author>
<id>https://hdl.handle.net/1721.1/152510</id>
<updated>2023-10-19T03:40:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design for More Equitable Neighborhood Adaptation: Climate Resiliency and Public Space Planning in U.S. Border Colonias
Strech, Mikaela
The relationship between environmental harms and the political and economic marginalization of communities cannot be easily disentangled in today’s world. Consequently, this thesis reexamines the relationships between planners, designers, and communities in response to the environmental challenges that marginalized communities face. I advocate for beginning with incremental advancements in adaptation design, using community organization and a site-and-services approach to contend with resource constraints and urgent issues. Acknowledging that this design work simultaneously enhances social resiliency, I argue that the timeliness of this approach promotes resilience.&#13;
&#13;
The research analyzes design and planning strategies for neighborhood scale environmental design, drawing from case studies in Puerto Rico, Detroit, Nairobi, and Texas. These insights inform conceptual framework plans in three neighborhoods to test what an incremental, nature-based approach to environmental hazards might accomplish, and how. This thesis has a specific focus on US border colonias in Texas, where flooding and disparities in adaptation and recovery resources are especially relevant. Considering the projected growth of fringe neighborhoods across the United States, this study contributes to the dialogue on equitable resilience.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Awarding Equitably: a process design framework for city grantmakers</title>
<link href="https://hdl.handle.net/1721.1/152509" rel="alternate"/>
<author>
<name>Kalish, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/152509</id>
<updated>2023-10-19T03:05:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Awarding Equitably: a process design framework for city grantmakers
Kalish, Sarah
Internal processes such as hiring, procurement, and grantmaking are the hidden engine that powers the delivery of local government services. My research begins with a case study on designing the City of Boston’s organization-wide grantmaking process to standardize procedures. This effort became a priority due to the influx of ARPA funding, among other drivers related to digital transformation and a new mayoral administration. Through interviews with grants program managers, I documented the steps in the grants process and codified shared best practices in a grants process user guide. This initial exercise was a mechanical one, and it was limited: other considerations and values, namely equity, were integral to the work but only implicitly embedded in grantmaking.&#13;
&#13;
In my research to develop a more holistic process design framework, I discovered a gap in the literature on internally focused process design in public sector organizations. The process improvement discipline comes closest, but still lacks a systematic discussion of factors that influence process, including values, structures, norms, practices, and politics. In identifying these influences, I construct a framework that serves as an actionable toolkit for practitioners across government settings. I define five influences: philosophical values, organizational structures, cultural norms, operational practices, and political forces. For each, I outline definitions, principles, guiding questions, and complementary exercises. Then I apply the framework to analyze the Community Preservation Act (CPA), a Massachusetts-wide municipal grant program.&#13;
&#13;
There are further opportunities to apply the “five influences” framework to other internal processes across organizational contexts in public, private, and nonprofit sectors. Most importantly, the framework application must be user-friendly and actionable, and thoughtfully integrated into internal operations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Power and Control in Disinvested Affordable Housing: San Francisco’s Limited Equity Housing Co-operatives</title>
<link href="https://hdl.handle.net/1721.1/152508" rel="alternate"/>
<author>
<name>Cohen, Dylan</name>
</author>
<id>https://hdl.handle.net/1721.1/152508</id>
<updated>2023-10-19T03:57:57Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Power and Control in Disinvested Affordable Housing: San Francisco’s Limited Equity Housing Co-operatives
Cohen, Dylan
The promise of the co-operative housing typology extends beyond providing stable, affordable housing. Co-operatives strive to offer a resident-centered site of democratic participation, where ownership and limited equity combine to provide both collective and shareholder ownership of a valuable community asset. Contentiously, local governments and civic institutions seek certainty and control in housing, prioritizing technical expertise and institutional relationships over deeper investment in resident-owner capacity. Affordable housing practitioners face complex and politicized projects, where co-op health is often threatened by mistrust, institutional failures, and funding scarcity. &#13;
&#13;
In San Francisco, more than 2,000 limited equity housing co-operative units constitute a significant portion of the city’s legacy 1960s and 70s federally-funded housing stock. Co-ops routinely fall into crisis, where residents rely on dysfunctional boards, ill-suited housing management companies, and insufficient government support for their survival. Numerous co-ops face critical survival questions, including deferred maintenance and disrepair, potential redevelopment, political instability, and waning institutional support.&#13;
&#13;
This client-linked thesis delves into the landscape of one local government's relationship with its co-operative housing ecosystem. Through dozens of interviews, a literature review, policy analysis, and several case studies of existing co-ops, this thesis elucidates present-day challenges and findings, and by discussing peer-city case studies of Vancouver, Canada, and Washington, D.C., proposes viable solutions charting a path forward.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proposal for New Commuter Rail Service and TOD Master Plan Along Guangzhou-Shenzhen Railway</title>
<link href="https://hdl.handle.net/1721.1/152507" rel="alternate"/>
<author>
<name>Pan, Yingu</name>
</author>
<id>https://hdl.handle.net/1721.1/152507</id>
<updated>2023-10-19T03:32:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Proposal for New Commuter Rail Service and TOD Master Plan Along Guangzhou-Shenzhen Railway
Pan, Yingu
The Guangzhou-Shenzhen Suburban Railroad Project seeks to transform regional transportation and urban development within the Greater Bay Area by introducing suburban rail services on the Guangshen Railway (GSR). This proposal outlines a framework centered on investment sectors, innovative methods, public-private collaborations, and public welfare initiatives, all bolstered by the creation of a Joint Development Company. The report highlights the importance of a people-first approach, station-city integration, and transit-oriented development to deliver a sustainable rail service that positively impacts local communities, businesses, and the environment. Through an analysis of current infrastructure, regional connectivity, and accessibility gaps, the proposal suggests strategies for rejuvenating the GSR and promoting economic integration. Featuring three case studies that demonstrate the current application of city-station integration and transit-oriented design principles in the region, the paper also provides three station redevelopment proposals complete with comprehensive master plans and urban design schemes that aim to offer insight and an evaluation framework for future research. This thesis contributes to future research and policymaking by establishing a robust foundation for the sustainable development and integration of cities with the suburban rail network in the Greater Bay Area.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Memorable, Legible, and Accessible Cities: Co-Stewarding Historic Preservation and Public Transportation Agendas in Boston and Hong Kong</title>
<link href="https://hdl.handle.net/1721.1/152506" rel="alternate"/>
<author>
<name>Hasenfratz 柳相宜, Shannon L. X.</name>
</author>
<id>https://hdl.handle.net/1721.1/152506</id>
<updated>2023-10-19T03:56:38Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Memorable, Legible, and Accessible Cities: Co-Stewarding Historic Preservation and Public Transportation Agendas in Boston and Hong Kong
Hasenfratz 柳相宜, Shannon L. X.
This thesis seeks to understand how planners, designers, and policymakers can identify and leverage shared goals between historic preservation and public transit planning to support a memorable, legible, and accessible public realm. Preservation and transportation agendas are often described as inherently opposed to one another, and are generally administered through separate bureaucracies. Rather than being in opposition, I argue that the goals of preservation and transit accessibility are well-aligned through a shared commitment to serving the public interest and fostering sustainable development. I explore this alignment by analyzing how two coastal cities, Boston and Hong Kong, have accommodated transit needs alongside the cultural legacy of their built environments—resulting in positive and negative impacts on achieving sustainable development goals. Insights from Hong Kong and Boston neighborhoods, gleaned through interviews, on-site observations, and mapping exercises, inform a set of opportunities for better fostering the synergies between historic preservation and transit planning. These recommendations, organized around opportunities for collaborative governance structures and processes, seek to improve the usability and enjoyment of public transit systems and historic sites to create memorable, legible, and accessible cities for the long term.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Story of Rubina: Lessons on Self-governance in Peruvian informal settlements and Considerations for Community Land Trusts</title>
<link href="https://hdl.handle.net/1721.1/152505" rel="alternate"/>
<author>
<name>Vila Skrzypek, Flavio</name>
</author>
<id>https://hdl.handle.net/1721.1/152505</id>
<updated>2023-10-19T03:52:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Story of Rubina: Lessons on Self-governance in Peruvian informal settlements and Considerations for Community Land Trusts
Vila Skrzypek, Flavio
Since the 1990s, the Peruvian government has introduced two policies to address informal settlements' property and housing challenges: the formalization titling policy and the certificate of possession policy. Both have caused adverse side effects: land speculation and land trafficking, respectively. This thesis studies the failure of these past policies and proposes that a new property regime - Community Land Trusts (CLTs) - might be the optimal way to address these property and housing challenges. First, I study why previous property policies failed to intervene in urban informality. Second, I conduct interviews to gather evidence on the self-governance of an informal settlement in Lima and compare it with the core components of different global CLT theories and models. Finally, I intersect both sections to learn about the potential and challenges of establishing a CLT in such an informal settlement. The implications of this thesis are a set of recommendations and additional research that the Peruvian government should consider when regulating CLTs in Peru.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Acoustic Metamaterials at the Microscale</title>
<link href="https://hdl.handle.net/1721.1/152502" rel="alternate"/>
<author>
<name>Sun, Rachel</name>
</author>
<id>https://hdl.handle.net/1721.1/152502</id>
<updated>2023-10-19T03:04:44Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Acoustic Metamaterials at the Microscale
Sun, Rachel
Micro-architected materials allow for tunability of extreme static mechanical properties such as stiffness, Poisson’s ratio, or strength. However, dynamic and acoustic properties of micro-architected materials remain largely unexplored, partially because it is challenging to measure their response at these scales. Dispersion resulting from Bragg scattering occurs at wavelengths which are dictated by the characteristic dimensions of the metamaterials, while local resonance remains wavelength-independent. Therefore, micro-architected materials have the potential to allow control of mechanical waves both at high (MHz) and medium-range (kHz) frequencies. &#13;
&#13;
Here, we design, fabricate, and characterize micro-architected materials with tunable mechanical and acoustic properties in the megahertz regime. Using a two-photon lithography prototyping method, we explore the response of a class of architected material morphologies with varied mass distribution, features down to ~1.5 µm, and unit cell sizes of 15 µm. We demonstrate that decoupling mass and stiffness by strategically placing micro-inertia affects the effective stiffness scaling of this class of acoustic metamaterials at the microscale. We present novel measurement techniques for wave velocity of three-dimensional architected materials that employ laser-ultrasonic principles, demonstrating a tunable range of wave velocities around 1000 m/s for different designs in a wide range of relative densities. We then validate their acoustic response numerically with Bloch wave analysis to determine their dispersion relation and rod-wave velocities. Our results provide a baseline to map the tunable acoustic metamaterial design space at the microscale and megahertz regime. These materials could have important implications in acoustic devices in microelectromechanical systems, biomedical imaging, and microscale waveguides.
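As a rough illustration of the Bragg-dispersion scaling described above, consider the textbook one-dimensional monatomic mass-spring chain; the unit cell size below echoes the 15 µm cells of this work, but the lumped mass and stiffness are invented values, not the thesis geometry (Python sketch):

import numpy as np

# Bragg dispersion of a 1D monatomic chain: w(k) = 2*sqrt(K/m)*|sin(k*a/2)|.
a = 15e-6    # unit cell size, m (order of the cells described above)
m = 1e-12    # lumped mass per cell, kg (assumed)
K = 50.0     # effective spring stiffness, N/m (assumed)

k = np.linspace(0.0, np.pi / a, 200)     # first Brillouin zone
omega = 2.0 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2.0))

c0 = a * np.sqrt(K / m)                  # long-wavelength (rod-wave) speed
print(f"low-frequency wave speed: {c0:.0f} m/s")
print(f"band-edge frequency: {omega[-1] / (2 * np.pi * 1e6):.2f} MHz")

With these assumed values the chain gives a rod-wave speed of order 100 m/s and a band edge in the low-MHz range, showing how the band-gap wavelength is tied to the unit cell dimension, while added micro-inertia (larger m at fixed K) lowers both.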
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Tabi. Tabbi. Tabique. Tabby.</title>
<link href="https://hdl.handle.net/1721.1/152501" rel="alternate"/>
<author>
<name>Idowu, Jola</name>
</author>
<id>https://hdl.handle.net/1721.1/152501</id>
<updated>2023-10-19T03:11:12Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Tabi. Tabbi. Tabique. Tabby.
Idowu, Jola
The uniqueness of tabby lies in its process of collecting local, accessible materials to produce concrete by estimation rather than exact measurement. The origin of tabby, as either the North African tabbi or the Spanish tapia, has long been debated, and no conclusive evidence points toward either location. This clouded absence of origin displays the character of tabby's history as both rooted and rootless: fluid and based in an intercultural exchange that removes tabby from the confinements of borders and a linear timeline of a beginning, middle, or end. In tabby's move to the Western Hemisphere, its existence is blurred across socio-cultural divides as a symbol of militaristic power, the plantation economy, and the homes of the slaves who built both. In the United States, tabby was composed of oyster shells sourced from Native American middens: the remnants and discarded materials collected by Native Americans years prior, holding a record of indigenous practices and colonial erasure. The introduction of Portland cement and the end of slavery completely changed the prevalence of tabby, which had relied on unpaid labor for the time-intensive process of burning and collecting oyster shells.&#13;
&#13;
However, despite its importance in American building culture, tabby is a material that has faded historically and materially. If one were to happen across a tabby structure today, its former marble-like finish would most likely show deterioration from weather damage and neglect, and its broken walls and floors would reveal the oyster shells beneath. In response, tabby structures across the country are undergoing many different types of preservationist practices, whether archaeological digs and recordkeeping, the physical preservation of tabby structures, or the continued use of oysters as a construction material in the American South. &#13;
&#13;
This project proposes a new approach to tabby preservation based on its connection to reuse and its subversion of cycles of capital by the enslaved and indigenous peoples associated with its labor. By archiving everyday practices involving oysters and tabby, I hope to rethink how we orient larger tactics of environmental and material resilience towards the stories and labor of marginalized peoples. In this context, material preservation becomes both a social and physical endeavor through the context of the American South and the shore becomes a place where processes of land, water, and people meet.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Case Study: LIHTC-to-Condo Conversion</title>
<link href="https://hdl.handle.net/1721.1/152499" rel="alternate"/>
<author>
<name>Glasgow, Rebecca</name>
</author>
<id>https://hdl.handle.net/1721.1/152499</id>
<updated>2023-10-19T03:01:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Case Study: LIHTC-to-Condo Conversion
Glasgow, Rebecca
By the end of the decade, approximately half of Low-Income Housing Tax Credit (LIHTC)-funded housing units are anticipated to reach the end of their affordability restrictions. This thesis examines the potential benefits and challenges associated with the transformation of LIHTC rental units into homeownership condominium units through an in-depth case study of Quality Hill Phase IIB, a LIHTC Rental-to-Affordable Condominium project based in Kansas City, Missouri. The case study identifies key regulatory and financial factors that contributed to the model’s initial success. Most significant was the legal theory that the Internal Revenue Service (IRS) has no jurisdiction after the 15-year compliance period and that sole jurisdiction lies with the State Housing Finance Agency (SHFA). This predicate was the basis for a private letter ruling granted by the IRS, with participation from the SHFA, that allowed a LIHTC tenant the right of first refusal to buy his or her unit as part of the condominium homeownership plan after year 15 of the compliance period. Despite the model’s initial success, the project grappled with substantial obstacles related to the 2008-2012 financial crisis, recapitalization of the capital partner, lack of end loan financing, and tenant eligibility issues that led to its eventual downfall. Despite these challenges, LIHTC-to-condominium conversions hold potential as a strategy for creating affordable homeownership options. The case study provides lessons learned and tools to be applied in a future condominium attempt. These include the use of tax code sections 108 and 183 and IRS Revenue Procedure 2014-12 to address the feasibility of the model, as well as securing mortgage financing from alternative lending institutions that can better accommodate low-income tenants. In conclusion, this research broadens the academic dialogue on rent-to-own models. By highlighting the primary challenges associated with this approach and offering practical insights, this thesis hopes to provide a valuable resource for stakeholders considering LIHTC for affordable homeownership solutions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Kids Table: A Report Conceptualizing Youth Empowerment and Food Planning Methods Through the Case Study of the Mattapan Food and Fitness Coalition</title>
<link href="https://hdl.handle.net/1721.1/152498" rel="alternate"/>
<author>
<name>Fall, Moctar N.</name>
</author>
<id>https://hdl.handle.net/1721.1/152498</id>
<updated>2023-10-19T03:45:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Kids Table: A Report Conceptualizing Youth Empowermentand Food Planning Methods Through the Case Study of theMattapan Food and Fitness Coalition
Fall, Moctar N.
There is an ageless saying directed towards youth (young adults aged 14-19) that continues to define and dictate their lives: youth are our future. Yet many governmental and planning institutions overlook the prospect of integrating the voices of youth, particularly youth of color, within decision-making processes that directly affect them and their communities. Youth should have the power to make key decisions around food security in their lived environments. In this thesis, I reveal the potential impacts youth can have when given adequate support and resources at the planning level, through food systems planning.&#13;
&#13;
Building on my former thesis, existing research, case studies, and historical analyses, and analyzing data from my client partner, the Mattapan Food and Fitness Coalition (MFFC), this thesis: 1. Delves into the history of youth rights and engagement in the United States; 2. Brings to the forefront the tools of food through the analysis of food planning and its empowering attributes in the community; 3. Shows the impact youth have had on their respective community foodscapes, with a primary focus on Mattapan and the MFFC; 4. Builds a framework at the crossroads of food planning, youth empowerment, and community decision making; and 5. Calls on institutions of governance and higher education not only to involve youth within urban food system decision-making models and designs, but also to support youth and food organizations aimed at improving the landscape and lived environments of their communities.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Disaster Diplomacy: The spatial impact of international reconstruction aid in the aftermath of the 2015 Gorkha earthquake in Nepal</title>
<link href="https://hdl.handle.net/1721.1/152497" rel="alternate"/>
<author>
<name>Karmakar, Ipshita</name>
</author>
<id>https://hdl.handle.net/1721.1/152497</id>
<updated>2023-10-19T03:58:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Disaster Diplomacy: The spatial impact of international reconstruction aid in the aftermath of the 2015 Gorkha earthquake in Nepal
Karmakar, Ipshita
This thesis aims to investigate the spatial implications of international reconstruction aid in the aftermath of the 2015 earthquake in Nepal, particularly in the urban municipality of Lalitpur.&#13;
&#13;
I explore how emergency reconstruction aid, operationalized as support from international NGOs, bilateral agencies, and multilateral organizations, has a spatial impact and imprint on cities. First, I examine the impact of the aid community on rents, land values, and infrastructural/amenity distribution within the wards of their operation. Second, I examine the impact of post-earthquake reconstruction projects leveraging international funding on urbanization patterns in the wards in which they are situated. To understand counterfactual trends, I examine the overall patterns of neighborhood externalities in earthquake-affected wards of Lalitpur where no international aid-funded projects or aid personnel are located.&#13;
&#13;
The argument advanced includes two suppositions that decipher the spatial implications of aid project presence and operational presence: 1) The increasing spatial clustering of physical outposts of international aid organizations’ headquarters, i.e. what I call here their operational presence, creates negative neighborhood externalities and change that privileges the rentier class rather than distributing housing, amenities, and infrastructure equitably across the city; 2) The presence of international aid funded reconstruction projects, i.e. their project presence, creates a change in both amenities and small business distribution within the wards in which they are situated, producing neighborhood change that accelerates inequity, but in ways unlike that of operational presence. I find that two wards within Lalitpur, Ward no. 2 and Ward no. 16, show significant negative neighborhood externality and change due to the presence of international reconstruction aid, as opposed to the rest of the municipality. &#13;
&#13;
Particularly, these wards saw an exponential increase in rent and housing values (in the case of Ward no. 2), a change in the nature and function of locally owned small businesses, and a tendency to cater to a rentier class composed of international aid workers and tourists, in contrast to the rest of the municipality (both Ward no. 2 and Ward no. 16).
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Multifamily Affordable Housing Energy Retrofit Strategy for Richmond, CA</title>
<link href="https://hdl.handle.net/1721.1/152496" rel="alternate"/>
<author>
<name>Gowda, Shivali P.</name>
</author>
<id>https://hdl.handle.net/1721.1/152496</id>
<updated>2023-10-19T03:07:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Multifamily Affordable Housing Energy Retrofit Strategy for Richmond, CA
Gowda, Shivali P.
Weatherization, energy efficiency, and electrification upgrades, together called energy retrofits, can reduce energy burden, provide health improvements through improved indoor air quality and increased comfort in the home, and reduce greenhouse gas emissions. This study explores how the City of Richmond, CA can incentivize weatherization, energy efficiency, and electrification upgrades, as well as solar installation, in multifamily affordable housing developments to provide these benefits to low-income residents in the City. Through interviews with energy program administrators, affordable housing providers, community-based organizations, and government agencies, this study identifies the key motivations, opportunities, and challenges of completing multifamily affordable housing energy retrofits in Richmond, CA. In addition, a comprehensive review of existing and upcoming federal, state, and local energy retrofit funding and resources was completed. Based on building permit data and survey data on utility payment structures and appliance fuel sources, existing affordable housing developments that are good candidates for electrification and solar installation in Richmond were identified. Drawing on the interview findings, literature review, funding information, and building stock analysis, the study recommends short-, medium-, and long-term programs the City of Richmond could implement to increase multifamily affordable housing energy retrofits, along with the staff capacity, funding requirements, and implementation timeline for each.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Olympic Challenge: Designing Equity into Mega-Events</title>
<link href="https://hdl.handle.net/1721.1/152495" rel="alternate"/>
<author>
<name>Velasquez-Soto, Sharon Jacqueline</name>
</author>
<id>https://hdl.handle.net/1721.1/152495</id>
<updated>2023-10-19T03:03:49Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Olympic Challenge: Designing Equity into Mega-Events
Velasquez-Soto, Sharon Jacqueline
In 2028, the City of Los Angeles will host the Olympic Games for the third time since the start of the last century, the first and second times being 1932 and 1984. While hosting the Olympics is regarded as a high honor with the potential to bring about significant and lasting benefits, it also presents challenges to the host municipality. Studies of mega-events like the Olympic Games cite place-based challenges such as displacement, gentrification, environmental damage, and lost opportunities to advance equitable development that outlasts the Olympics’ duration. One driver of these place-based challenges - and a manifestation of how communities of color have been left behind during mega-event planning - is the inequitable allocation of opportunities to build wealth, such as through diverse contracting. As such, more explicitly just contracting processes have been identified as one of many avenues that can help address the entrenched racial wealth gap in the United States and better forward equitable economic development through mega-event-induced business. &#13;
&#13;
This thesis investigates the processes and operations entailed in operationalizing equity through diverse procurement as Los Angeles prepares for the 2028 Olympics. Interviews with leaders of the small business community in Los Angeles, a former leader of Exposition Park (one site of the 2028 Olympics), the 2028 Los Angeles Olympic and Paralympic Organizing Committee, and City of Los Angeles employees confirm that procurement is a major opportunity to forward equity during the 2028 Games. In part, this is because the 2028 Games are billed as a “no-build Olympics,” meaning that little new construction will be required because Los Angeles already has a wealth of infrastructure. &#13;
&#13;
Borrowing the language of hazard mitigation from environmental planning and a framework for operationalizing equity in planning, this thesis evaluates the potential of diverse procurement and contracting in mega-events as a tool to minimize the known vulnerabilities of hosting mega-events like the Olympic Games, particularly for traditionally marginalized communities. The thesis leverages a prime, though imperfect, example of a more inclusive procurement program, that of the 1996 Olympics in Atlanta, to explore lessons learned about diverse procurement and contracting in that city. It concludes with an analysis of what a transfer of best practices from Atlanta could look like in Los Angeles in pursuit of more equitable economic development and what is here termed “economic hazard mitigation” in the planning of mega-events in cities with histories of inequitable urban development.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing (Up)Zoning for Affordability: A Seattle Case Study</title>
<link href="https://hdl.handle.net/1721.1/152494" rel="alternate"/>
<author>
<name>Cameron, Nicholette Paige</name>
</author>
<id>https://hdl.handle.net/1721.1/152494</id>
<updated>2023-10-19T03:43:18Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Implementing (Up)Zoning for Affordability: A Seattle Case Study
Cameron, Nicholette Paige
Today, U.S. cities are continuing to grapple with housing shortages and the affordability crisis. In the United States, as of 2020, 30% of households were cost-burdened and 14% were severely cost-burdened, paying more than 30% and 50% of their incomes on housing, respectively. One way in which cities are attempting to manage growth and affordability is with zoning changes. Cities can encourage new development and increase affordable housing options by loosening restrictions that allow for more density and tying affordability requirements to that new development capacity. This is also known as inclusionary upzoning.&#13;
&#13;
This thesis documents the case study of Seattle’s inclusionary upzoning policy, providing one example of how cities are using zoning reform as a tool to address the affordability crisis. The case is presented in two components: Policy and Practice. The Policy section provides an overview of the policy from ideation to implementation. It first describes the steps taken to implement both the upzone and the Mandatory Housing Affordability (MHA) policy, including the buy-in needed from various stakeholders. Second, it outlines how Seattle’s upzone and MHA changed existing policy and whether those changes impacted all neighborhoods equally. Lastly, it summarizes what the policy has accomplished so far.&#13;
&#13;
The Practice section provides one example of how a developer has responded to the upzone. I chose this developer because they are utilizing a unique, community-based model instead of the traditional purchase-to-redevelop business model, which allowed me to explore how the developer is supporting current residents and the community, and what challenges the developer and the community are experiencing as they navigate the upzone and MHA policy.&#13;
&#13;
The thesis concludes with a set of recommendations that Seattle and other municipalities should consider when implementing [up]zoning reform for affordability, including implementing upzones citywide and changing the perspective of the role of communities in the development process.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Active Vibration Suppression for Wafer Transfer Systems in Semiconductor Fabrication Plants</title>
<link href="https://hdl.handle.net/1721.1/152492" rel="alternate"/>
<author>
<name>Qiu, Jiajie</name>
</author>
<id>https://hdl.handle.net/1721.1/152492</id>
<updated>2023-10-19T03:28:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Active Vibration Suppression for Wafer Transfer Systems in Semiconductor Fabrication Plants
Qiu, Jiajie
Vibration suppression is critical in precision mechatronic systems for nanofabrication. In semiconductor plants, automated wafer handling is performed by Overhead Hoist Transport (OHT) vehicles that transport wafers in front opening unified pods (FOUPs). When the wafers are transported in FOUPs, semiconductor chips are at risk of damage from small particles excited by mechanical vibration, especially if such particles land on the critical area of the wafers. To minimize the vibration excitation force transferred to the FOUP, this thesis focuses on active suppression of FOUP vibrations to improve production yield. However, two primary challenges make this problem difficult. First, the OHT vehicle and the FOUP keep traveling, so the target system is floating with no external anchoring point as a momentum source for control efforts. Second, no sensor attachment is permitted on mass-production FOUPs, which makes feedback control more challenging because the controlled variable cannot be measured directly. To address these challenges and achieve the goal of reducing FOUP acceleration peaks, an inertia-based counterbalancing system is developed. To validate this system, a customized testbed is built to replicate the acceleration profile of the OHT vehicle in both the travel and lateral axes. Additionally, an active vibration suppression system is designed to generate a controllable force on the hand unit. System modeling and identification are conducted using simulation and experiment to identify the system dynamics. Finally, a Disturbance Observer-Based Controller (DOBC) is developed and implemented on the hardware. The experimental results show that the DOBC achieves a 38 percent reduction of OHT hand unit vibration and a 42 percent reduction of FOUP vibration in the OHT travel direction. Furthermore, the proposed method successfully reduces the multi-axis FOUP-level acceleration peaks, further confirming its effectiveness.
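As a minimal sketch of the disturbance-observer idea, assuming a single-axis rigid-mass nominal model and a first-order low-pass Q-filter (the actual DOBC design, gains, and signals in the thesis may differ), the estimate-and-cancel loop can be written as:

# Minimal discrete-time disturbance observer for a 1-DOF mass
# (illustrative sketch; all parameters below are assumptions).
m_hat = 2.0              # nominal hand-unit mass, kg (assumed)
dt = 1e-3                # control period, s (assumed)
tau = 0.02               # Q-filter time constant, s (assumed)
alpha = dt / (tau + dt)  # first-order low-pass coefficient

d_hat = 0.0              # running disturbance estimate, N

def dob_step(u_prev, accel_meas):
    """Estimate the lumped disturbance as the force that explains the
    gap between the nominal model and the measured acceleration."""
    global d_hat
    d_raw = m_hat * accel_meas - u_prev   # from m*a = u + d
    d_hat = d_hat + alpha * (d_raw - d_hat)
    return d_hat

def control(u_nominal, u_prev, accel_meas):
    # Subtract the estimate so the net disturbance reaching the FOUP shrinks.
    return u_nominal - dob_step(u_prev, accel_meas)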
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Collaboration in Unlikely Spaces: The Characteristics and Promise of Successful Collaboration Among Affordable Housing and Environmental Conservation Proponents</title>
<link href="https://hdl.handle.net/1721.1/152491" rel="alternate"/>
<author>
<name>Fullem, Abby K.</name>
</author>
<id>https://hdl.handle.net/1721.1/152491</id>
<updated>2023-10-19T03:34:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Collaboration in Unlikely Spaces: The Characteristics and Promise of Successful Collaboration Among Affordable Housing and Environmental Conservation Proponents
Fullem, Abby K.
There is a decreasing amount of available land and a growing set of competing priorities for its use. Land value appreciation and the effects of climate change reduce the amount of viable land available at affordable prices. Sectors and stakeholders with contending interests in land parcels have a choice: they can contest the other, ignore the other and try to maximize their own interests, or collaborate to maximize both of their interests on that land.&#13;
&#13;
Two sectors that face this choice are affordable housing developer non-profits and conservation land trust non-profits. Both are land-based, in need of inexpensive land, and struggling to achieve their missions alone. Collaboration, I suggest, is the preferred route for these sectors to take in the face of increasing competition, as it allows each sector to simultaneously advance their own interests by leveraging the other sector’s strategies and tools, and form a more powerful political coalition to further their shared interests.&#13;
&#13;
I describe and analyze an action research case study I conducted on a cross-sectoral collaboration in the Hudson Valley of New York State. The Hudson Valley Affordable Housing and Conservation Strategy (HVAHCS) comprises ten affordable housing and conservation land trust non-profits that are choosing to collaborate in the face of increasing competition. Through a review of consensus building, network building, and collective impact theories, as well as interviews and my experience as a member of the HVAHCS facilitation team, I look at what enables their cross-sectoral collaboration and how they approach obstacles to it. I conclude with recommendations for other groups considering collaboration as a means to advance their individual and shared interests in the same physical space.&#13;
&#13;
Learnings from this action research case study point to the importance of employing an interests-based approach, allowing ideas and priorities to emerge from the network of organizations, balancing capacity and diffused leadership within the collaborative, using a third-party facilitator, prioritizing relationship-building, building a shared understanding, and supporting the organizations within the collaborative.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Repetitive Flooding in Riverine Towns: Understanding Responses, Barriers, and Challenges for the Future</title>
<link href="https://hdl.handle.net/1721.1/152490" rel="alternate"/>
<author>
<name>Campbell, Shaler Rodney</name>
</author>
<id>https://hdl.handle.net/1721.1/152490</id>
<updated>2023-10-19T03:01:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Repetitive Flooding in Riverine Towns: Understanding Responses, Barriers, and Challenges for the Future
Campbell, Shaler Rodney
Climate change is predicted to increase the intensity of precipitation events and increase inland flooding in the United States in the coming decades (Allan et al., 2020; Easterling et al., 2017; Kerlin, 2019; Mallakpour &amp; Villarini, 2015). Unlike coastal communities, which have seen increased attention in the face of climate change, riverine communities have received far less attention (Jongman et al., 2012). This is despite a long history of repetitive riverine flooding and the associated responses and barriers to flood mitigation. Important insights can be drawn from towns that have endured repetitive flooding and how they have responded. This thesis explores riverine towns with repetitive flooding, the similarities and differences in their flood responses and barriers to mitigation, lessons that can be generalized to other riverine towns, and how policies may be improved to better support them. To answer these questions, results were compared from semi-structured interviews and historical research in four case study towns in the United States: Harrisburg, Pennsylvania; Freeport, Illinois; Ellicott City, Maryland; and Athens Borough, Pennsylvania. First, the results revealed several barriers to flood mitigation, including a lack of institutional capacity, challenges with regionalism, and insufficient federal flood mitigation assistance. Second, the results showed that mitigating risk across multiple flood profiles, managed retreat, and structural flood mitigation solutions are proving successful for some riverine towns as flooding events increase in severity. Lastly, the results showed that current federal programs do not fully support smaller riverine towns needing funding for flood mitigation, and that modifications to existing programs and new programs are necessary to support their unique circumstances. From a resource allocation perspective, this thesis highlights the need to devote more resources to riverine towns with repetitive flooding to help them mitigate the worst effects of flooding in the face of increasingly severe storm events due to climate change.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parameterizing transport maps for ensemble data assimilation</title>
<link href="https://hdl.handle.net/1721.1/152488" rel="alternate"/>
<author>
<name>Sharp, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/152488</id>
<updated>2023-10-19T03:43:28Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Parameterizing transport maps for ensemble data assimilation
Sharp, Daniel
This thesis discusses methods for Bayesian parameter estimation, particularly in the case of state space models (SSMs). We begin by reviewing established methods for filtering in SSMs, and by examining the graphical model structure of a parameterized SSM. Then we discuss established methods for estimating the parameters of such an SSM, making use of its graphical structure. Next we employ monotone triangular transport maps as a method of estimating conditional probability densities and performing conditional sampling, and relate these tasks to the original filtering problem. We provide some practical results and experiments for employing these maps for inference, particularly examining the map parameterization for this function approximation problem. Using these ingredients, we introduce and discuss an algorithm that uses transport to perform online inference of the static parameters of an SSM, and relate this algorithm to prior methods. Finally, we tie the problems of function approximation and static parameter inference together with numerical examples of transport for sequential inference. &#13;
&#13;
Most of the results in this thesis are powered by two software packages developed over the course of the thesis work: EnsembleFiltering.jl, written in Julia for performing automatically-differentiable ensemble-based filtering on the CPU and GPU; and MParT, written in C++ for evaluating and training monotone triangular transport maps.
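For orientation, the linear-Gaussian baseline that such transport-map filters generalize is the stochastic ensemble Kalman analysis step; a self-contained NumPy sketch (not the MParT or EnsembleFiltering.jl APIs) is:

import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic EnKF analysis. X: (d, N) state ensemble, y: (p,)
    observation, H: (p, d) linear observation operator, R: (p, p)
    observation-noise covariance."""
    N = X.shape[1]
    Xm = X - X.mean(axis=1, keepdims=True)
    HX = H @ X
    HXm = HX - HX.mean(axis=1, keepdims=True)
    Pxy = Xm @ HXm.T / (N - 1)        # state-observation covariance
    Pyy = HXm @ HXm.T / (N - 1) + R   # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)      # ensemble Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - HX)

Replacing this linear update with a learned monotone triangular map is what lets the transport approach handle non-Gaussian conditionals.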
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual Sort Marker Digitization in Sort Center Operations</title>
<link href="https://hdl.handle.net/1721.1/152487" rel="alternate"/>
<author>
<name>Arellano Martinez, Nayeli</name>
</author>
<id>https://hdl.handle.net/1721.1/152487</id>
<updated>2023-10-19T03:37:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Visual Sort Marker Digitization in Sort Center Operations
Arellano Martinez, Nayeli
Companies worldwide recognize the importance of sustainability and are looking for ways to incorporate sustainable practices into their operations. One route is reducing their carbon footprint, which can be done through various means, such as investing in renewable energy sources, implementing energy efficiency measures, and reducing the use of non-renewable resources. Another way companies are looking to become more sustainable is by being more responsible in their supply chain, for example by ensuring that their materials and products are ethically and sustainably sourced.&#13;
&#13;
With the growing awareness of the environmental impact of businesses, consumers are increasingly looking for companies that prioritize sustainability. Amazon, the world's largest online retailer, has announced its commitment to becoming more sustainable. This decision addresses the pressing issue of climate change and reduces the company's environmental footprint. By committing to sustainable practices, Amazon can attract and retain customers who value environmentally friendly products and services. In addition, Amazon's investment in sustainable practices can lead to cost savings in the long run.&#13;
&#13;
One of the many sustainable strategies Amazon is working on is the Small Shipping Label (SSL). This initiative aims to reduce the shipping label size and has a potential entitlement of ~$1 billion per year. Smaller labels facilitate the use of smaller shipping boxes, which ultimately reduces the overall amount of packaging materials required. This reduction in packaging materials, in turn, contributes to a decrease in the carbon footprint associated with transportation. Smaller boxes translate into optimized truck space, since more packages can be shipped in a single trip. As a result, the number of trucks or planes required for delivery is reduced, reducing associated fuel consumption and emissions. SSL implementation requires the removal of a physical Visual Sort Marker (VSM) from the package label. One of the critical manual processes in Middle-Mile operations (Sort Slide) currently relies on physical VSMs to inform sortation decision-making at the package level. Amazon is working towards removing physical VSMs while mitigating any risks to Throughput Per Hour (TPH) and Delivery Estimate Accuracy (DEA). Manual dependencies limit in-flight shipment replanning to handle events such as missorts, unpredictable weather conditions, truck breakdowns, etc. Eliminating reliance on physical VSMs will make it possible to decrease packaging waste by allowing items to ship in packages smaller than the current 4x6 shipping label, bringing savings on packaging and transportation costs and aligning with The Climate Pledge.&#13;
&#13;
This thesis looks into the operational challenges of implementing sustainable practices by assessing the trade-offs between sustainability and productivity. Its objective is to determine the effect of the proposed short-term solution for VSM removal on the Sort Center network, specifically on Sort Slide process capacity and utilization. The present analysis suggests that accepting a modestly degraded process rate may be a viable trade-off if it helps an organization achieve its sustainability goals and ensure the long-term viability of its financial growth.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impacts of Automated Buses on Travel Mode Preference for Different Income Groups and Density Areas</title>
<link href="https://hdl.handle.net/1721.1/152486" rel="alternate"/>
<author>
<name>Tang, Ziyi</name>
</author>
<id>https://hdl.handle.net/1721.1/152486</id>
<updated>2023-10-19T03:14:50Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Impacts of Automated Buses on Travel Mode Preference for&#13;
Different Income Groups and Density Areas
Tang, Ziyi
Interest in promoting sustainable transportation continues to rise, and as a result, adopting more equitable, environmentally sustainable mobility is necessary. In the next decades, automated vehicles (AVs) could transform the transport system. Depending on how AVs are adopted, the impacts may differ. The existing literature has demonstrated that using automated buses (ABs) as part of public transit systems shows greater potential for mobility equity and sustainability than single-occupancy AVs. Despite the growing use of automation in the public transit industry, less interest has been given to research on and development of fixed-route ABs than to on-demand AVs.&#13;
&#13;
To fill this gap, this research focuses on fixed-route ABs and evaluates their impacts on transportation equity and sustainability for different income groups and density areas. The study analyzes travel surveys and then simulates the impact of ABs on travel-to-work mode choices. In particular, the research explores the mode preferences of residents in the Metro Boston Area based on the Massachusetts Travel Survey of 2011 and builds mode choice models that incorporate income strata differences. This study introduces ABs as a new mode with lower travel time (via higher frequency services and denser bus networks). The models simulate changes in mode choices in scenarios that provide different AB services to different income and density groups. Finally, this research evaluates scenarios according to four qualities: effectiveness, equity, sustainability, and health.&#13;
&#13;
The results show that the impacts of AB services vary across income and density groups. Providing AB services to low- and middle-income groups living in high- and middle-density areas, and on-demand small automated shuttles in high-density areas, might be the most balanced solution in terms of the four qualities. Overall, this research will support planning and policy decision-making to ensure that emerging AV technology leads to the most equitable and sustainable outcomes.
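To make the mode choice simulation concrete, a multinomial logit is the standard form for models like those described above; the sketch below uses invented coefficients and level-of-service values, not estimates from the Massachusetts Travel Survey:

import numpy as np

def mode_shares(time_min, cost_usd, beta_time=-0.05, beta_cost=-0.3):
    """time_min, cost_usd: dicts mapping mode name to travel time (min)
    and cost (USD). Returns logit choice probabilities per mode."""
    modes = list(time_min)
    v = np.array([beta_time * time_min[m] + beta_cost * cost_usd[m]
                  for m in modes])
    p = np.exp(v - v.max())   # subtract max for numerical stability
    return dict(zip(modes, p / p.sum()))

# A lower AB travel time (via higher frequency) shifts share to transit:
print(mode_shares({"car": 30, "bus": 45}, {"car": 5.0, "bus": 1.7}))
print(mode_shares({"car": 30, "ab": 35}, {"car": 5.0, "ab": 1.7}))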
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing the Shared Mobility Market: Dissolving Market Segmentation and Understanding Market Friction</title>
<link href="https://hdl.handle.net/1721.1/152484" rel="alternate"/>
<author>
<name>Guo, Xiaotong</name>
</author>
<id>https://hdl.handle.net/1721.1/152484</id>
<updated>2023-10-19T03:17:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enhancing the Shared Mobility Market: Dissolving Market Segmentation and Understanding Market Friction
Guo, Xiaotong
Over the past decade, ride-sharing companies, also known as Transportation Network Companies (TNCs), which provide on-demand transportation services for passengers, have grown at one of the fastest rates of any industry worldwide. However, in the governance of the shared mobility market of a city or metropolitan area, two conflicting principles emerge: healthy competition between multiple platforms, such as Uber and Lyft in the United States, and economies of network scale, which lead to higher chances for trips to be matched and thus higher operational efficiency, but which also imply a monopoly. The current shared mobility markets observed in cities around the world are either monopolistic or largely segmented by multiple platforms, the latter with significant efficiency loss. &#13;
&#13;
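A toy calculation makes the network-scale point above concrete (an assumed setup, not the thesis simulator): splitting the same requests across two platforms lowers the fraction that find a nearby match.

import numpy as np

def match_rate(points, radius=1.0):
    """Fraction of requests with at least one other request within range."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return float(np.mean(radius >= d.min(axis=1)))

rng = np.random.default_rng(0)
reqs = rng.uniform(0, 10, size=(200, 2))   # requests in a 10x10 km area
pooled = match_rate(reqs)                  # one shared market
split = 0.5 * (match_rate(reqs[::2]) + match_rate(reqs[1::2]))  # duopoly
print(f"pooled: {pooled:.2f}, segmented: {split:.2f}")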
This thesis addresses the efficiency losses due to segmentation by proposing new market designs while preserving competition between platforms. We first propose a theoretical framework for describing shared mobility markets and then propose four market structure designs thereupon. The framework and the four designs are first discussed as an abstract model, without loss of generality, and thus are not constrained to any specific city. High-level perspectives and detailed mechanisms for each proposed market structure are both examined. Then, to assess the real-world performance of these market structure designs, we used a ride-sharing simulator with real-world ride-hailing trip data from New York City. The proposed market designs can reduce total vehicle-miles traveled (VMT) by 6% while serving more customers with 8.4% fewer total trips. In the meantime, customers receive better service, with an on-average 5.4% shorter waiting time. &#13;
&#13;
On the other hand, platform drivers in the shared mobility market frequently switch between or work for multiple platforms, providing a natural way of dissolving the market segmentation. However, significant market friction preventing platform drivers from multi-homing was found in a recent survey distributed in Jakarta, Indonesia. In this thesis, we taxonomize and estimate perceived switching and multi-homing frictions on mobility platforms. Based on a structural model of driver labor supply, we estimate switching and multi-homing costs in a shared mobility market with a transportation network company duopoly, using public data and limited high-level survey data. The estimated costs are sizeable, and reductions in multi-homing and switching costs significantly affect platform market shares and driver welfare. Driver labor supply elasticity with respect to platform wage is also discussed, considering both multi-homing and switching frictions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hong Kong Time: Rethinking sustainable mobility and the 15-minute city in the context of equity</title>
<link href="https://hdl.handle.net/1721.1/152483" rel="alternate"/>
<author>
<name>Wang, Elaine</name>
</author>
<id>https://hdl.handle.net/1721.1/152483</id>
<updated>2023-10-19T03:21:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Hong Kong Time: Rethinking sustainable mobility and the 15-minute city in the context of equity
Wang, Elaine
Cities around the world are going car-free. With concepts like the 15-minute city, planners and policymakers are investing in more sustainable transit modes: walking, biking, and public transit. Though this shift is critical to reducing emissions, it raises important equity issues that need to be explored. How does the move toward sustainable mobility impact equity? How might it address existing inequality or create new sources of inequality? And how can we ensure an equitable shift to sustainable mobility? This thesis explores these questions, using Hong Kong as a case study. Using spatial analysis, it introduces a Sustainable Mobility Score that quantifies access to urban amenities via sustainable transport modes, like walking and public transit. It then analyzes the relationship between this scoring system and neighborhood income levels. The results show that walkability is linked to spatial segregation, but public transit serves as an equalizer across different neighborhoods. Finally, this thesis discusses the implications of these findings to inform an equitable shift to sustainable mobility.
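A minimal sketch of such a score, assuming a thresholded amenity count with straight-line distances standing in for the thesis's actual network-based spatial analysis:

import numpy as np

WALK_SPEED_M_MIN = 80.0   # about 4.8 km/h (assumed)
THRESHOLD_MIN = 15.0      # the 15-minute cutoff

def score(origin, amenities):
    """origin: (x, y) in meters; amenities: (n, 2) array of points.
    Counts amenities reachable on foot within the time threshold."""
    d = np.linalg.norm(amenities - np.asarray(origin), axis=1)
    minutes = d / WALK_SPEED_M_MIN
    return int(np.sum(THRESHOLD_MIN >= minutes))

A real implementation would use network travel times for walking and transit and could weight amenity categories, but the thresholding logic is the same.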
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Housing Supply under Stringent Energy-efficiency Regulations</title>
<link href="https://hdl.handle.net/1721.1/152482" rel="alternate"/>
<author>
<name>Muzio, Maria Jimena</name>
</author>
<id>https://hdl.handle.net/1721.1/152482</id>
<updated>2023-10-19T03:27:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Understanding Housing Supply under Stringent Energy-efficiency Regulations
Muzio, Maria Jimena
Massachusetts's commitment to a 50% emissions reduction by 2030 and net-zero emissions by 2050 is reflected in the Green Communities Act of 2008, which requires the adoption of the Stretch Energy Code by every municipality designated as a Green Community. This appendix to the base building code adds more stringent energy-efficiency requirements, such as requiring a HERS Index rating for every new residential construction. Despite their obvious environmental benefits, more stringent energy-efficiency building regulations can also lead to increased construction costs and negatively impact housing production and affordability. In this study, I investigate the tension in housing supply resulting from the adoption of the Stretch Energy Code by analyzing municipalities' staggered designation as Green Communities to identify the causal mechanisms behind quantity and price effects in the residential real estate market. The results indicate that more energy-efficient properties command a positive sales price premium and that Stretch Code adoption is associated with a decrease in housing quantity and an increase in average housing prices.
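The staggered designation naturally suggests a two-way fixed-effects estimator; a minimal sketch under assumed column names and input file (the thesis's actual specification may differ):

import pandas as pd
import statsmodels.formula.api as smf

# df columns (hypothetical): muni, year, log_price, adopted
# (adopted = 1 from the year a municipality becomes a Green Community).
df = pd.read_csv("sales_panel.csv")   # hypothetical input file
model = smf.ols("log_price ~ adopted + C(muni) + C(year)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["muni"]})
print(result.params["adopted"])   # average effect of adoption on log prices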
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Black Art Planning: Exhibition Manifesto</title>
<link href="https://hdl.handle.net/1721.1/152481" rel="alternate"/>
<author>
<name>Saint Hilaire, Romy</name>
</author>
<id>https://hdl.handle.net/1721.1/152481</id>
<updated>2023-10-19T03:33:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Black Art Planning: Exhibition Manifesto
Saint Hilaire, Romy
Black Art Planning: Exhibition Manifesto honors the many modes and forms of knowledge that inform Black artists acting as informal planners, designers, and urbanists working to harmonize spatial urban realities for marginalized communities. This is a focused introspection of Black liminal realities and of how art is used as a tool to challenge, redress, and inform the healing of vulnerable communities in the United States. The thesis takes the form of an exhibit showcasing a series of manifesto posters highlighting the key elements of a Black Art Planning framework, accompanied by a short film capturing the essence of what has informed this thinking through travel and research in Saint Martin and South Africa. This thesis intends to combine an academic and practice-informed approach to synthesize the phenomena of Black artists and creative collectives cultivating planning solutions through an arts practice in cities across the US and abroad. In highlighting an approach that is intersectional in both the planning field and the art sector, Black Art Planning is positioned in conjunction with curatorial critique, black critical thought, and city planning pedagogies that inform possibilities for thriving communities through the arts. Essentially, it explores the question: who has the right to art in the city?
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Power Play: An Historiographic about Women and Urban Renewal</title>
<link href="https://hdl.handle.net/1721.1/152480" rel="alternate"/>
<author>
<name>Berendschot, Octavie Eleonor</name>
</author>
<id>https://hdl.handle.net/1721.1/152480</id>
<updated>2023-10-19T03:32:47Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Power Play : An Historiographic about Women and Urban Renewal
Berendschot, Octavie Eleonor
This research takes the form of a nonfiction graphic novel that resurfaces the lived experiences of womxn¹ in New York City during mid-twentieth century urban renewal projects. Pathologizing immigrant and nonwhite communities, city officials approved the wholesale demolition of the vibrant neighborhood of downtown Brooklyn by issuing reports and approving masterplans for public housing. This group of exclusively white men intentionally made these documents opaque as a way to suppress protests and push their political agenda forward. The ongoing preservation of these records as part of the city’s archives ensures that the production of history about urban renewal is constrained by governmental archival practices, which bias histories towards formal participants in exclusionary processes. In contrast, this project seeks to amplify the voices of womxn who lived, worked, and passed through these neighborhoods by both leveraging and questioning these archival sources as fragmented evidence of urban histories. This graphic novel explores techniques of representing authorial positionality, especially as it relates to the production of history. To fill the narrative gaps, the creative nonfiction story attempts to humanize neighborhood destruction; it also calls attention to the continuation of oppression and how these histories manifest in the present.&#13;
&#13;
¹Womxn is an intersectional term used to signal the inclusion of those who have traditionally been excluded from white feminist discourse.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Crossroads: Exploring how micro organizations that leverage design shape urbanism practice</title>
<link href="https://hdl.handle.net/1721.1/152479" rel="alternate"/>
<author>
<name>Isidor, Melissa</name>
</author>
<id>https://hdl.handle.net/1721.1/152479</id>
<updated>2023-10-19T04:01:12Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Crossroads: Exploring how micro organizations that leverage design shape urbanism practice
Isidor, Melissa
Crossroads is an exploration of the role that micro organizations (1-10 people) that leverage design play within the greater urbanism field. At large, this research serves to build synergies between creative practitioners within or adjacent to the urbanism field, while providing insights and resources from both a philosophical and an operational perspective. The research aims to think expansively about what design means, mainly conceptualizing design as a way of thinking and a process. Using a case study approach, my investigation brings together the voices of six micro organizations based in the United States—including BlackSpace Urbanist Collective, JIMA Studio, Broad Community Connections, Design Studio for Social Intervention, Civic Studio, and Hector Design. Each conversation dives into the nuances of each organization’s foundations, process, and vision for the future. In understanding each group’s internal organizational practices, we begin to uncover the possibilities and challenges of practicing at this scale. Overall, the findings lead me to believe that such organizations serve as the instigators and experimenters within the greater urbanism ecosystem.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Participatory Zoning: Collectivity, contradictions, and the politics of inclusion in neighborhood planning</title>
<link href="https://hdl.handle.net/1721.1/152478" rel="alternate"/>
<author>
<name>Lee, Gina Hanhee</name>
</author>
<id>https://hdl.handle.net/1721.1/152478</id>
<updated>2023-10-19T03:07:24Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Participatory Zoning: Collectivity, contradictions, and the politics of inclusion in neighborhood planning
Lee, Gina Hanhee
The strength of individual property rights and the sovereignty those rights secure mean that even a collective of neighborhood residents successfully engaging with public institutions is highly unlikely to achieve outcomes that undermine private interests in property and profit. Yet revised attempts at participation continue to be made, and the participatory planning paradigm continues to be entrenched. This thesis aims to show that, despite the limited—maybe even predetermined—outcomes of resident participation in land use decision-making, their engagement in such processes generates alternative ways of using land use regulation as a tool for spatializing collective survival and even sovereignty. It offers critiques of current participatory planning processes but also reveals incentives for continued resident participation in municipal neighborhood planning decisions.&#13;
&#13;
Through a case study of participatory land use and zoning decisions over five decades in Two Bridges, this thesis analyzes the formations through which residents engage in public processes and assesses how property relations and transformations in broader planning contexts structure those engagements. It traces a genealogy of processes and outcomes on one particular site to evaluate the emergences and institutionalizations of participatory formations and the adaptations in representation and modes of participation by shifting local collectivities.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of deep learning to land cover classification: practical issues and strategies</title>
<link href="https://hdl.handle.net/1721.1/152476" rel="alternate"/>
<author>
<name>Fang, Ruoming</name>
</author>
<id>https://hdl.handle.net/1721.1/152476</id>
<updated>2023-10-19T03:48:34Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Application of deep learning to land cover classification: practical issues and strategies
Fang, Ruoming
Land Use and Land Cover (LULC) change is a process of essential importance to urban studies and planning. Large-scale databases provide comprehensive records, but in many circumstances, they need to be supplemented or substituted by alternative data sources. The advent of deep learning provides an efficient, low-cost data generation method in which a trained deep neural network (DNN) segments satellite images to classify land cover; in recent years, multiple models have been proposed and tested on satellite imagery. This study takes a practically oriented approach, in which we train a classic convolutional neural network (CNN) model on a novel labeled image dataset, then use the model to segment Sentinel-2 satellite images and classify the land cover of Massachusetts in 2019. While the model performs very well in classifying land covers at a broad level, the discrepancies between model predictions and reference data increase in distinguishing more nuanced land features due to many localized factors. In addition, model training and classification are highly sensitive to several issues specific to remote sensing data, such as defects in images and distribution shifts. We devise multiple empirical strategies to address these issues, including a progressive technique to select high-quality data samples from the imperfect dataset and the selection of normalization parameters to reduce the impact of covariate shifts. We contend that good models alone are insufficient to drive successful LULC mapping on remote sensing imagery; sound data engineering also plays a crucial role. Lastly, we explore potential improvements in the field that can benefit future applications.
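One of those data engineering strategies, robust per-band normalization, can be sketched as follows (the band layout and percentile choices are assumptions for illustration, not the study's exact settings):

import numpy as np

def robust_normalize(img, lo_pct=2.0, hi_pct=98.0):
    """img: (H, W, bands) Sentinel-2 reflectance array (assumed layout).
    Percentile clipping keeps a few extreme pixels, or scene-to-scene
    covariate shifts, from dominating the normalization."""
    out = np.empty_like(img, dtype=np.float32)
    for b in range(img.shape[-1]):
        lo, hi = np.percentile(img[..., b], [lo_pct, hi_pct])
        band = np.clip(img[..., b], lo, hi)
        out[..., b] = (band - lo) / max(hi - lo, 1e-6)
    return out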
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hacer la vida en Ciudad Verde: Bringing Participatory Action Research to Colombia’s Affordable Housing macro-projects</title>
<link href="https://hdl.handle.net/1721.1/152475" rel="alternate"/>
<author>
<name>Pérez Carrillo, Ana María</name>
</author>
<id>https://hdl.handle.net/1721.1/152475</id>
<updated>2023-10-19T03:45:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Hacer la vida en Ciudad Verde: Bringing Participatory Action Research to Colombia’s Affordable Housing macro-projects
Pérez Carrillo, Ana María
Housing macro-projects have been central to Colombia’s urbanization tradition over the past century. A new set of laws put forth in the 2010s and a particular economic chapter in the country’s history have brought, through public and private cooperation, a new wave of affordable housing macro-projects over the past decade that are characterized by their immensity. Vastness and complexity are fertile ground for an anonymity that can feed loneliness and disconnection. In this urban-suburban immensity, how do the stories, experiences, and voices of residents get heard as they try to contribute to a heated national debate about the future of the housing and urbanization policies that made Ciudad Verde possible? Academic research and urban planning can bring these voices—containing the joys, pains, hopes, and fears of residents—to the center of the national conversation. Bottom-up, participatory, and action-focused research processes, as attentive to time as they are to space, can help us understand this multiplying new urban form, so big in scale it threatens to overwhelm.&#13;
&#13;
Located on the outskirts of Bogotá, Ciudad Verde, an affordable housing complex that houses over 51,000 households, exemplifies the complexity of these macro-projects in terms of the possibilities that they bring to new residents and the challenges that come with this large-scale, fast-paced urbanization. Through developing a Participatory Action Research structure and framework, the Resident Researcher Group of Ciudad Verde collected qualitative data on the experiences of habitation, coexistence, community, belonging, and governance that take place for residents of Ciudad Verde.&#13;
&#13;
Through the implementation of a photo-voice process and a civic conversation design process led by 10 resident researchers of Ciudad Verde, our outputs include audiovisual elements that further capture and elevate residents' voices and perspectives. Our hope is that these stories and testimonies will inform decision making for the future of Ciudad Verde, future affordable housing macro-projects in Colombia and the overall Housing Policy scheme that made these projects possible in the first place.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Imagining and building more equitable and democratic systems: lessons from Bay Area organizations</title>
<link href="https://hdl.handle.net/1721.1/152474" rel="alternate"/>
<author>
<name>Mohtadi, Tara</name>
</author>
<id>https://hdl.handle.net/1721.1/152474</id>
<updated>2023-10-19T03:05:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Imagining and building more equitable and democratic systems: lessons from Bay Area organizations
Mohtadi, Tara
America’s democratic system has been built atop politics of exclusion and oppression. While strides have been made in enfranchisement and inclusion, communities continue to be systematically marginalized, dispossessed and disempowered. Processes illuminate the often invisible purpose and values that underlie systems, but as this research discusses, an overemphasis on process as the problem and solution has limited the potential to create substantive change.&#13;
&#13;
To build a true democracy requires both imagining and building alternative political and economic systems that rest on the premise of equity and collective power. Social movements are at the forefront of transforming oppressive systems, and marginalized communities in particular are often on the frontlines of the struggle for justice. Collective and cooperative organizations have emerged within and alongside movements as explicit infrastructures that both embody and support social change. They form to respond to unjust material conditions in their communities related to land, labor, wealth and housing, while simultaneously being embedded in sustained movements, coalition building and policy advocacy efforts to address the root cause of these injustices.&#13;
&#13;
Through numerous conversations with organizations located in the San Francisco Bay Area, this research highlights how systems that foster shared power are not only imaginable, but are being built. In sharing learnings from these organizations, this research tells the story of their challenges and visions, their various approaches to enacting change, and how they are linked to broader networks of mobilization. As microcosms of a truer democracy, collectives and cooperatives have implications for reshaping the relationship between people and power, at the individual, organizational, and societal level. Ultimately, this thesis presents these models as a pathway for transitioning from an extractive to a regenerative economy, and from concentrated to collective power.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Participatory Photo-Mapping (PPM) framework to observe and reflect on the transformation of public space: the case of the Paseo España Environmental Corridor in Bucaramanga, Colombia</title>
<link href="https://hdl.handle.net/1721.1/152473" rel="alternate"/>
<author>
<name>Castillo Castillo, Maria Daniela</name>
</author>
<id>https://hdl.handle.net/1721.1/152473</id>
<updated>2023-10-19T03:41:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Participatory Photo-Mapping (PPM) framework to observe and reflect on the transformation of public space: the case of the Paseo España Environmental Corridor in Bucaramanga, Colombia
Castillo Castillo, Maria Daniela
Public participation in urban planning processes is key to ensuring projects can be successful, can address community needs, and can be sustainable into the future. The current sociodemographic and political circumstances in Colombia, namely the opportunities opened by the peace agreement between the government and guerrilla groups and the advances in regulations that help ensure public participation in political processes across the country, have contributed to an increasing need for community engagement processes, specifically in urban centers, that support urban planning decision-making while fostering community development and relationship building. In 2021, Bucaramanga, a capital city of 500,000 inhabitants, developed a Walkable City Plan and an accompanying Revitalization of Public Spaces Plan. These aim to advance a vision of creating lively public spaces that enable connectivity, sustainable mobility, and ultimately improve citizens’ quality of life. As part of these Plans, Bucaramanga aimed to complete 400 projects by December 2022 for the city’s 400th anniversary. The projects were chosen by the architecture firm TABUU, with their technical and social teams working together on prioritizing the most impactful possible interventions. While there are certain requirements for social engagement these interventions must comply with, there is room to strengthen these strategies by creating more timely, open, and transparent processes, by ensuring project assessment and oversight during and after the infrastructural intervention, and by leveraging existing digital tools to democratize information and data. Thus, this thesis reviews the academic literature, official documents, and relevant precedents that can help guide better practices in community engagement processes in Bucaramanga, and explores the opportunity of utilizing a participatory photo-mapping framework to enable spaces to collaborate, exchange knowledge, and develop relevant skills in community planning to continue increasing participation moving forward.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Welcome to Cambodia Town</title>
<link href="https://hdl.handle.net/1721.1/152472" rel="alternate"/>
<author>
<name>Goh, Jonathan Pei-Ying</name>
</author>
<id>https://hdl.handle.net/1721.1/152472</id>
<updated>2023-10-19T03:47:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Welcome to Cambodia Town
Goh, Jonathan Pei-Ying
Cambodian American communities are at an inflection point as the generation that arrived in the U.S. as refugees starts to retire, and the younger generation often has aspirations other than carrying on their parents’ businesses. Cambodia Town—the largest conglomeration of Cambodians in the U.S.—embodies these changes in the form of population decline and small businesses closing down. However, a new wave of Cambodian American digital creators who seek to use storytelling and design to represent and shape Khmer culture has also emerged out of this transition.&#13;
&#13;
I undertake a product design and development process that uncovers the needs of Cambodian American small businesses, and digital creators related to digital engagement, and develop a prototype of a mobile application to support them. I conduct exploratory data analysis of small businesses in Cambodia Town, and in-depth interviews with target users of the mobile app, which I translate into the prototype design.&#13;
&#13;
The heart of this work asks how we might imagine a platform that threads together digital and physical worlds for a geographically fragmented group of people, and what the implications of such an endeavor are for placemaking.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How Cars Took Over America</title>
<link href="https://hdl.handle.net/1721.1/152471" rel="alternate"/>
<author>
<name>Strauss, Ilana</name>
</author>
<id>https://hdl.handle.net/1721.1/152471</id>
<updated>2023-10-19T03:53:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">How Cars Took Over America
Strauss, Ilana
America has a love affair with the automobile, as the saying goes. The average American household has 1.88 cars. Songs like “Life is a Highway,” “On the Road Again,” and countless others celebrate the car. Americans have accepted endless sprawl, hours stuck in traffic, car crashes, and lung disease because they loved cars from the beginning. Or did they? I will dig into the history of how car-centrism took over the country to explore an alternative theory: what if Americans didn’t choose cars out of love, or even at all? What if a car-centric country was largely forced on Americans, and a narrative of “love” spun after the fact?
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Balancing Accessibility &amp; Affordability in Indonesian Transit-Oriented Development Projects, Case Study: TOD Tanah Abang, Indonesia</title>
<link href="https://hdl.handle.net/1721.1/152470" rel="alternate"/>
<author>
<name>Pratama, Daniel Caesar</name>
</author>
<id>https://hdl.handle.net/1721.1/152470</id>
<updated>2023-10-19T03:44:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Balancing Accessibility &amp; Affordability in Indonesian Transit-Oriented Development Projects, Case Study: TOD Tanah Abang, Indonesia
Pratama, Daniel Caesar
The Transit-Oriented Development (TOD) concept has been hailed for successfully increasing public transit ridership and improving residents’ accessibility. Its approach involves capturing the increase in property values by redeveloping areas surrounding transit stations to fund public transit investment. However, when proposed TOD neighborhoods are already densely populated and home to low-income residents, development-based value capture mechanisms can worsen the housing affordability crisis and increase the risk of gentrification and displacement for existing residents.&#13;
&#13;
This thesis examines the 'Tanah Abang TOD Urban Design Guideline (UDGL),' prepared for a newly proposed TOD area in Jakarta by PT MITJ, a joint venture of Jakarta’s Commuter Line and Mass Rapid Transit companies. PT MITJ is appointed as the TOD operator responsible for regulating land-use changes and leading the development process. The Tanah Abang TOD UDGL thus presents an example of how an urban design proposal is used as a mechanism of urban regeneration. By evaluating the proposal's impact on accessibility and affordability compared to the existing state, this thesis aims to provide a framework for anticipatory planning measures that balance potential gains and losses for communities in Indonesian TOD projects.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Determinants and Interventions for Physical Activity Adherence During COVID-19: A Global Study Using Machine Learning Approach</title>
<link href="https://hdl.handle.net/1721.1/152469" rel="alternate"/>
<author>
<name>Chai, Yuchen</name>
</author>
<id>https://hdl.handle.net/1721.1/152469</id>
<updated>2023-10-19T03:43:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Determinants and Interventions for Physical Activity Adherence During COVID-19: A Global Study Using Machine Learning Approach
Chai, Yuchen
Physical activity (PA) is crucial for maintaining both physical and mental health in urban and regional settings. However, public health hazards, such as pandemics, extreme temperatures, and air pollution, pose challenges for PA adherence due to voluntary or mandatory self-protection measures and the closure of exercise facilities in cities. Existing research on urban health resilience during crises primarily depends on small-scale exercise surveys and fails to consider the multifaceted determinants of exercise, including personal habits, social networks, and local policy or built environments. In this project, I use COVID-19 as a case study to systematically investigate the drivers of unequal PA adherence and identify opportunities for timely personalized interventions. First, I collect the universe of exercise records for 30 million individuals across more than 200 countries from Strava. Then, I develop advanced neural network methods to automate the identification of PA adherence prior to and during the pandemic based on personal exercise habits and social network interactions, achieving accuracy rates of 89.9% and 82.1% respectively. Lastly, I integrate an explainable neural network approach with econometric analysis to reveal the impact of city-level policies, socio-demographics, and built environment factors on PA inequality. My findings suggest that regions worldwide experienced significant PA shocks at the onset of the pandemic, particularly during lockdown periods, followed by a positive rebound in the long term. Males and urbanites in less developed regions tend to experience more negative PA shocks during COVID-19, likely moderated by exercise preferences and the availability of outdoor sports amenities. Social connectivity also plays a vital role in promoting PA adherence during crises. This study advances the field by combining large-scale digital data with machine learning to provide timely prediction of PA adherence and map its complex determinants. My thesis thus provides direct evidence-based support for multi-layered PA interventions from personal nudges, social networks, and city planning perspectives during public health crises.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Re-Stitching the Fabric: Urban Highway Removal as an Opportunity for Equitable, Sustainable Transformation</title>
<link href="https://hdl.handle.net/1721.1/152468" rel="alternate"/>
<author>
<name>Boccon-Gibod, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/152468</id>
<updated>2023-10-19T03:59:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Re-Stitching the Fabric: Urban Highway Removal as anOpportunity for Equitable, Sustainable Transformation
Boccon-Gibod, Alexander
The detrimental effects of a century of highway construction and use in U.S. cities are clear. From polluting the air and contributing to climate change to encouraging urban sprawl and entrenching racial and economic injustice in the built environment, urban highways urgently need reimagining as we aim to build a more just and sustainable society. As a result, cities across the country have slowly begun to remove their highways and undo past harms by reclaiming public space, promoting sustainable modes of transportation, and redeveloping newly available land. While past removal projects have undoubtedly improved their urban public realms, they have often missed opportunities to encourage sustainable mode shift and resist community displacement. Given recent calls for highway removal by communities, local leaders, and the federal government, now is the time to ensure the benefits of these projects are shared by all.&#13;
&#13;
This thesis aims to outline a justice-oriented framework which can encourage more holistic highway removal processes. It first uses a case study approach to evaluate past projects through the lenses of sustainable mobility, public realm, and anti-displacement. Through analyses of the removal of part of the Central Freeway in San Francisco, CA and the Cypress Freeway in Oakland, CA, it identifies best practices to adopt and failures to avoid. It then specifies a set of analytical and procedural dimensions necessary for ensuring more equitable and sustainable outcomes. Finally, this framework is illustrated and tested using a proposed highway removal project: the rest of San Francisco’s Central Freeway.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Strengthening Consumer and Retailer Responsibility for Textile Reuse and Donation in Cambridge and Boston</title>
<link href="https://hdl.handle.net/1721.1/152467" rel="alternate"/>
<author>
<name>Lohmar, Sarah</name>
</author>
<id>https://hdl.handle.net/1721.1/152467</id>
<updated>2023-10-19T03:05:24Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Strengthening Consumer and Retailer Responsibility for TextileReuse and Donation in Cambridge and Boston
Lohmar, Sarah
According to the Massachusetts Department of Environmental Protection (Mass DEP), Massachusetts residents dispose of approximately 230,000 tons of textiles each year. In Boston and Cambridge, almost all household trash is incinerated due to at-capacity landfills. This presents a critical need to divert textile waste towards secondary uses, avoiding the release of greenhouse gases and toxins from the incinerated clothes. Following the 2022 Mass DEP ban on disposing of mattresses and textiles in municipal trash, there has been an increased emphasis on textile recycling in the two cities. However, existing strategies for textile reuse focus on the actions of individuals and municipalities, which is greatly at odds with the global scale of textile waste generation.&#13;
&#13;
Through data collection, stakeholder interviews, and policy analysis, this work examines the relations and roles of the existing textile landscape’s donation, collection, and resale actors spanning both public and private sectors. Drawing from this investigation, I propose a bundle of recommendations to improve the textile recovery space in three key categories: responsibility and stewardship, educational messaging and outreach, and potential policy actions. This work concludes that, to effectively address and reduce textile waste, clothing manufacturers and retailers must take greater responsibility for end-of-life disposal of textiles. At the same time, individual consumers, residents, and cities must be mindful of consumption, continue to participate in existing textile recovery programming, and advocate for longer-term change in material waste culture.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Bus Operations Using High-Resolution Vehicle Location Data</title>
<link href="https://hdl.handle.net/1721.1/152466" rel="alternate"/>
<author>
<name>Huang, Yuzhu</name>
</author>
<id>https://hdl.handle.net/1721.1/152466</id>
<updated>2023-10-19T03:50:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Understanding Bus Operations Using High-Resolution Vehicle Location Data
Huang, Yuzhu
High-resolution location data (heartbeat data) of transit fleet vehicles is a newfound data source for many transit agencies. On its surface, the heartbeat data can provide a wealth of information about all operational details of a recorded transit vehicle trip, from its location trajectory to its speed and acceleration profiles. In reality, the heartbeat data is often noisy and recorded at inconsistent frequencies, making it a challenging task for analysts to interpret the data as is. This thesis delves into the task of extracting useful operational information about bus vehicles from heartbeat data. In particular, the thesis focuses on three aspects of how heartbeat data can be used to enable operational analysis of transit routes. &#13;
&#13;
First, a methodology is proposed to convert the raw, timestamped coordinate data into a continuous and smooth vehicle trajectory function of each bus trip. A case study using historical heartbeat data collected from a real-world bus trip is presented to showcase how a complete trajectory combined with the vehicle speed profile could allow for qualitative assessment of bus operations. Then, details are provided on how one can analyze the trajectories of multiple bus trips in aggregate to quantify the different types of delay encountered by bus vehicles, including stop dwell time, signal delay, crossing delay, and congestion delay. Case studies are presented to demonstrate how one can quantify each type of delay for a specific bus route or corridor served by multiple routes. Lastly, a thorough discussion is carried out about how one can conduct observational before-after studies using heartbeat data to draw conclusions about the effectiveness of transit improvement projects. A case study is provided to illustrate how one can evaluate the effectiveness of a stretch of bus-only lane by calculating the travel time savings due to the project. &#13;
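As a rough illustration of the first step (a simplified stand-in, not the thesis's actual methodology), noisy, irregularly timestamped distance-along-route samples can be resampled into a smooth, non-decreasing trajectory; the grid step and window size below are arbitrary choices:

import numpy as np

def smooth_trajectory(t, s, grid_step=1.0, window=5):
    # t: timestamps in seconds; s: noisy distance along the route in meters.
    order = np.argsort(t)
    t, s = np.asarray(t, float)[order], np.asarray(s, float)[order]
    s = np.maximum.accumulate(s)              # a bus does not move backward along its route
    grid = np.arange(t[0], t[-1], grid_step)  # resample to a consistent frequency
    s_grid = np.interp(grid, t, s)
    kernel = np.ones(window) / window
    return grid, np.convolve(s_grid, kernel, mode="same")  # moving-average smoothing

A speed profile then follows by differencing the smoothed trajectory, e.g. v = np.gradient(s_smooth, grid).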
&#13;
The technical discussions presented in this thesis provide a solid foundation for conducting in-depth analysis of bus operations using heartbeat data. The methodologies will allow transit analysts to gain better insight into the performance of transit routes and corridors, thus allowing transit agencies to develop more targeted strategies for continuously improving transit services.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Splitting rides in transit deserts: Ride-splitting dynamics in Chicago before, during and after the pandemic</title>
<link href="https://hdl.handle.net/1721.1/152465" rel="alternate"/>
<author>
<name>Charitatos, Paris</name>
</author>
<id>https://hdl.handle.net/1721.1/152465</id>
<updated>2023-10-19T03:01:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Splitting rides in transit deserts: Ride-splitting dynamics in Chicago before, during and after the pandemic
Charitatos, Paris
Transportation Network Companies (TNCs) might constitute a solution for transit-dependent populations who live in areas with limited or even non-existent public transit service, also known as “transit deserts”. Ride-splitting was introduced by TNCs as an affordable on-demand mobility option which offers door-to-door service while sharing a trip with another passenger. Due to its affordability, ride-splitting can further increase the accessibility of low-income and disadvantaged populations. Few studies have focused explicitly on the role of ride-splitting in underserved communities. We studied whether ride-splitting services compensate for the lack of transit in transit deserts. We leveraged the suspension of ride-splitting services during the COVID-19 pandemic to examine how ride-splitting user behavior changed throughout three time periods: (1) pre-pandemic, (2) during the pandemic, and (3) post-pandemic. By doing so, we study whether ride-splitting users switched to single mode during COVID-19 and whether ride-splitting levels have recovered in the post-pandemic era. For our analysis we used TNC trip records, provided by the city of Chicago, transit data from four different transit authorities, as well as demographic and job density data. We identified transit deserts by calculating a transit supply score for every census tract during five time periods: (1) weekday daytime hours; (2) weekday overnight hours; (3) weekday peak hours; (4) weekend daytime hours and (5) weekend overnight hours. We developed cluster and bivariate maps along with spatial regression models to determine the correlation between ride-splitting pickups/drop-offs, transit supply and neighborhood characteristics along these five temporal periods. Results revealed that in Chicago low transit supply is not significantly correlated with disadvantaged communities, suggesting that transit deserts can occur regardless of the racial and income composition and spatial sorting of an area. Pooled pickups/drop-offs were negatively correlated with transit route density, transit stop density and proximity to rail stations, which means that ride-splitting supplements the role of transit in transit deserts. We found that communities of color and transit-dependent populations had a moderate positive influence on ride-splitting. There is little evidence that ride-splitting users switched to single mode during COVID-19, but overall single trips were relatively higher than before the pandemic.
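One minimal way to sketch such a composite score (purely illustrative; the thesis's actual score construction may weight inputs differently) is to sum z-scored stop density and service frequency per census tract:

import numpy as np

def transit_supply_score(stops_per_km2, trips_per_hour):
    # Hypothetical composite: standardized stop density plus standardized
    # service frequency, computed across all tracts for one time period.
    def z(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean()) / np.maximum(x.std(), 1e-9)
    return z(stops_per_km2) + z(trips_per_hour)

scores = transit_supply_score([12.0, 3.0, 0.5], [40.0, 8.0, 1.0])
# Tracts in the bottom quantile of scores for a given period could then be
# flagged as transit deserts for that period.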
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Civic Atlas: Open Government, Civic Tech, and Making Zoning Case Data More Accessible</title>
<link href="https://hdl.handle.net/1721.1/152464" rel="alternate"/>
<author>
<name>Devine, John</name>
</author>
<id>https://hdl.handle.net/1721.1/152464</id>
<updated>2023-10-19T03:25:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Civic Atlas: Open Government, Civic Tech, and Making Zoning Case Data More Accessible
Devine, John
America’s founders believed that access to government records was essential for democracy. This belief was shared by the Obama Administration, which issued orders across the government to make documents more transparent. Even though these efforts focused on digitization, many documents are not easy to search or obtain. For example, zoning board meeting agendas typically only exist as PDF documents, making them hard to search by locations and topics of interest. This thesis seeks to understand the accessibility of zoning documents and how feasible it would be to develop an application called “Civic Atlas” that uses web scraping to reformat zoning board meeting agendas into interactive maps and visualizations. To identify the need for this application, the thesis uses an analysis of zoning cases in sixty cities across the United States to determine whether current practices meet the goals of Open Government initiatives. It then evaluates how feasible it is to use automation to extract data from these documents. This analysis revealed three typologies of zoning documents that we use to describe zoning record systems and assess what specific features make them more or less accessible to the general public. The results show that in most American cities, zoning documents are hard to access digitally and that government officials would like products to make them more accessible. However, while improved accessibility is an interest of government officials, they see many barriers to achieving that goal, including significant limitations in staff time and resources.
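A loose sketch of the scraping-and-structuring idea (illustrative only: the agenda text, case-number format, and address layout below are hypothetical, and real agendas vary widely by city) might look like:

import re

agenda_text = """
Case BZA-2023-041: 125 Main St - variance for rear setback
Case BZA-2023-042: 9 Oak Ave - special permit, accessory unit
"""

# One capture group each for case number, street address, and request.
pattern = re.compile(r"Case\s+(\S+):\s+(.+?)\s+-\s+(.+)")
for case_id, address, request in pattern.findall(agenda_text):
    print(case_id, "|", address, "|", request)

The (address, request) pairs could then be geocoded and placed on an interactive map, which is the kind of pipeline Civic Atlas envisions.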
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning by Doing: Transitioning Healthcare Technology Innovations from MIT Labs to Resource-Scarce Communities</title>
<link href="https://hdl.handle.net/1721.1/152463" rel="alternate"/>
<author>
<name>Seabold, Amelia Claire Elston</name>
</author>
<id>https://hdl.handle.net/1721.1/152463</id>
<updated>2023-10-19T03:17:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Learning by Doing: Transitioning Healthcare Technology Innovations from MIT Labs to Resource-Scarce Communities
Seabold, Amelia Claire Elston
The affordability and accessibility of healthcare innovations is critical for the well-being of resource-scarce communities around the world. Yet little research centers on precisely how and when financial, material, and logistical resource constraints enter the design cycle producing such innovations. MIT labs across engineering and science departments, where novel research on healthcare technologies is strong, offer an ideal environment from which to explore how technological innovations from an academic lab translate into the real world and whether the resource constraints of low-income communities are used as a design input. This study is especially pertinent to my own work in healthcare technology innovation: I am designing and building a low-cost sickle cell disease diagnostic to be used in sub-Saharan Africa where sickle cell disease prevalence is high but there is a lack of diagnoses due in part to the cost of testing. As a student currently designing a product for explicit use in resource-scarce areas, I aimed to learn how MIT faculty, research scientists, and students have designed and implemented their products to be valuable to communities in need. My diagnostic project thus acts as the client project for this thesis. By interviewing women across Africa and Asia about women’s and children’s health in slums, settings of deep and growing income and resource scarcity and inequality, I gained an understanding of the need for accessible and affordable healthcare in areas where my diagnostic would be implemented. Through qualitative interviews with MIT scholars, the thesis explores how and when scarcity on the ground influences work, but also highlights the importance of incorporating the ability to manufacture and distribute new technologies, to consider systemic constraints, and to understand the needs of potential partners and stakeholders in the design of an innovation. Informed by participatory principles and a prioritization of situated knowledge in urban planning, this thesis shows how research and practice can be combined reflexively in the fields of global health and engineering to create a practical and implementable product in an academic lab with impact for some of the most marginalized communities in need of healthcare improvements.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Post-COVID Transit Fares for Riders and Recovery</title>
<link href="https://hdl.handle.net/1721.1/152462" rel="alternate"/>
<author>
<name>O'Neil Jr., Daniel M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152462</id>
<updated>2023-10-19T03:24:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Post-COVID Transit Fares for Riders and Recovery
O'Neil Jr., Daniel M.
In the face of persistent large-scale changes in travel behavior spurred by the COVID-19 pandemic, mass transit agencies face a landscape full of new challenges. Transit ridership, often used as a primary measure of agency success, remains diminished. Nevertheless, the purpose of and benefits provided by a well-designed and well-operated transit network remain unchanged. This thesis investigates one powerful tool at the disposal of transit providers: fare policy. Fare policy can be used to spur transit usage, to fund agency operations, and to respond to societal goals. Rider-centric fare policies that simultaneously increase transit travel volumes while showing only small negative fare revenue impacts can be identified. Implementation of such policies is key moving forward to maintain public investment and individual engagement.&#13;
&#13;
This thesis presents four case studies that analyze fare equity, new fare products, and multi-agency regional fare integration. First, fare equity is considered through a case study of Washington, DC’s Metrorail transit fare structure, residential and employment geography, and user demographics. The results highlight policy elements that consistently improve fare equity regardless of structure type, including peak pricing differentiation and removal of penalties for circuitous travel. The second case study designs and evaluates novel fare products using post-pandemic travel patterns on the CTA. The hypothetical products considered differ from traditional offerings by changing the usage restrictions and the validity periods. A flexible pass that confers a set number of CTA journeys at a discounted per-trip price is found to be the most promising, as it would provide the most utility to riders for whom pay-per-use travel is currently the most economical choice. The third case study considers single-day fare capping as an alternative to traditional 1-day passes for transit users in Chicago, identifying benefits to reduced-fare and bus-only riders while providing opportunities to boost agency ridership. Finally, the results of a recently introduced fully-integrated, multi-agency transit pass in the Chicago region are analyzed. Fare structure changes are used to estimate post-COVID commuter rail fare elasticity, and the elasticity for integrated passes. Additional findings include large increases in cross-agency travel, new customers accessing secondary transit agencies, and continued opportunities to integrate.
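The single-day fare-capping idea being compared reduces to simple arithmetic, sketched here with hypothetical fares rather than actual CTA prices:

def daily_charge_with_cap(trip_fares, daily_cap):
    # Charge per trip until the running total reaches the cap; further
    # trips that day are free, so no rider pays more than a 1-day pass
    # priced at the cap.
    total = 0.0
    for fare in trip_fares:
        total = min(total + fare, daily_cap)
    return total

print(daily_charge_with_cap([2.50, 2.50, 2.50], 5.00))  # 5.0 (cap reached)
print(daily_charge_with_cap([2.50], 5.00))              # 2.5 (below cap)

Unlike a prepaid 1-day pass, the capped rider never pays for travel that does not happen, which is why capping tends to benefit pay-per-use and bus-only riders.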
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nature-Based Coastal Adaptation: A Comparative Assessment to Inform Effective Implementation</title>
<link href="https://hdl.handle.net/1721.1/152461" rel="alternate"/>
<author>
<name>Winer-Chan, Rose</name>
</author>
<id>https://hdl.handle.net/1721.1/152461</id>
<updated>2023-10-19T03:06:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Nature-Based Coastal Adaptation: A Comparative Assessment to Inform Effective Implementation
Winer-Chan, Rose
As coastal adaptation planning becomes the new normal, governments have increasingly shifted a significant portion of new infrastructure from hardened “gray” structures toward natural and “nature-based” solutions (NbS): restored or constructed ecosystems that, by enhancing or mimicking natural processes, mitigate coastal hazards while offering socioeconomic, environmental, and public health benefits. However, the use of NbS remains limited due to uncertainty over cost and performance, a fragmented regulatory landscape, inconsistent planning tools, and the context dependence of NbS design. This thesis aims to explore these diverse uncertainties in detail by shedding light on the key factors and processes that may pose critical barriers or drive success during the implementation of nature-based coastal adaptation (NBCA) projects. This study employs stakeholder interviews to explore and compare four NBCA case studies from design through implementation: Hunter’s Point South Park and West Pond in Queens, New York; Rose Larisa Park in East Providence, Rhode Island; and the Sand Motor in South Holland, the Netherlands. By identifying the common challenges, success drivers, and success metrics shared across these projects, this thesis hopes to provide useful early insights that help NBCA decision-makers thoughtfully define and measure success, anticipate key challenges, and take steps to overcome those challenges and achieve more successful implementation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing the Next Generation of Autonomous Underwater Gliders</title>
<link href="https://hdl.handle.net/1721.1/152460" rel="alternate"/>
<author>
<name>Ventola, Peter T.</name>
</author>
<id>https://hdl.handle.net/1721.1/152460</id>
<updated>2023-10-19T03:26:08Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Developing the Next Generation of Autonomous Underwater Gliders
Ventola, Peter T.
This thesis presents a novel, hybrid Autonomous Underwater Glider (AUG) architecture developed for improved performance in shallow, high-current environments while maintaining all capabilities inherent to a deep, 1000m-rated AUG. Numerous regions of scientific interest, such as the marginal ice zone (MIZ) and continental shelf breaks, present significant challenges to conventional AUG operations due to a combination of changing ocean currents and depths.&#13;
&#13;
AUGs are traditionally optimized for performance in shallow (less than 200m) or deep water (200m to 1000m) environments. The design of a buoyancy drive on a deep-rated AUG does not support the pump rate required for fast inflections in narrow depth bands.&#13;
&#13;
Contained within this thesis is the framework to expand the operational envelope of a Teledyne Webb Research (TWR) G3 Slocum glider through substantial modification of the glider's hardware components backed by rigorous hydrodynamic analysis and computational fluid dynamics (CFD) modelling. Since AUGs are limited in both speed and maneuverability, the goal of this thesis is to improve and modify the glider's flight characteristics, specifically the glider's speed through water, its inflection rate, and its efficiency. These performance improvements are accomplished through the introduction of a high-power thruster, modified wings, and aft fin surfaces. The modified glider's efficacy is evaluated through various laboratory experiments and field data obtained in Buzzards Bay and the Caribbean Sea. Design concepts for a future, more advanced glider are also discussed.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerated algorithms for constrained optimization and control</title>
<link href="https://hdl.handle.net/1721.1/152459" rel="alternate"/>
<author>
<name>Parashar, Anjali</name>
</author>
<id>https://hdl.handle.net/1721.1/152459</id>
<updated>2023-10-19T04:01:07Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Accelerated algorithms for constrained optimization and control
Parashar, Anjali
Nonlinear optimization with equality and inequality constraints arises ubiquitously in optimization and control of large-scale systems. Ensuring feasibility along with reasonable convergence to the optimal solution remains an open and pressing problem in this area. &#13;
&#13;
A class of high-order tuners was recently proposed in the adaptive control literature in an effort to achieve accelerated convergence when no constraints are present. In this thesis, we propose a new high-order-tuner-based algorithm that can&#13;
accommodate the presence of equality and inequality constraints. We leverage the linear dependence in solution space to guarantee that equality constraints are always satisfied. We further ensure feasibility with respect to inequality constraints for the specific case of box constraints by introducing time-varying gains in the high-order tuner while retaining the attractive accelerated convergence properties. Theoretical guarantees pertaining to stability are also provided for time-varying regressors. These theoretical propositions are validated by applying them to several categories of optimization problems, in the form of academic examples, power flow optimization and neural network optimization.&#13;
&#13;
We devote special attention to a special case of neural network optimization, namely the linear neural network (LNN) training problem, to understand the dynamics of nonconvex optimization governed by gradient flow and provide Lyapunov stability guarantees for LNNs.
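For intuition only, a drastically simplified relative of such a scheme (a momentum-style update projected onto box constraints; not the thesis's high-order tuner, whose time-varying gains and stability guarantees are more elaborate) can be sketched as:

import numpy as np

def momentum_box(grad, x0, lo, hi, beta=0.9, gamma=0.01, steps=2000):
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)               # higher-order (momentum) state
    for _ in range(steps):
        v = beta * v - gamma * grad(x)
        x = np.clip(x + v, lo, hi)     # keep every iterate feasible in the box
    return x

# Example: minimize 0.5 x'Qx - b'x subject to x in the box [0, 1]^2.
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])
x_star = momentum_box(lambda x: Q @ x - b, [0.5, 0.5], 0.0, 1.0)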
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Nature based solutions for coastal defense: Wave attenuation and economic analysis of marsh-fronted seawalls</title>
<link href="https://hdl.handle.net/1721.1/152458" rel="alternate"/>
<author>
<name>Lee, In Him</name>
</author>
<id>https://hdl.handle.net/1721.1/152458</id>
<updated>2023-10-19T03:47:28Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Nature based solutions for coastal defense: Wave attenuation and economic analysis of marsh-fronted seawalls
Lee, In Him
A seawall fronted by salt marshes is a hybrid, nature-based solution for rural and urban coastal protection. A one-dimensional wave attenuation model was developed using first principles to capture four mechanisms that impact wave evolution: wave breaking, vegetation drag, shoaling, and bed friction. In particular, the vegetation drag was modeled using the stem and leaf morphology and material properties of specific marsh species. The model was validated with field wave height data. A benefit-cost analysis framework was used to present an economic argument for hybrid infrastructures. The additional wave attenuation from vegetation drag results in cost savings from the lower seawall height required for the same protection level, in addition to reduced scouring erosion and additional ecosystem services, such as habitat, water quality improvement, and carbon sequestration. Both the one-dimensional wave model and benefit-cost analysis framework were applied to an urban marsh-fronted seawall case study at Juniper Cove, Salem, Massachusetts. The presence of vegetation was found to significantly reduce the occurrence of wave breaking, which would be beneficial for sediment accretion to maintain a healthy marsh habitat and a less turbulent aquatic habitat. The case study shows that narrow vegetation widths of 20 to 40 m can provide essential wave attenuation that would justify marsh restoration in front of the existing seawall.
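For intuition, a toy version of such a model can be written down directly; the hyperbolic vegetation-damping form below is the classic Dalrymple-type decay, and the coefficients are hypothetical rather than fitted values from the thesis:

import numpy as np

def wave_height(x, H0, beta_veg, k_bed):
    # Vegetation drag gives the familiar hyperbolic decay H0 / (1 + beta x);
    # bed friction is folded in as an extra exponential factor. In practice,
    # beta_veg would be built from the drag coefficient, stem density and
    # diameter, and local depth.
    return H0 / (1.0 + beta_veg * x) * np.exp(-k_bed * x)

x = np.linspace(0.0, 40.0, 81)       # marsh widths of 20-40 m, as in the study
H = wave_height(x, H0=0.6, beta_veg=0.05, k_bed=0.002)
reduction = 1.0 - H[-1] / H[0]       # fractional wave-height reduction at the wall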
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Art, Repair, and Spatial Justice in Boston's Chinatown and Seattle's International District</title>
<link href="https://hdl.handle.net/1721.1/152456" rel="alternate"/>
<author>
<name>Xie, Lilian</name>
</author>
<id>https://hdl.handle.net/1721.1/152456</id>
<updated>2023-10-19T03:15:23Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Art, Repair, and Spatial Justice in Boston's Chinatown and Seattle's International District
Xie, Lilian
There is a growing overlap between the fields of urban planning, art, and social justice. Projects within the realms of urban planning and socially engaged art seek to bring about changes that redistribute socially valued resources and opportunities, especially along racial and spatial lines. This thesis analyzes how socially engaged public art accomplishes these goals of spatial justice in Boston and Seattle’s historic Chinatowns. Building on planning scholars Rashad Akeem Williams and Leonie Sandercock’s work framing the role of affect and emotions in healing planning conflicts, I analyze how these projects support their community’s efforts to repair past spatial harms, and what distinguishes their function from other forms of political and social activism. Using a case study approach, I present a series of research findings from interviews with individuals who facilitated, created, and/or participated in public art projects in Seattle’s International District and Boston’s Chinatown.&#13;
&#13;
Through my research, I illustrate the unique capacity of public art to influence the important emotional and relational aspects of transformation, and the opportunity that public art presents for residents to directly shape the built environment. Public art, as a uniquely place-specific art form, offers an opportunity for communities pursuing spatial justice to shift the affective aspects of transformation and engage in the radical reimagination of how power is distributed in space. Art is an important and often underutilized strategy in the spatial justice toolkit, and this thesis presents opportunities for artists, community organizers, and planners to think creatively about how art can support their efforts to disrupt racial planning, dismantle White supremacy, and support the continued flourishing of urban communities.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Affordable Housing Provision for Workers Constructing Nusantara, the New Capital City of Indonesia</title>
<link href="https://hdl.handle.net/1721.1/152455" rel="alternate"/>
<author>
<name>Prameswari, Pratiwi</name>
</author>
<id>https://hdl.handle.net/1721.1/152455</id>
<updated>2023-10-19T04:00:54Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Affordable Housing Provision for Workers Constructing Nusantara, the New Capital City of Indonesia
Prameswari, Pratiwi
Indonesia is an important ongoing example of a country relocating its capital city for economic and environmental reasons amid numerous challenges. The new capital city site is located far from existing cities, with limited infrastructure and only a small population. One major challenge entails how and where best to house the large population of construction workers coming to build the city. Global experience shows that some new capital cities planned affordable housing for residents but failed to recognize the importance of housing for the construction workers who built the city. As a result, informal settlements have proliferated inside and around those cities, posing long-lasting challenges. This thesis explores the efforts in providing affordable housing for construction workers in Nusantara and the challenges that come with ensuring equal access to housing for all, particularly around the aspects of (1) the adequacy of housing for construction workers; (2) stakeholders involved in the provision; (3) procedures of the housing provision. To address the issue of providing accommodation to construction workers in Nusantara, the government of Indonesia has built housing for construction workers called Hunian Pekerja Konstruksi (HPK). However, this housing may be quantitatively inadequate in both the short and long run. The housing is the responsibility of the Nusantara Capital City Authority and Badan Usaha Milik Otorita (BUMO), with the Ministry of Public Works and Housing assisting them in constructing the housing. The Indonesian government's development of housing for construction workers is a commendable step that can lower the likelihood of informal settlement. Nevertheless, it is also important to acknowledge some challenges that need to be addressed despite the effort. Keywords: Indonesia, New Capital City, Nusantara, Housing for Construction Workers
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Any Port in a Storm: UK Freeports as a Typology of Governance</title>
<link href="https://hdl.handle.net/1721.1/152454" rel="alternate"/>
<author>
<name>Maddox, Jay</name>
</author>
<id>https://hdl.handle.net/1721.1/152454</id>
<updated>2023-10-19T03:28:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Any Port in a Storm: UK Freeports as a Typology of Governance
Maddox, Jay
In 2019, the United Kingdom announced a new initiative to set up several free trade zones across its four nations. While many analysts have viewed freeports through the lens of the UK's messy divorce from the European Union, it is crucial to understand the policy in the context of the country's regional planning and local development agenda. I argue that the freeport program represents an experimental typology of local governance developed by central authorities. By extending a novel combination of benefits and powers to local governmental bodies, this typology seeks to enable self-propelled “growth” and revitalization that is not dependent on financial transfers from the central government. In the highly centralized context of England, the freeport governance typology does this by transforming local governmental bodies into empowered economic actors. Far from circumventing central government control, freeports are a centrally guided attempt to create a new form of governance that redefines the role of local and regional authorities. Lastly, I argue that this typology must first be understood as an amalgamation of several regulatory and fiscal features that have been developed over several decades, beginning with the election of Margaret Thatcher.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Universities, Communities, and Service-Learning for Urban Development: Rethinking the Work of Kaya Clínica in Maputo, Mozambique</title>
<link href="https://hdl.handle.net/1721.1/152449" rel="alternate"/>
<author>
<name>Mapure, Idélcia Rebeca Domingos</name>
</author>
<id>https://hdl.handle.net/1721.1/152449</id>
<updated>2023-10-19T03:25:00Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Universities, Communities, and Service-Learning for Urban Development: Rethinking the Work of Kaya Clínica in Maputo, Mozambique
Mapure, Idélcia Rebeca Domingos
Urban areas in low-income countries are confronted with major challenges, including poverty, urban deterioration, unemployment, and informality. With the low capacity of local governments to respond to the increasing demands of a growing urban population, anchor institutions are called upon to leverage their permanent strategic positions to contribute to social and economic development in their areas of influence. Universities are distinctive anchor institutions with a strategic position to use their expertise and resources to drive change in the communities in which they operate, mainly for the underserved. However, academic-local community relationships are historically rooted in extractive practices, with little or no contribution to improving local people’s lives. This thesis explores alternatives for building strong and mutually beneficial collaborations between universities and their surrounding neighbors that can effectively create long-lasting community welfare through service-learning. Through service-learning, university students gain valuable experience for their careers, faculty learn to improve their curriculum to match emerging needs and advance their scholarship, and local communities get the support they need to address an issue they lack the expertise or resources to act on independently. &#13;
&#13;
In this thesis, I specifically examine the work of Universidade Eduardo Mondlane (UEM) in the Mozambican capital of Maputo and its relationship with the informal communities of the George Dimitrov neighborhood through a service-learning organization called Kaya Clínica. Kaya Clínica aims to address housing and urbanization challenges in underserved communities to help identify strategies the university can implement to improve its contribution to generating long-lasting welfare for the communities they work with. Through semi-structured interviews and focus group discussions with different people in the neighborhood, university students, and professors, I find that an effective academic-community partnership in this context requires a new paradigm of trust and respect between the university and the communities being studied in order to promote fairness and equality in deliberation, mutual support featuring co-production and dissemination, and the use of knowledge to address real-life needs. More time and dedicated effort are needed to build strong, lasting connections and collaborations between UEM and local communities. This involves active listening, demands effective participation, entails continuing negotiation, and calls for solid win-win strategies to be defined and co-designed from the start.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Coordination Imperative: A Comprehensive Approach to Align Customer Demand and Inventory Management for Superior Customer Experience in Retail</title>
<link href="https://hdl.handle.net/1721.1/152447" rel="alternate"/>
<author>
<name>Kondo, Koichiro</name>
</author>
<author>
<name>Vicente, Ângelo José Bergamaschi</name>
</author>
<id>https://hdl.handle.net/1721.1/152447</id>
<updated>2023-10-19T04:00:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Coordination Imperative: A Comprehensive Approach to Align Customer Demand and Inventory Management for Superior Customer Experience in Retail
Kondo, Koichiro; Vicente, Ângelo José Bergamaschi
The rapid growth of customers traversing different channels during their buying journey presents both opportunities and challenges for organizations. Fragmented decision-making and siloed communication between marketing and supply chain teams can lead to inefficiencies and negatively impact customer experience. This thesis proposes a conceptual framework to align customer demand and inventory management. The framework is examined in the empirical context of the fashion industry, focusing on the US market and insights from Brazil and Japan. By introducing a PDCA (Plan Do Check Action) process and cross-functional metrics, such as NPS (Net Promoter Score) and OTIF (On Time In Full), this study seeks to encourage cooperation between departments and coalesce decision-making around enhancing customer experience. The research will explore the quantitative and qualitative aspects of the retail industry focusing on fashion and identify opportunities to leverage technology, marketing, and supply chain management for improved performance. Our study validated the existence of siloed operations and the drawbacks caused by silos in today’s business. Through 16 expert interviews, we identify three key factors that contribute to silos between marketing and supply chain: technology fragmentation, lack of integrated KPIs, and the complexity of multiple channels. Further, the interviews helped uncover how the experts tackled these challenges in daily operations.&#13;
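Both cross-functional metrics have simple, checkable definitions, sketched here with made-up survey scores and order outcomes:

def nps(scores):
    # Net Promoter Score: percent promoters (9-10) minus percent
    # detractors (0-6) on the standard 0-10 survey scale.
    promoters = sum(1 for s in scores if s in (9, 10))
    detractors = sum(1 for s in scores if s in (0, 1, 2, 3, 4, 5, 6))
    return 100.0 * (promoters - detractors) / len(scores)

def otif(orders):
    # On Time In Full: share of orders that were both on time and complete;
    # each order is an (on_time, in_full) pair of booleans.
    hits = sum(1 for on_time, in_full in orders if on_time and in_full)
    return 100.0 * hits / len(orders)

print(nps([10, 9, 8, 6, 10, 3]))                          # about 16.7
print(otif([(True, True), (True, False), (True, True)]))  # about 66.7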
&#13;
The expected deliverable is a framework that combines analyzed customer journeys with cross-functional metrics to support decision-makers in day-to-day operations. The goal is to deliver a world-class customer experience by aligning decisions to coordinate actions. There is potential to incorporate machine learning to suggest experiments and further optimize value delivery for customers by retailers through multiple channels. Our conceptual framework applies to various businesses struggling with coordination between demand generation and fulfillment.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Simplification of radicals with applications to solving polynomial equations.</title>
<link href="https://hdl.handle.net/1721.1/152394" rel="alternate"/>
<author>
<name>Zippel, R. E.
            (Richard E.),
            1952-</name>
</author>
<id>https://hdl.handle.net/1721.1/152394</id>
<updated>2025-10-30T18:06:08Z</updated>
<published>1977-01-01T00:00:00Z</published>
<summary type="text">Simplification of radicals with applications to solving polynomial equations.
Zippel, R. E.
            (Richard E.),
            1952-
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 1977; Bibliography: leaves 29-30.
</summary>
<dc:date>1977-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Thesis - ReStacks</title>
<link href="https://hdl.handle.net/1721.1/152127" rel="alternate"/>
<author>
<name>Perryman, Benjamin</name>
</author>
<id>https://hdl.handle.net/1721.1/152127</id>
<updated>2023-09-14T03:12:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design Thesis - ReStacks
Perryman, Benjamin
ReStacks is a full-service real estate development company that specializes in the construction and management of modular garages and accessory dwelling units. This design thesis explores how garages and accessory dwelling units can become sustainable infrastructure for housing, transportation, and the sharing economy in the car-dependent urban geography of Columbus, OH.&#13;
&#13;
With growing concern about climate change and household economics, interests sometimes compete. While affordable housing and homeownership have become increasingly inaccessible, construction and the built environment represent over 40% of global fossil fuel emissions. Meanwhile, many existing homeowners can’t afford to upgrade their homes and transportation to reduce their contribution to those emissions.&#13;
&#13;
In many Columbus neighborhoods, there are opportunities to address these challenges. Behind single-family homes, there are long-reaching alley systems which are lined with vacant lots and defunct infrastructure. Initially used for the storage of livestock and carriages, many of these alleys fell into disrepair after the introduction of gas-powered cars. Through partnership with homeowners, ReStacks has begun building modular garages and accessory dwelling units to introduce cost-effective and ecologically-sensitive infrastructure for affordable housing, electric vehicles, and the sharing economy; high utility in a small footprint.&#13;
&#13;
ReStacks’ design philosophy was derived from the concept of three-pronged sustainability, and its approach involves dissecting the social, economic, and ecological dimensions of intervention to provide more diverse value for consumers, local economies, and the environment. The core principle of its business strategy is to drive market value through overlapping, sustainable benefit. This document will highlight the design progression of ReStacks’ modular garages and accessory dwelling units as well as explore their potential profitability and impact.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optical In-Process Monitoring Tools for Laser Powder Bed Fusion: Verifying Powder Area Coverage of a Layer Setup</title>
<link href="https://hdl.handle.net/1721.1/152122" rel="alternate"/>
<author>
<name>Modes, Jane Ellen</name>
</author>
<id>https://hdl.handle.net/1721.1/152122</id>
<updated>2023-09-14T03:26:40Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Optical In-Process Monitoring Tools for Laser Powder Bed Fusion: Verifying Powder Area Coverage of a Layer Setup
Modes, Jane Ellen
Additive manufacturing (AM) allows for the creation of complex geometries that cannot be created with traditional manufacturing methods and is widely used in many industries. With any additive manufacturing process, achieving a successful layer is critical to the quality of the final part. Currently, no machine can provide objective evidence of a proper layer setup with in-process monitoring equipment. The strategy of this project was to utilize various sensors in tandem with the camera available within the machine to distinguish between passing and failing layers in a quantifiable manner. This thesis aimed to test the 3D printer’s on-machine camera and several other off-the-shelf cameras (Spectral Instruments RVT100, GoPro HERO7 Black, STPCTOU Wireless Digital Microscope, and iPhone 12 camera) to determine which, if any, were suitable for quantifying a layer setup through powder area coverage. Several tests were performed to assess camera repeatability across one or several locations by analyzing image intensity values in ImageJ. Another test was performed to determine whether there was a linear correlation between layer thickness and image intensity. The cumulative results from all tests indicate that the on-machine camera is the best option of all cameras tested for this application.
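For illustration, a minimal sketch (not the thesis code) of the kind of image-intensity coverage gate described above; the brightness cutoff, file name, and 98% coverage threshold are assumed placeholders:

import numpy as np
from PIL import Image

def powder_coverage(path, intensity_cutoff=120):
    """Fraction of pixels at or above a calibration-chosen brightness cutoff (assumed value)."""
    img = np.asarray(Image.open(path).convert("L"), dtype=float)
    return (img &gt;= intensity_cutoff).mean()  # powder assumed brighter than the bare plate

layer_ok = powder_coverage("layer_042.png") &gt; 0.98  # hypothetical 98% area-coverage gate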
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Wireless Sub-Cellular Sized Stimulators for Minimally Invasive Deep Brain Stimulation with High Spatiotemporal Resolution</title>
<link href="https://hdl.handle.net/1721.1/152114" rel="alternate"/>
<author>
<name>Cai, Yubin</name>
</author>
<id>https://hdl.handle.net/1721.1/152114</id>
<updated>2023-09-14T03:23:40Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Wireless Sub-Cellular Sized Stimulators for Minimally Invasive Deep Brain Stimulation with High Spatiotemporal Resolution
Cai, Yubin
Deep brain stimulation (DBS) has become a mainstream treatment for motor disorders associated with neurodegenerative conditions such as Parkinson’s disease (PD). The DBS device, often called the “pacemaker for the brain”, utilizes leads with 4-8 contact points surgically implanted into the target area. The implanted electrodes are then used to deliver high frequency (&gt;85 Hz) electrical stimulation via a pulse generator. In properly selected patients, DBS is proven to be remarkably effective, alleviating motor symptoms that either do not fully respond to medication treatment (such as tremor) or are caused by it (levodopa-induced dyskinesia). However, current DBS technology comes with inherent limitations and problems, including: 1) the need for a large invasive foreign body (the electrode), which can cause lead infections, 2) low coverage of the entire movement-related territory in the target nucleus, and 3) adverse side effects such as muscle twitches and sensory complaints caused by current diffusing into the tissues.&#13;
&#13;
In this work, we propose to develop a new paradigm of electrical neuromodulation, based on injectable micron-sized stimulator devices, which, once deployed, will allow tunable stimulation of the injected territory. The individual stimulators will produce highly localized stimulation effects, which will minimize current spread to neighboring structures. Since the stimulator devices will be activated by a super-low-frequency (SLF) external magnetic field source, the procedure would not require placement of permanent wired leads in the brain. Additionally, given that a lightweight, low-power wearable coil array will power the stimulator devices, continuous, portable DBS treatment of Parkinson's disease becomes possible for the first time.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Geomorphic Concrete : Material and fabrication strategies for heterogeneous concrete morphology</title>
<link href="https://hdl.handle.net/1721.1/152111" rel="alternate"/>
<author>
<name>Kim, Il Hwan</name>
</author>
<id>https://hdl.handle.net/1721.1/152111</id>
<updated>2023-09-14T03:21:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Geomorphic Concrete : Material and fabrication strategies for heterogeneous concrete morphology
Kim, Il Hwan
Given evidence of climate change and the global supply chain crisis, it is no longer viable to continuously exploit nature and expect the global industrial system to remain perpetually dependable. We have to prepare for a world that is not entirely controllable or measurable, which is an inevitable architectural condition of the future. This thesis introduces geomorphic concrete, an alternative design approach and construction methodology closely aligned with geological formation processes, incorporating natural forces as collaborators in concrete fabrication.&#13;
&#13;
Geomorphic concrete is an alternate paradigm of material-based design and construction methodology achieved by exploiting variation in how material properties respond to elemental forces. Nature shapes geological formations through a diverse array of materials and natural forces. For example, sedimentary rock’s stratified planes have varied grain, strength, and other characteristics, resulting in unique shapes and patterns through natural processes such as weathering, erosion, and sedimentation. A series of experiments in this thesis demonstrates how to design and construct concrete structures by mimicking the natural geological formation process, instead of relying solely on modernistic geometry-driven design.&#13;
&#13;
This methodology utilizes an injection-printing fabrication technique, inserting reinforcement and suspension materials in liquid concrete to produce cast objects with varying material properties that erode, break, reconfigure, and recover through engagement with natural agents. The thesis showcases three designs that exemplify geomorphic concrete: a material-based structure design by fabricating heterogeneous concrete; a concrete structure printed into granular formwork that erodes due to gravity; and a concrete object that evolves over time by dissolving the injected suspension material. &#13;
&#13;
This thesis contributes to acknowledging geological formation as an ecological process and to developing an architectural fabrication concept that embraces elemental forces and material changes as agents in the building process.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Using Displacement Sensors to Characterize Critical Powder Layers in Laser Powder Bed Fusion</title>
<link href="https://hdl.handle.net/1721.1/152108" rel="alternate"/>
<author>
<name>Wittenbrink, Jayna</name>
</author>
<id>https://hdl.handle.net/1721.1/152108</id>
<updated>2023-09-14T03:34:28Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Using Displacement Sensors to Characterize Critical Powder Layers in Laser Powder Bed Fusion
Wittenbrink, Jayna
Additive manufacturing (AM) allows for the creation of complex geometries that cannot be created with traditional manufacturing methods, though process quality tools are still undeveloped. Uniform powder layers are critical to the final part quality for such processes, and currently, no machine can provide objective evidence of a proper powder layer with in-process monitoring equipment; today, proper powder layers are verified by unquantifiable means. The strategy of this project was to use various sensors in tandem with the camera available within the machine to objectively distinguish between passing and failing powder layers. The specific goal of this thesis was to characterize the actual powder thickness and percent coverage with laser displacement sensors by subtracting before and after powder-deposition scans of the build plate. The laser line scanner showed promising results, but the process variation and the data-alignment strategies used were not sufficient to provide a concrete correlation for powder layer characterization. This project nonetheless lays the groundwork for further work to more objectively characterize powder layers. The rest of the project used the same unquantifiable means currently used to verify powder layers: intensity values from the onboard camera’s images were able to successfully distinguish between powder layers, serving as a powder layer verification tool.
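For illustration, a minimal sketch (not the thesis pipeline) of the before/after scan subtraction described above; the file names and the 50-micron nominal layer are assumptions:

import numpy as np

before = np.load("plate_before.npy")  # height map before recoat, mm (hypothetical file)
after = np.load("plate_after.npy")    # height map after recoat, mm (hypothetical file)

thickness = after - before                    # per-pixel powder thickness
coverage = (thickness &gt; 0.5 * 0.050).mean()   # share of pixels above half a 50 um layer (assumed gate)
print(f"mean thickness {thickness.mean() * 1000:.1f} um, coverage {coverage:.1%}")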
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Construction of a fast-scanning far-infrared Fabry-Perot interferometer.</title>
<link href="https://hdl.handle.net/1721.1/152093" rel="alternate"/>
<author>
<name>Komm, David Serkes.</name>
</author>
<id>https://hdl.handle.net/1721.1/152093</id>
<updated>2025-10-30T17:51:27Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Construction of a fast-scanning far-infrared Fabry-Perot interferometer.
Komm, David Serkes.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Long-span bridge live loads.</title>
<link href="https://hdl.handle.net/1721.1/152089" rel="alternate"/>
<author>
<name>Kram, Norman Simon.</name>
</author>
<id>https://hdl.handle.net/1721.1/152089</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Long-span bridge live loads.
Kram, Norman Simon.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1973; Bibliography: leaves 106-107.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A systematic analysis of PRT, express bus, and rail transit systems.</title>
<link href="https://hdl.handle.net/1721.1/152088" rel="alternate"/>
<author>
<name>Kocur, George.</name>
</author>
<id>https://hdl.handle.net/1721.1/152088</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">A systematic analysis of PRT, express bus, and rail transit systems.
Kocur, George.
Thesis: M.S., Massachusetts Institute of Technology, Department of Civil Engineering, 1973; Number 141 used twice in paging.; Bibliography: leaves 404-407.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design manual for tall stacks.</title>
<link href="https://hdl.handle.net/1721.1/152086" rel="alternate"/>
<author>
<name>Kranz, William Thomson.</name>
</author>
<id>https://hdl.handle.net/1721.1/152086</id>
<updated>2025-10-30T15:50:04Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Design manual for tall stacks.
Kranz, William Thomson.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A study of turbulent flame propagation</title>
<link href="https://hdl.handle.net/1721.1/152082" rel="alternate"/>
<author>
<name>McNutt, Dinah Georgianna.</name>
</author>
<id>https://hdl.handle.net/1721.1/152082</id>
<updated>2025-10-30T18:06:09Z</updated>
<published>1982-01-01T00:00:00Z</published>
<summary type="text">A study of turbulent flame propagation
McNutt, Dinah Georgianna.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1982; Includes bibliographical references.
</summary>
<dc:date>1982-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A sense of touch for a mechanical hand</title>
<link href="https://hdl.handle.net/1721.1/152081" rel="alternate"/>
<author>
<name>Kappl, Joseph J.</name>
</author>
<id>https://hdl.handle.net/1721.1/152081</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1963-01-01T00:00:00Z</published>
<summary type="text">A sense of touch for a mechanical hand
Kappl, Joseph J.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1963; Includes bibliographical references (leaf 28).
</summary>
<dc:date>1963-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Velocity modulation of electromagnetic waves</title>
<link href="https://hdl.handle.net/1721.1/152080" rel="alternate"/>
<author>
<name>Morgenthaler, Frederic R.</name>
</author>
<id>https://hdl.handle.net/1721.1/152080</id>
<updated>2025-10-30T17:03:45Z</updated>
<published>1956-01-01T00:00:00Z</published>
<summary type="text">Velocity modulation of electromagnetic waves
Morgenthaler, Frederic R.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1956; Bibliography: leaves 85-86.
</summary>
<dc:date>1956-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Critical Cartographies of Transnational Infrastructure-led Urbanization</title>
<link href="https://hdl.handle.net/1721.1/152027" rel="alternate"/>
<author>
<name>Shoaib, Jehanzeb</name>
</author>
<id>https://hdl.handle.net/1721.1/152027</id>
<updated>2023-09-01T03:05:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Critical Cartographies of Transnational Infrastructure-led Urbanization
Shoaib, Jehanzeb
This thesis is a manifesto that traverses the binaries of land and sea to mediate between the preconceived notions of boundary and territoriality. The contextual landscape of this mediation is within the littoral territory of Gwadar, in the southern coastal region of Baluchistan, in Pakistan. This port city acts as a gateway to the China-Pakistan Economic Corridor, which, because of its deep-sea edge, has been subjected to China’s infrastructure-led urbanization. As a result, the local fishing community – numbering close to 36,000 – and its ecosystem have been impacted and displaced, triggering large-scale protests that have been censored by the state-run media. This thesis is thus a manifesto that gives voice to the littoral landscape and the indigenous community, inviting participatory forms of dialogue on the role of design and its agency. At issue here is the conception of Gwadar as an edge on which a highway has been built, restricting the fishing community’s access to the sea. For this community, known as nomads of the sea, Gwadar is not an edge but a gateway to the sea – just as its name implies: an amalgamation of two Balochi words, Guad, meaning wind, and dar, meaning gateway – together, the gateway of winds. By providing evidence of their territorial claims through critical cartographic methods of ethnography, photography, and mapping, this thesis frames the spatial-temporal thresholds of the littoral which, like the winds, morph with time. The manifesto argues for viewing coastal landscapes as thresholds rather than mere coastlines. Moreover, it proposes re-learning from the indigenous collectives of rural commons towards creating a subsistent coastal community by circulating a zine pamphlet that legitimizes the claims of the indigenous inhabitants of the littoral landscapes, both human and non-human.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Gaming Like a State: Historical Strategy Game Victoria and "Keyboard Politics" in China</title>
<link href="https://hdl.handle.net/1721.1/152026" rel="alternate"/>
<author>
<name>Wang, Jiaqi</name>
</author>
<id>https://hdl.handle.net/1721.1/152026</id>
<updated>2023-09-01T03:26:06Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Gaming Like a State: Historical Strategy Game Victoria and "Keyboard Politics" in China
Wang, Jiaqi
This thesis explores the role of historical strategy games as a platform for "keyboard politics" in contemporary China, where traditional channels for political expression are tightly controlled. Specifically, the study focuses on Victoria, a video game that allows one to "game like a state" in its simulation of long-nineteenth-century global history. By examining the game design and analyzing paratexts from the Chinese player community—including forum discussions, game reviews, video recordings, and user-created mods—this research investigates the interaction between the game's technological affordances and the players, who bring their experiences, memories, and cultural milieu into the game. The thesis further examines how the gameplay in the virtual world reflects and shapes the political tendencies of young Chinese players: grassroots leftism, nationalism, and cynicism behind the "lying-flat" culture. Finally, from this local encounter between a Swedish video game and the Chinese players, the thesis aims to shed light on the global circuit of techno-cultural artifacts beyond a Eurocentric perspective.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Network Optimization of a D2C Supply Chain Subject to Changing Cost Conditions and Consumer Preferences</title>
<link href="https://hdl.handle.net/1721.1/152025" rel="alternate"/>
<author>
<name>Sarasua, Julie</name>
</author>
<id>https://hdl.handle.net/1721.1/152025</id>
<updated>2023-09-01T03:33:42Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Network Optimization of a D2C Supply Chain Subject to Changing Cost Conditions and Consumer Preferences
Sarasua, Julie
This thesis examines and models the fulfillment operations of a medium-sized US direct-to-consumer (D2C) healthcare distributor that competes with lower-priced alternatives (e.g. Amazon) through high service levels and deep customer relationships. Recent inflationary trends have pushed the company to seek new ways to reduce cost. Therefore, this thesis focuses on methods to lower operational cost via network optimization.&#13;
&#13;
This work attempts to solve the network layout problem through two primary approaches: (1) integer programming using the Gurobi optimization Python package and (2) scenario analysis modeling the cost of feasible configurations of the uncapacitated facility layout problem under the company’s existing order-allocation logic. Both approaches result in similar solutions, with the second deemed more interpretable by leadership and more aligned with existing IT logic in terms of order-facility allocation. Both models show a decrease in total landed cost of fulfillment relative to the base case. Qualitative considerations are discussed, as well as model sensitivity to changing environmental inputs (e.g. population shifts and changes in cost).&#13;
&#13;
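For illustration, a minimal sketch of approach (1) as an uncapacitated facility-location integer program in gurobipy; the sets and cost data below are hypothetical placeholders, not DistroCo figures:

import gurobipy as gp
from gurobipy import GRB

facilities, regions = ["east", "central", "west"], ["NE", "SE", "MW", "W"]
fixed = {f: 1.0e6 for f in facilities}                     # annual facility cost (assumed)
ship = {(f, r): 2.0 for f in facilities for r in regions}  # unit shipping cost (assumed)
demand = {r: 1.0e5 for r in regions}                       # annual units per region (assumed)

m = gp.Model("ufl")
open_ = m.addVars(facilities, vtype=GRB.BINARY, name="open")
serve = m.addVars(facilities, regions, vtype=GRB.BINARY, name="serve")
m.setObjective(
    gp.quicksum(fixed[f] * open_[f] for f in facilities)
    + gp.quicksum(ship[f, r] * demand[r] * serve[f, r] for f in facilities for r in regions),
    GRB.MINIMIZE)
m.addConstrs((serve.sum("*", r) == 1 for r in regions), name="assign")  # every region served once
m.addConstrs((serve[f, r] &lt;= open_[f] for f in facilities for r in regions), name="open_only")
m.optimize()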
A concurrent project examines reducing shipping expense by incentivizing subscription-based customers to order less frequently (e.g. consolidating two orders into one shipment), thereby maintaining revenues while lowering shipping expense. Two proposed solutions are examined: (1) existing incentives and (2) new incentives. The first was tested and showed preliminary positive impact on cost.&#13;
&#13;
The company referenced in this work has been renamed as “DistroCo” for privacy. Sensitive figures, data, and information may be redacted or masked.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Gait Muscle-Reflexes Through Hindlimb Characterization in Rodents</title>
<link href="https://hdl.handle.net/1721.1/152024" rel="alternate"/>
<author>
<name>Guvenilir, Ayse Angela M</name>
</author>
<id>https://hdl.handle.net/1721.1/152024</id>
<updated>2023-09-01T03:01:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Modeling Gait Muscle-Reflexes Through Hindlimb Characterization in Rodents
Guvenilir, Ayse Angela M
The complexity of gait modeling ranges from simple models that represent the leg as two linear springs to more complex ones that comprise muscle-tendons and spinal muscle-reflexes. However, the more complex models fail to have a strong empirical basis for muscle-reflex definition and function. To garner stronger evidence for the reflex component of muscle function in gait, we modeled gait muscle-reflexes through experimental limb characterization in rodents. We designed and implemented an animal skin port with multiple electrodes to measure electromyography, muscle fascicle length, and muscle force. We conducted rat surgeries that implanted this skin port, and used the device to collect in vivo data across various terrains during walking trials. From collaborators’ in vivo data (n = 4 rodents), we implemented multiple linear reflex models with an r² ranging from 0.75 to 0.87 between measured and predicted muscle activations, consistent with predictions from muscle models found in the literature. We also found that the dominant contributor to the reflex in the medial gastrocnemius muscle is a positive force feedback. Future work could explore a similar paradigm in the tibialis anterior muscle, an antagonist to the medial gastrocnemius muscle, and explore higher-order nonlinear muscle-reflex models.
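For illustration, a minimal sketch (an assumed model form, not the thesis code) of fitting one linear reflex model by least squares and scoring it with r², as described above:

import numpy as np

def fit_linear_reflex(act, force, length, delay=20):
    """Fit act(t) ~ w_F*force(t-delay) + w_L*length(t-delay) + b; return weights and r^2."""
    y = act[delay:]
    X = np.column_stack([force[:-delay], length[:-delay], np.ones(len(y))])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares weights
    resid = y - X @ w
    r2 = 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    return w, r2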
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Pharmamusicology: Exploring the Impact of Music on the Physiology and Psychology of Anxiety Disorders and Well-Being</title>
<link href="https://hdl.handle.net/1721.1/152023" rel="alternate"/>
<author>
<name>Lecamwasam, Kimaya H.M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152023</id>
<updated>2023-09-01T03:23:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Pharmamusicology: Exploring the Impact of Music on the Physiology and Psychology of Anxiety Disorders and Well-Being
Lecamwasam, Kimaya H.M.
This thesis investigates and assesses the impact of personalized approaches to music-based mental health and well-being support systems grounded in physiology and/or psychology, through analysis of biometric and self-report data. This work is divided into two streams, with four projects classified into the category of “Music as Expression" and one as "Music as Intervention." The first project explores the impact of music composition and performance on self-reported well-being via a "well-being workshop" where participants reported that the music-based activity was engaging and beneficial. The following three projects explored the relationship between live music performance and well-being through data collection during the world premieres of The Distance Between Us, Breathing Together, and the pilot of the Wellbeing Concerts at Carnegie Hall. The Wellbeing Concerts at Carnegie Hall and The Distance Between Us projects yielded novel methods of audience surveying, such as the "In-Concert Well-Being and Affect Survey (ICWAS)," that were informed by the exploratory findings from the performance of Breathing Together. The pilot data, while limited, demonstrate the promise of these approaches and call for further study. While composing The Distance Between Us, I also created and used a method of health-informed notation that is included in this thesis, alongside an archival recording of this piece. Finally, the fifth project, titled "Investigating the Physiological and Psychological Effect of an Interactive Musical Interface for Stress and Anxiety Reduction," assesses the utility of music to reduce the physiological and psychological symptoms of anxiety. Pilot results show a significant reduction in self-reported stress, while the self-reported anxiety and biometric results suggest refinements for future protocols. Together, these five projects serve as first steps towards a nuanced understanding of personalized applications of music-based strategies for mental health and well-being promotion and assessment, highlighting important findings and implications for future research and practice.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Parametric Study and Early-Stage Structural Design for Tall Timber Buildings</title>
<link href="https://hdl.handle.net/1721.1/152021" rel="alternate"/>
<author>
<name>Stark, John A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152021</id>
<updated>2023-09-01T03:15:49Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Parametric Study and Early-Stage Structural Design for Tall Timber Buildings
Stark, John A.
Buildings today account for a substantial portion of global greenhouse gas emissions due to the use of carbon-intensive materials such as steel and concrete. An alternative to these materials, timber, a low embodied carbon material, has been gaining traction in the tall building industry. However, timber structures today have been limited in height due to code restrictions, lack of research and design guidance, cost, and fireproofing. Research has shown that tall timber buildings are feasible but are highly susceptible to large overturning moments and drift due to timber’s light weight and lower stiffness. A solution and trend to create better performing tall buildings with timber has been to design a hybrid structure using a mixture of timber, reinforced concrete, and/or steel in the structural system. Nonetheless, there is a lack of research and guidance taking a holistic and comparative view into the efficiencies of these different timber structural systems at taller heights. With material quantities determining the economics and efficiency of a tall building, and embodied carbon determining its carbon footprint, this thesis conducts a parametric study to evaluate the efficiencies of multiple tall timber structural systems ranging from 10-50 stories and ultimately creates the first timber premium for height graph. Results show that core and winged wall systems, as well as braced systems, are consistently efficient up to 50 stories for material quantities and embodied carbon. The timber premium for height curve shows that the material quantity of timber required for a safe building design increases linearly when designing for gravity loads, increases linearly when designing for lateral strength, and increases exponentially when designing for lateral serviceability. For gravity loads, the quantity of timber needed is 0.65 cu.ft/sf for 10 stories, linearly increasing to 0.80 cu.ft/sf for 50 stories. For lateral loads, the quantity of timber needed is 0.68 cu.ft/sf for 10 stories, exponentially increasing to 1.15 cu.ft/sf for 50 stories. The premium for height curve also shows that all-timber building designs are controlled by lateral strength up to 20 stories, whereas from 20-50 stories, designs are controlled by lateral drift, meaning a stiffness-controlled design. Timber-hybrid systems can be used for more stiffness, but result in a 3-12% increase in embodied carbon when compared to all-timber options and excluding timber sequestration of carbon. Ultimately, these results can be useful for early-stage structural design considerations for tall timber buildings and help promote a sustainable future.
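As a worked reading of the gravity-load premium curve (assuming simple linear interpolation between the stated endpoints), the timber quantity at n stories is q(n) = 0.65 + (0.80 − 0.65)(n − 10)/40 cu.ft/sf, so a 30-story all-timber design would require roughly q(30) = 0.725 cu.ft/sf of timber for gravity loads.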
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Study of vdW Magnetic Materials for Spintronic Applications</title>
<link href="https://hdl.handle.net/1721.1/152020" rel="alternate"/>
<author>
<name>Kajale, Shivam Nitin</name>
</author>
<id>https://hdl.handle.net/1721.1/152020</id>
<updated>2023-09-01T03:01:08Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Study of vdW Magnetic Materials for Spintronic Applications
Kajale, Shivam Nitin
Energy consumption of artificial intelligence (AI) systems is projected to grow at an alarming rate over the next two decades and stands to stress the global energy sector. A way forward is to replace traditional von Neumann computing hardware with technologies like neuromorphic and stochastic computing, which are better suited for AI applications. Here, I study van der Waals magnetic materials for their application in developing spintronic devices to form the building blocks of neuromorphic and stochastic computing architectures. The use of correlated systems like ferromagnets provides a way towards low-energy device switching, while the 2D nature of the materials provides an avenue for building spintronic devices with maximum dimensional scalability and strong prospects of enabling highly energy-efficient mechanisms of switching magnetism. A reliable protocol for fabricating and characterising devices with air-sensitive vdW magnetic materials has been developed, including the electrochemical exfoliation of bulk vdW crystals, the design and building of a 2D material transfer setup, nanofabrication of devices using lithography, and magneto-transport measurements. This work will serve as a strong foundation for future work developing spin-valve devices with vdW materials and exploring energy-efficient modes of switching magnetism in them.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>EMG methods for prosthesis ankle-subtalar free-space control</title>
<link href="https://hdl.handle.net/1721.1/152019" rel="alternate"/>
<author>
<name>Qiao, Junqing</name>
</author>
<id>https://hdl.handle.net/1721.1/152019</id>
<updated>2023-09-01T03:40:50Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">EMG methods for prosthesis ankle-subtalar free-space control
Qiao, Junqing
EMG-based prosthetic joint controllers have been an active research field for more than fifty years. However, several challenges remain to be addressed [9]. Electrode positioning, controller calibration, and the controllers’ linear-approximation error are the most challenging problems among them.&#13;
&#13;
This thesis introduces three methods to solve those problems respectively: 1. A non-negative blind source separation algorithm named non-negative orthogonal decomposition (NOD). This algorithm aims to replace non-negative matrix factorization (NMF) for muscle motion base extraction. NOD recovers the source signals by finding the borders of the input signal and translating those borders onto the coordinate axis; the translated signals are the recovered signals. 2. An unsupervised algorithm for generating joint trajectories from EMG signals in reciprocating movements; the EMG signal and trajectory can be used to calibrate EMG-based prosthesis joint controllers. 3. An innovative EMG-to-joint-position controller, which uses neural networks to compensate for the nonlinearity of the well-known bilinear model [2].&#13;
&#13;
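For illustration, a minimal sketch of the NMF baseline that NOD is compared against (NOD itself is the thesis’s own contribution and is not reproduced here); the EMG matrix below is a placeholder:

import numpy as np
from sklearn.decomposition import NMF

emg = np.abs(np.random.randn(8, 5000))  # placeholder rectified EMG envelopes (channels x samples)
model = NMF(n_components=2, init="nndsvd", max_iter=500)
W = model.fit_transform(emg)            # channels x motion bases
H = model.components_                   # motion-base activations over time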
The NOD algorithm successfully extracted motion bases from the EMG signals; compared with NMF, the motion bases are more independent and stable. The minimum-jerk-based trajectory generator produced smooth and biomimetic trajectories on intact subjects, close to the ground truth collected from the goniometer. The third model also shows considerable improvement in joint angle accuracy over the linear muscle model.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analyzing Affective Responses to Virtual Spaces Using Physiological Sensors and Verbal Descriptions</title>
<link href="https://hdl.handle.net/1721.1/152018" rel="alternate"/>
<author>
<name>Tu, Han</name>
</author>
<id>https://hdl.handle.net/1721.1/152018</id>
<updated>2023-09-01T03:01:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Analyzing Affective Responses to Virtual Spaces Using Physiological Sensors and Verbal Descriptions
Tu, Han
Architects design spaces with assumptions about how their designs will affect users emotionally. These assumptions primarily rely on professional intuitions and subjective experiences. This thesis utilizes wearable sensors to collect and analyze human responses, especially emotions, when experiencing virtual spaces. It tests if the collected data can be used to predict users’ emotional responses to a spatial design in VR and to help architects design in a more informed and data-driven way.&#13;
&#13;
To achieve this, data from 86 individuals in four experiments, each placed in a simulated environment of a synthetic or scanned model, were analyzed. Collected data include verbal descriptions, recorded visual targets, electroencephalography (EEG), galvanic skin response (EDA), and heart rates. The study consists of three parts: (1) design and build a VR environment with wearable sensors to collect data; (2) conduct experiments to collect participants’ physiological responses, verbal descriptions, and visual target data in the VR spaces; and (3) analyze the collected data to confirm that they relate to spatial design.&#13;
&#13;
Experiments demonstrated relationships between physiological data and spatial parameters, such as between EEG calm state and spatial height, and higher vigilance when wandering from a relatively tall space into a relatively short one. In addition, by using verbal description analysis, we found an association between the physiological data and the spatial sequences and sounds. This evidence of correlations between physiological data and spatial or verbal descriptions is a small step toward the development of a toolkit to assist designers in measuring user experience in VR environments.&#13;
&#13;
This methodology offers a useful emotion measurement for a virtual architectural design using multiphysiological sensors and verbal descriptions. It leads to a potential future application that combines physiological metrics and AI methods and informs designers of users’ emotional experiences before a design is finished.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Limits of Expression: On Touch, Emotion, and Communication</title>
<link href="https://hdl.handle.net/1721.1/152017" rel="alternate"/>
<author>
<name>Tsogbe, Deborah</name>
</author>
<id>https://hdl.handle.net/1721.1/152017</id>
<updated>2023-09-01T03:25:18Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Limits of Expression: On Touch, Emotion, and Communication
Tsogbe, Deborah
Touch, being the first sense to develop in the womb, is fundamental to human experience. The tactile sense allows us to investigate the world by providing a framework for understanding it through its relationship to our body. Tactile methods are capable of expressing concepts beyond language. The most effective and meaningful of these expressions are often emotionally charged. They often concern the unspeakable sentiment behind many of our social interactions, the interpretation of which lends a certain depth to our relationships, but beyond this, we often employ self-touch gestures unconsciously or consciously. Through these gestures, we communicate with ourselves – to self-soothe, as a nervous habit, a mindless fidget. Touch expressions can be deployed in countless ways, and we have only begun to understand them. In parallel, we have developed countless methods of expressing ourselves through digital means which subtract some sensory experience from communication. Perhaps the perpetual digital togetherness afforded by the networks we find ourselves living in has dulled our sensitivities to the physical realm of human experience and all that it embodies. As we continue to move further away from physical togetherness, we may lose an understanding of this emotional depth, or lose touch with ourselves. The intention of this research is to marry physical and digital means of communication to understand the unspoken ways in which we are attuned to our inner emotional states and the physical behaviors we use to then express and regulate those states. In this research, I craft a garment embedded with computational means, so that we might develop a methodology for observing how the body understands and expresses itself through touch, and in turn how it communicates with other bodies.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Recursive Robotic Assemblers</title>
<link href="https://hdl.handle.net/1721.1/152015" rel="alternate"/>
<author>
<name>Smith, Miana M.</name>
</author>
<id>https://hdl.handle.net/1721.1/152015</id>
<updated>2023-09-01T03:04:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Recursive Robotic Assemblers
Smith, Miana M.
Biology efficiently builds across size scales: at the scale of tens of nanometers, ribosomes assemble more ribosomes, enabling the highly parallelized production of proteins that make up living systems ranging from prokaryotes at the scale of microns, to blue whales at the scale of tens of meters. At a level above ribosomes, we might consider cell division as another type of assembly process: as the size scale of the assembled parts grows, the assemblers also grow. This represents a recursive and hierarchical assembly process. In contrast, current robotic and CNC construction processes, though often parallelized, are constrained to pre-set, limited assembly rates and sizes. Inspired by biology, this thesis considers how we might develop recursive and hierarchical robotic assembly systems. That is, similar to a biological assembly system, can we develop a robotic assembly system that is able to build robots, structures, and robots integrated in structures?&#13;
&#13;
To this end, we decompose both the robot and the structures into a set of compatible building blocks, or voxels, that can assemble and reassemble into more complex structures. The decomposition of the robot is based on a “functional voxel” that routes electrical signals and power, in addition to mechanical forces. Robotic modules are made by incorporating actuation, and these modules then assemble into reconfigurable robots using a reversible solder joint. An additional set of construction voxels, which do not contain electrical features, enables the robot to assemble higher performance structures. This work exists at the intersection of modular robotics and collective robotic construction, prioritizing scalability — our ability to produce many robots that then build useful structures.&#13;
&#13;
A set of functional voxels, robot modules, and construction voxels have been developed and characterized. The robotic system is characterized by its function: the robot is able to assemble another robot and the robot is able to assemble construction voxels into small structures. The construction voxel system is characterized using mechanical testing, which verifies that the material system is performant. Together, this demonstrates all the elements required for recursive robotic assembly, in which a robot is able to assemble both more robots and larger structures.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Discovering, Learning, and Exploiting Visual Cues</title>
<link href="https://hdl.handle.net/1721.1/152014" rel="alternate"/>
<author>
<name>Tiwary, Kushagra</name>
</author>
<id>https://hdl.handle.net/1721.1/152014</id>
<updated>2023-09-01T03:28:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Discovering, Learning, and Exploiting Visual Cues
Tiwary, Kushagra
Animals have evolved over millions of years to exploit the faintest visual cues for perception, navigation, and survival. Complex and intricate vision systems found in animals, such as bee eyes, exploit cues like the polarization of light relative to the Sun’s position to navigate and to process motion at one three-hundredth of a second. In humans, the evolution of the eyes and the processing of visual cues are also tightly intertwined. Babies develop depth perception at around 6 months, are often scared of their own shadows, and confuse their reflections with the real world. As the infant matures into an adult, they intuitively learn from experience how these cues instead provide valuable hidden information about their environments and can be exploited for depth perception and driving. &#13;
&#13;
Inspired by our usage of visual cues, this thesis explores visual cues in the modern context of data-driven imaging techniques. We first explore how visual cues can be learned from and exploited by combining physics-based forward models with data-driven AI systems. We first map the space of physics-based and data-driven systems and show the future of vision lies in the intersection of both regimes. Next, we show how shadows can be exploited to image and 3D reconstruct the hidden parts of the scene. We then exploit multi-view reflections to convert household objects into radiance-field cameras that can image the world from the object's perspective in 5D. This enables applications of occlusion imaging, beyond field-of-view novel-view synthesis, and depth estimation from objects to their environments. &#13;
&#13;
Finally, we discuss how current approaches rely on humans to design imaging systems that can learn and exploit visual cues. However, as sensing in space, time, and different modalities become ubiquitous, relying on human-designed systems is not sufficient to build complex vision systems. We then propose a technique that combines reinforcement learning with computer vision to automatically learn which cues to exploit to accomplish the task without human intervention. We show how in one such scenario agents can start to automatically learn to use multiple cameras and the triangulation cue to estimate the depth of an unknown object in the scene without access to prior information about the camera, the algorithm, or the object.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Resting State Neurophysiology of Agonist-Antagonist Myoneural Interface in Persons with Transtibial Amputation</title>
<link href="https://hdl.handle.net/1721.1/152009" rel="alternate"/>
<author>
<name>Chicos, Laura A.</name>
</author>
<id>https://hdl.handle.net/1721.1/152009</id>
<updated>2023-09-01T03:17:14Z</updated>
<published>2023-02-01T00:00:00Z</published>
<summary type="text">Resting State Neurophysiology of Agonist-Antagonist Myoneural Interface in Persons with Transtibial Amputation
Chicos, Laura A.
The agonist-antagonist myoneural interface (AMI) is a novel amputation surgery that preserves sensorimotor signaling mechanisms of the central-peripheral nervous systems. Our first neuroimaging study investigating AMI subjects (Srinivasan et al., Sci. Transl. Med. 2020) focused on task-based neural signatures, and showed evidence of proprioceptive feedback to the central nervous system. The study of resting state neural activity helps non-invasively characterize the neural patterns that prime task response. In this first study on resting state fMRI in AMI subjects, we compared resting state functional connectivity in patients with transtibial AMI (n=12) and traditional (n=7) amputations, as well as biologically intact control subjects (n=10). We hypothesized that the AMI surgery would induce functional network reorganization that significantly differs from that of the traditional amputation surgery and more closely resembles the neural configuration of controls. We found AMI subjects to have lower connectivity with salience and motor seed regions compared to traditional amputees. Additionally, for connections affected in traditional amputees, AMI subjects exhibited a connectivity pattern more closely resembling controls. Lastly, sensorimotor connectivity in amputee cohorts was significantly associated with phantom sensation (R²=0.7, p=0.0008). These findings provide researchers and clinicians with a critical mechanistic understanding of the effects of the AMI surgery on the brain at rest, spearheading future research towards improved prosthetic control and embodiment.
</summary>
<dc:date>2023-02-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>996, moyu, and involution: tech work in the age of platform monopoly</title>
<link href="https://hdl.handle.net/1721.1/152005" rel="alternate"/>
<author>
<name>Tan, Jian Shen (JS)</name>
</author>
<id>https://hdl.handle.net/1721.1/152005</id>
<updated>2023-09-01T03:49:44Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">996, moyu, and involution: tech work in the age of platform monopoly
Tan, Jian Shen (JS)
Over the past decade, consumer internet companies such as Google, Facebook, Tencent, and Alibaba have come to symbolize a new era marked by dynamism, entrepreneurialism, and innovation. This has led us to believe that these internet companies play an outsized role in the creation of economic value today, which, given their profits, may come as no surprise. However, in this present iteration of capitalist production, value is seldom thought about in terms of labor. How can we incorporate workers—and the labor they perform—into our economic analysis of consumer internet platforms? And what can a labor theory of value reveal about how value is created among these platforms? My research looks at the labor process of China’s increasingly disgruntled tech workers between the years of 2019 and 2022, the years of China’s so-called “internet winter.” Against popular conceptions of China’s elite tech workers—smart, hardworking, and entrepreneurial—my research shows that the labor process of tech work in China during these years is rife with contradiction. Workers stay long hours at the office when there’s no work to do, spend hours writing reports for managers who never read them, and compete ruthlessly against each other when there’s nothing to gain. In other words, they seem to be doing what the late David Graeber famously called a bullshit job. By looking at its labor process, my research tells a different story of the consumer internet. Rather than being about dynamism, entrepreneurialism, and innovation, my thesis looks at the bullshitization of tech work and explores why the consumer internet has become bloated with nonsense activity.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Designing Novel DNA-Binding Proteins with Generative Deep Learning</title>
<link href="https://hdl.handle.net/1721.1/152003" rel="alternate"/>
<author>
<name>Calman, Ido</name>
</author>
<id>https://hdl.handle.net/1721.1/152003</id>
<updated>2023-09-01T03:12:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Designing Novel DNA-Binding Proteins with Generative Deep Learning
Calman, Ido
Protein-DNA interactions play a critical role in various biological processes, such as gene regulation and genome maintenance. Designing protein backbones specifically tailored for DNA binding remains a challenging task, requiring the exploration of novel computational approaches. This thesis presents a novel framework for generating protein backbones that exhibit affinity for DNA molecules. The proposed methodology leverages Graph Neural Networks (GNNs) for encoding protein structures and diffusion models for conditional sampling. The GNNs capture the intricate relationships between amino acids in the protein backbone, allowing for the effective encoding of structural information relevant to DNA binding. The diffusion models enable the conditional generation of protein backbones, given specific DNA sequences as input. The thesis proposes a Transformer architecture and provides a practical way to diffuse from its protein encoding. The findings from this research have significant implications for the design and engineering of DNA binding proteins, facilitating advancements in fields such as synthetic biology, gene therapy, and drug development.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Empathic Similarity in Personal Narratives</title>
<link href="https://hdl.handle.net/1721.1/151998" rel="alternate"/>
<author>
<name>Shen, Jocelyn</name>
</author>
<id>https://hdl.handle.net/1721.1/151998</id>
<updated>2023-09-01T03:41:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Modeling Empathic Similarity in Personal Narratives
Shen, Jocelyn
The most meaningful connections between people are often formed through expression of shared vulnerability and emotional experiences. Despite the number of ways in which we are able to connect through technology-mediated platforms today, loneliness, apathy, and mental distress are still pervasive around the world. In this thesis, we aim to use NLP systems to humanize personal experiences through identifying similarity in personal narratives based on empathic resonance as opposed to raw semantic or lexical similarity. &#13;
&#13;
We present a novel task for the retrieval of empathically similar stories, as well as the first evaluation benchmark on this task. We operationalize empathic similarity in personal stories using insights from social psychology and narratology, and introduce EmpathicStories, a crowdsourced dataset of emotional personal experiences annotated with features based on our framework and empathic similarity scores between pairs of stories. From our dataset, we provide insights into what features contribute to emotionally resonant stories. &#13;
&#13;
We then compare prompting and fine-tuning large language models (LLMs) for empathic similarity understanding and empathy reasoning summarization. Our experiments show that our model fine-tuned on EmpathicStories achieves performance boosts across both similarity metrics and retrieval metrics compared to state-of-the-art baselines. We additionally conduct a human evaluation to assess the effect our model has on retrieving stories that users empathize with, and compare its performance against naive semantic similarity-based retrieval and ChatGPT-generated stories. We find that participants empathized significantly more with stories retrieved by our model than with standard, off-the-shelf sentence transformer retrieval. In addition, our user studies show that participants expressed they would empathize much less with AI-written stories than with human-written stories. Our work sheds light on how LLMs can be used to reason about the interplay of emotions between narrators and can have strong implications for a wide range of other recommendation, generation, and dialogue tasks. In doing so, we demonstrate the potential for social-emotional reasoning in NLP systems to foster prosociality, human connection, and empathy between people.
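For illustration, a minimal sketch of the retrieval mechanics; the checkpoint below is the stock all-MiniLM-L6-v2 baseline standing in for the fine-tuned EmpathicStories model, and the stories are placeholders:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the fine-tuned model
corpus = ["story A ...", "story B ..."]          # placeholder personal narratives
query = "I moved to a new city and felt invisible for months."

emb_c = model.encode(corpus, convert_to_tensor=True)
emb_q = model.encode(query, convert_to_tensor=True)
hits = util.semantic_search(emb_q, emb_c, top_k=2)[0]  # ranked candidate stories for the query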
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of Private Equity Investments for Industrial Carbon Emission Reduction</title>
<link href="https://hdl.handle.net/1721.1/151996" rel="alternate"/>
<author>
<name>Jacobson, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/151996</id>
<updated>2023-09-01T03:41:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Optimization of Private Equity Investments for Industrial Carbon Emission Reduction
Jacobson, Peter
Industrial businesses are responsible for a significant portion of global greenhouse gas emissions. They must reduce these emissions due to financial, regulatory, and customer pressures, but the pathways to net zero emissions are complex and costly. For companies held by private equity, the inaction is exacerbated by a lack of clarity on how emissions reduction initiatives influence investment returns. This research presents a carbon footprint calculator and an optimization model to analyze emissions reduction projects. We highlight that using these two tools as part of a strategic framework can help manufacturing companies create actionable and profitable emission reduction strategies. The carbon footprint calculator identifies a company’s carbon emission sources and measures its carbon footprint, while the optimization model determines the most profitable investments to meet emissions goals. For our optimization, we use an integer linear programming model that schedules which furnace upgrades to implement each year, with an objective of minimizing total cost to the business. Our results highlight that ancillary process line benefits from furnace upgrades can significantly increase profitability of emissions reduction projects. We also show that, in the absence of new technology, there will need to be a combination of cleaner electricity and carbon pricing for manufacturing companies to profitably meet science-based emissions reduction goals.
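For illustration, a minimal sketch (an assumed structure, not the thesis model) of scheduling binary furnace upgrades per year to minimize total cost under an annual emissions cap, here in the PuLP modeling package; all data values are placeholders:

import pulp

projects, years = ["furnace_A", "furnace_B"], [2024, 2025, 2026]
cost = {p: 2.0e6 for p in projects}       # capital cost per upgrade (assumed)
reduction = {p: 1.0e4 for p in projects}  # tCO2e removed once built (assumed)
baseline, cap = 5.0e4, 4.0e4              # current and allowed annual emissions (assumed)

m = pulp.LpProblem("upgrade_schedule", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (projects, years), cat="Binary")
m += pulp.lpSum(cost[p] * x[p][t] for p in projects for t in years)
for p in projects:  # each upgrade happens at most once
    m += pulp.lpSum(x[p][t] for t in years) &lt;= 1
for t in years:     # cumulative reductions must keep each year under the cap
    m += baseline - pulp.lpSum(reduction[p] * x[p][u] for p in projects for u in years if u &lt;= t) &lt;= cap
m.solve()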
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Liberatory Computing Framework: Empowering High School Students to Mitigate Systemic Oppression through Data Activism</title>
<link href="https://hdl.handle.net/1721.1/151995" rel="alternate"/>
<author>
<name>Walker, Raechel</name>
</author>
<id>https://hdl.handle.net/1721.1/151995</id>
<updated>2023-09-01T03:01:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Liberatory Computing Framework: Empowering High School Students to Mitigate Systemic Oppression through Data Activism
Walker, Raechel
One reason for the underrepresentation of African Americans in the field of computing is the lack of opportunities to engage with data science, particularly in ways that empower their communities.  Current computing curricula do not teach students how to leverage technical skills in service of projects that are more authentic and relevant to the African Americans they are claiming to assist. While computing has the potential to change the world and has become increasingly integrated into our daily lives, the longstanding reality remains that minoritized groups, including African Americans, are underrepresented in computing fields. Moreover, computing classes often present computing as abstract, neutral, and utopian, disregarding its potential for causing harm.&#13;
&#13;
While it is important for everyone to participate in the process of dismantling a complex system of barriers, I focus specifically on why this goal is of particular relevance to African American students. I highlight Dr. El-Amin’s “liberation tools,” which hold that a sound racial identity, critical consciousness, a liberation-centered achievement identity, collective obligation, and activism skills are essential to preparing African Americans to “fight for” racial liberation. Given that computing classes teach students critical thinking skills to solve complex problems, I argue that computing is well-positioned to incorporate “liberation tools”. Liberation tools teach students how to think in terms of systems, which is essential for racial liberation. By expanding the liberation tools, I coin the term “liberatory computing” to reveal how computing curricula can motivate and provide African American students with practical skills to address the racism embedded in society. &#13;
&#13;
I propose two innovative high school curricula that focus on data activism, integrating lessons on racism with the practical application of robust data science skills to support community organizers in their efforts. In the first data activism program, students utilize their data science and social justice skills to address systemic racism through an independent capstone project. They actively engage in conducting background research on specific instances of systemic racism, identifying relevant data sets, and implementing intersectional data analysis techniques. In the second data activism program, students collaborate with community partners to work on a data activism project aimed at supporting minoritized groups in the Greater Boston area. This comprehensive research project encompasses various essential components, such as analyzing student projects, conducting surveys and interviews, and seeking insights from community organizers.&#13;
&#13;
Notably, all community organizers expressed their intention to utilize the students' data activism projects as a valuable resource to enhance their advocacy efforts. For example, one community organization plans to leverage the students' intersectional data visualizations to advocate for policies and laws that address the issue of inland flooding in predominantly African American and low-income communities in Boston. In the second program, surveys indicated a significant increase in the number of students who now acknowledge the impact of data science in combating racism, along with an increased ability to employ their academic achievements to mitigate racial injustices. Furthermore, interviews conducted with students who participated in the second program revealed a unanimous desire to incorporate data activism into their future endeavors. Impressively, twelve out of seventeen students discussed specific ideas on how they plan to utilize data science and social justice principles in their forthcoming pursuits.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Building a Pedagogical Agent that Supports Children’s Exploration and Home Literacy Education</title>
<link href="https://hdl.handle.net/1721.1/151994" rel="alternate"/>
<author>
<name>Zhang, Xiajie</name>
</author>
<id>https://hdl.handle.net/1721.1/151994</id>
<updated>2023-09-01T03:31:49Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Towards Building a Pedagogical Agent that Supports Children’s Exploration and Home Literacy Education
Zhang, Xiajie
Early childhood education has remained a critical topic and challenge over the past decades because of its impact on children’s futures. Since the late 90s, early childhood and developmental psychology researchers have promoted a new child-centered, exploration-and-play-focused method of early childhood education. Despite these research findings, most countries still employ early childhood curricula focused on school readiness and kindergarten competency, although there is a new tendency in institutions toward a holistic, child-focused approach that promotes learning and exploration. Pedagogical agents are an under-studied platform for supporting children’s exploration and self-directed learning outside school. This thesis investigates the possibility of using a pedagogical AI agent to help children’s exploration at home.&#13;
&#13;
I first describe the design and development of an interactive storybook platform with explorable literacy features, through which children are endowed with the resources to learn by themselves. I then describe the robot’s behavior design for the exploration demonstration with a social robot platform. Later, I discuss two different robot interaction paradigms for the delivery of the demonstration behavior.&#13;
&#13;
I evaluate the system with 35 children in a between-group ABA study design. Participants interacted with the robot for 2 to 4 weeks and completed 8 sessions in total. In one study group, children’s exploration was self-guided: children had the agency to decide when to interact with the robot peer, and the robot reactively delivered demonstration behaviors in response to the child’s initiation. In the other study group, the robot’s behaviors were driven by its personalization algorithm; thus, it autonomously delivered interactions without the child’s initiation.&#13;
&#13;
The data analyses were conducted on two scales: children’s self-explorative behaviors and vocabulary learning. The results show that with a proactive robot peer giving exploration demonstrations, children became more explorative than children who interacted with the reactive robot peer, even though the robot demonstrated exploration in both conditions. Moreover, we find that children’s exploration is associated with their learning in the robot-guided exploration condition, suggesting that children’s self-explorative behavior in the robot-guided group is learning-oriented and related to their learning growth. When comparing children’s adaptation of exploration, we find a ceiling effect common in child-robot interaction: when children exhibited high exploration in the early phases of the intervention, their exploration growth in succeeding sessions was smaller than that of less explorative children. Finally, we find an association between children’s exploration and the storybook genre, possibly due to their familiarity with the storybook genre and their engagement.&#13;
&#13;
In addition, this thesis attempts to understand the educational needs of home literacy programs from parents’ perspectives. After the families lived with the pedagogical social robot for several weeks, the experimenters conducted semi-structured interviews with the parents. The qualitative analysis and coding of parents’ interview transcripts suggest common themes in robot design and educational functions that parents want in a long-term pedagogical agent for their home.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>PLACEIFY: A data-driven framework for evaluation-by-analogy in early-stage urban analysis and design</title>
<link href="https://hdl.handle.net/1721.1/151993" rel="alternate"/>
<author>
<name>Sanatani, Rohit Priyadarshi</name>
</author>
<id>https://hdl.handle.net/1721.1/151993</id>
<updated>2023-09-01T03:30:12Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">PLACEIFY: A data-driven framework for evaluation-by-analogy in early-stage urban analysis and design
Sanatani, Rohit Priyadarshi
Within the field of urban design and planning, the explicit parameterization of many complex aspects of urban environments is a challenge. Specifically, the representation of ‘intangible’ experiential and affective qualities is often difficult, which makes the quantitative evaluation of such qualities problematic. Owing to these challenges, representation and evaluation through reference has long remained an essential component of design processes. Through case studies of existing environments, designers often attempt to represent, convey, and evaluate complex qualities of their own designed outcomes. However, there exist very few frameworks for systematic, data-driven referencing in contemporary urban design workflows.&#13;
&#13;
Building on advances in urban information systems, big data analytics, and computer vision, this research demonstrates a data-driven framework for evaluation-by-analogy that allows urban designers, planners, and analysts to systematically reference real urban environments based on designed or envisioned urban qualities. The system generates a database of diverse urban locations across different geographical and cultural contexts around the world. A data collection pipeline is created for the extraction of selected visual, morphological, land-use, and demographic features for each sample from geolocated street-view imagery, Geographic Information System (GIS) data, land-use records, and census data. For design exploration and evaluation, the system offers novel interfaces and representation structures that allow users to explore ‘similar’ samples based on envisioned urban qualities. It also allows for reference-based scenario building through explicit modification of urban parameters, as well as through other forms of reference exploration. To demonstrate the framework, a prototype web application titled ‘PLACEIFY’ is developed. Usability tests involving urban designers and planners indicate that such a system has strong potential to serve as a valuable decision support tool, by providing relevant data at each iteration of an imagination-modification-evaluation cycle in design.
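
To make the ‘explore similar samples’ step concrete, here is a minimal sketch assuming each sampled location has already been reduced to a numeric feature vector of visual, morphological, land-use, and demographic measures; the data, dimensions, and query are illustrative stand-ins, not PLACEIFY’s actual pipeline.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors
    from sklearn.preprocessing import StandardScaler

    # Toy feature matrix: rows = sampled urban locations, cols = extracted features.
    X = np.random.default_rng(0).random((500, 12))
    scaler = StandardScaler().fit(X)
    index = NearestNeighbors(n_neighbors=5).fit(scaler.transform(X))

    # An envisioned scenario: an existing place with one parameter modified.
    query = X[42].copy()
    query[3] += 0.2                                   # e.g. raise a density measure
    dist, idx = index.kneighbors(scaler.transform(query.reshape(1, -1)))
    print("closest real-world analogues:", idx[0])

Standardizing before the distance computation keeps any one feature (say, raw census counts) from dominating the similarity ranking.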
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Innovation Process at Omnichannel DCs Undergoing Shifts in Channel Mix</title>
<link href="https://hdl.handle.net/1721.1/151992" rel="alternate"/>
<author>
<name>Gouthro, Fiona</name>
</author>
<id>https://hdl.handle.net/1721.1/151992</id>
<updated>2023-09-01T03:41:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Innovation Process at Omnichannel DCs Undergoing Shifts in Channel Mix
Gouthro, Fiona
An increase in digital demand has forced a distribution center (DC) that previously fulfilled mostly wholesale orders to adjust its operational strategy and pursue innovation to address the shift in channel mix. This thesis aims to develop and propose an innovation framework that can be used to identify the areas in the DC that are constrained and the technologies that can be used to tackle these constraints. The proposed process is based on external innovation methods identified through research and internal innovation methods present in the company today. Once developed, the proposed innovation process is evaluated through application to a previous DC innovation to ensure viability. The proposed process is then applied to the DC today to recommend areas and technologies for innovation investment. It is concluded that the proposed process performs well when applied to a previous innovation and is therefore deemed viable. When applied to the DC today, the process recommends an investment in the DC’s footwear selection area using process and staffing optimization.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Between the Lines: Encoding Relations Through Body, Tool, and Algorithm</title>
<link href="https://hdl.handle.net/1721.1/151991" rel="alternate"/>
<author>
<name>Schumacher, Zachary Steven</name>
</author>
<id>https://hdl.handle.net/1721.1/151991</id>
<updated>2023-09-01T03:58:07Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Between the Lines: Encoding Relations Through Body, Tool, and Algorithm
Schumacher, Zachary Steven
The tools architects use orchestrate the discipline in seen and unseen ways. In recent decades, we have swapped early forms of mechanical drawing instruments for digital tools with unimaginable computing power. While this increased level of computational literacy allows us to script and code architectural forms more efficiently, it has also created incongruities between the computationally described object and material constructions. At times the digital tools we depend on today go as far as defining the aesthetic of our buildings. To complicate this further, the digital tools most often solicited by the architectural practice are non-native imports adapted for their visual potential and practical uses. The meanings embedded within the programming of the tools that shape our buildings are thus residual values of other disciplines. For example, we can trace the origins of CAD software back to engineers and mathematicians at Boeing and here at MIT, who sought to mechanize the construction of splines and irregular curved surfaces for the production of slipstream automobiles, toothbrushes, and even letterforms. And much like the hidden algorithms in the background of our digital tools, there is an apparatus of choreography surrounding our physical tools that encodes instructions on how the body engages with the object. In other words, the machines we use produce not only drawings but gestures as well, keying us into the always-present yet rarely discussed embodied dimensions of tools.&#13;
&#13;
To expand upon the embodied dimensions of our tools today, we need to reconsider the machine as the site of intervention. Motion data and performance envelopes surrounding our tools extend beyond the projective reenactment of the machine and offer us a means to measure the derivative of what it takes to produce a drawing, a surface, or a construction. This thesis dislocates the spline from its formal geometry associated with slipstream construction and recasts it as a way to record the tumble-type inscriptions surrounding an object’s performance — a tactic to mutually mark and negotiate the activity between humans and machines.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>ColloGraphy: Designing Augmented Visual-Haptic Feedback Systems to Support Fine Motor Skill Learning</title>
<link href="https://hdl.handle.net/1721.1/151989" rel="alternate"/>
<author>
<name>Fang, Mengying</name>
</author>
<id>https://hdl.handle.net/1721.1/151989</id>
<updated>2023-09-01T03:19:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">ColloGraphy: Designing Augmented Visual-Haptic Feedback Systems to Support Fine Motor Skill Learning
Fang, Mengying
Learning motor skills is an essential part of our daily lives. However, certain activities can be challenging to comprehend through observation alone, especially for beginners. Specifically, this thesis focuses on Chinese calligraphy writing as an example of a complex, fine motor task whose learning can be augmented by additional feedback. The traditional method of learning calligraphy writing involves learners comparing the visual differences between their writing and static expert manuscripts. In this process, the opportunity for novice learners to internalize the bodily sensations required for mastery is absent. To address this issue, this thesis presents the design of a series of prototypes that capture, recreate, and re-enact bodily movement during calligraphy writing. These prototypes support novice learners with augmented visual and haptic feedback. The thesis presents a comprehensive comparison of the three different approaches explored in the prototypes, highlighting the importance and challenge of finding the right amount of system intervention to achieve effective motor skill learning; in short, inconsistent or excessive intervention may lead to confusion or over-reliance, while insufficient intervention may fail to assist learning. Accordingly, design recommendations for successful multimodal feedback systems for supporting motor skill learning are presented.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A conformable ultrasound patch for cavitation enhanced transdermal cosmeceutical delivery</title>
<link href="https://hdl.handle.net/1721.1/151988" rel="alternate"/>
<author>
<name>Shah, Aastha</name>
</author>
<id>https://hdl.handle.net/1721.1/151988</id>
<updated>2023-09-01T03:10:37Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A conformable ultrasound patch for cavitation enhanced transdermal cosmeceutical delivery
Shah, Aastha
Increased consumer interest in healthy-looking skin demands a safe and effective method to increase transdermal absorption of innovative therapeutic cosmeceuticals. However, permeation of small-molecule drugs is limited by the innate barrier function of the stratum corneum. Here, we report a conformable ultrasound patch (cUSP) that enhances transdermal transport of niacinamide by inducing intermediate-frequency sonophoresis in the fluid coupling medium between the patch and the skin. The cUSP consists of piezoelectric transducers embedded in a soft elastomer to create localized cavitation pockets (0.8 cm², 1 mm deep) over larger areas of conformal contact (20 cm²). Multiphysics simulation models, acoustic spectrum analysis and high-speed videography are used to characterize transducer deflection, acoustic pressure fields and resulting cavitation bubble dynamics in the coupling medium. The final system demonstrates a 26.2-fold enhancement in niacinamide transport in a porcine model in vitro with a 10-minute ultrasound application, demonstrating suitability of the device for short-exposure, large-area application of sonophoresis for patients and consumers suffering from skin conditions and premature skin aging.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Materials Characterization and Spectroscopy for a Methane Abatement Catalyst</title>
<link href="https://hdl.handle.net/1721.1/151987" rel="alternate"/>
<author>
<name>Wilkinson, Mollie</name>
</author>
<id>https://hdl.handle.net/1721.1/151987</id>
<updated>2023-09-01T03:29:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Materials Characterization and Spectroscopy for a Methane Abatement Catalyst
Wilkinson, Mollie
Methane is the second-most emitted greenhouse gas after carbon dioxide, and it is significantly more powerful as a short-term warmer, making it a valuable target for climate change mitigation efforts. Zeolites are earth-abundant minerals common in catalysis for their low price combined with high conversion and throughput potential. This study evaluates a specific copper-zeolite (mordenite) methane oxidation catalyst for long-term durability and potential performance at 400 °C and 950 °C. Using materials characterization and spectroscopy techniques including scanning-electron microscopy (SEM), energy-dispersive X-ray spectroscopy (EDS), Brunauer-Emmett-Teller analysis (BET), differential scanning calorimetry (DSC), and X-ray diffraction (XRD), chemical and structural changes are tracked, identified, and assessed over the course of three months. Samples treated at 400 °C show no major structural or chemical changes in the catalyst, while samples treated at 950 °C show gradual transformation into a nonporous quartz-mullite-cristobalite mixture. This suggests indefinite catalyst stability at the former temperature and progressive catalyst degradation at the latter, providing plausible long-term operating conditions and peak temporary conditions for this method of methane abatement.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Latent Lab: Exploration Beyond Search and Synthesis</title>
<link href="https://hdl.handle.net/1721.1/151986" rel="alternate"/>
<author>
<name>Dunnell, Kevin F.</name>
</author>
<id>https://hdl.handle.net/1721.1/151986</id>
<updated>2023-09-01T03:23:42Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Latent Lab: Exploration Beyond Search and Synthesis
Dunnell, Kevin F.
This Master’s thesis investigates the potential of artificial intelligence (AI) models, particularly machine learning and natural language processing techniques, to facilitate brainstorming and ideation in the invention process. The thesis centers around the iterative development of “Latent Lab,” an interactive tool for exploring relationships among MIT Media Lab research projects. The work offers insights into AI systems as co-inventors by addressing the challenges of organizing, searching, and synthesizing content. Our method for interacting with the material is based on “exploration” rather than search. The primary objective was to create a human-AI co-invention system and evaluate its performance on the novelty of co-created ideas. However, the research underscored the importance of accurate data organization for meaningful data generation. Consequently, later versions of Latent Lab focused primarily on improving data organization and interactive exploration. The tool’s success was measured by its effectiveness in familiarizing users with research projects at the Media Lab, ultimately laying the foundation for the future development of human-AI co-invention systems.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>When War Becomes Peace: Ruination and Transvaluation in the Hiroshima and Nagasaki Peace Memorial Parks</title>
<link href="https://hdl.handle.net/1721.1/151985" rel="alternate"/>
<author>
<name>Shirokawa, Nanase</name>
</author>
<id>https://hdl.handle.net/1721.1/151985</id>
<updated>2023-09-01T03:32:16Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">When War Becomes Peace: Ruination and Transvaluation in the Hiroshima and Nagasaki Peace Memorial Parks
Shirokawa, Nanase
In postwar Japan, “peace” has become the memorial scaffolding that structures the collective national orientation towards the legacy of the Asia-Pacific War, in large part owing to the devastating bombings of Hiroshima and Nagasaki. Yet the atomic catastrophes endured by the two cities have become subsumed into what Anne McClintock terms the “administration of forgetting.” The traumas associated with the bombs have been construed in Japan as an experience of national victimhood and a moral lesson for humanity, in the process obfuscating histories of imperial terror that I argue are carried forward in significant formal continuities, transvalued in a discourse of peace. Peace, in this regard, becomes a mode for asserting a clean rupture and justifying political amnesia.&#13;
&#13;
Peace is the directive of the memorial landscapes of Hiroshima and Nagasaki, and peacemaking was the process by which ruination became the pretext for social, political, and urban reinvention. The Hiroshima and Nagasaki Peace Memorial Parks, both unveiled in 1955, manifest the ways in which dominant public discourses of peace-making and nuclear remembrance were actualized through the reconstruction of the post-atomic cities.&#13;
&#13;
The processes behind the making of the two parks and their approaches to remembering atomic violence trouble the perception that the memorials are shaped solely by the circumstances of the bomb and the postwar milieu of liberal democracy. These sites, I argue, are intimately informed by a constellation of transwar aspirations, wartime representational practices, bureaucratic tensions, as well as urban and regional histories that span beyond the moment of 1945.&#13;
&#13;
In its dual focus on the spatial narratives of Tange Kenzō’s plan for Hiroshima and the material and bodily politics of Kitamura Seibō’s Peace Statue in Nagasaki, this study also addresses the persistent marginalization of Nagasaki in the discourse of nuclear disaster. A close study of these two sites makes evident the need to take seriously the transmutation and transvaluation of representational modes across shifting regimes. The threat of historical forgetting emerges not only in the absences and forced silences, but also in the adoption of a passive gaze towards our extant memorial infrastructure.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Platforms for Biological Control</title>
<link href="https://hdl.handle.net/1721.1/151984" rel="alternate"/>
<author>
<name>Gretton, Dana</name>
</author>
<id>https://hdl.handle.net/1721.1/151984</id>
<updated>2023-09-01T03:40:26Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Platforms for Biological Control
Gretton, Dana
Innovation at the level of platforms provides the greatest leverage for shaping science and technology. Here, two key weaknesses in platforms for biology are addressed: the lack of accessible, extensible open-source software for the exploding field of robotic bioautomation, and the lack of a standardized, consistent platform for screening the manufacture of dangerous DNA. Pyhamilton and SecureDNA are introduced. Pyhamilton is the first open-source Python package for controlling Hamilton liquid-handling robots for biology. SecureDNA is the only DNA screening concept to prioritize anti-proliferation of pathogen genomes, and the first to employ modern cryptography to secure a global screening system that can keep up with the anticipated exponential growth of the DNA synthesis market. For each platform developed, a software implementation is provided and exercised in a range of applications, and hardware demonstrators have been produced.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Automated Design of Machine Perception Systems</title>
<link href="https://hdl.handle.net/1721.1/151981" rel="alternate"/>
<author>
<name>Klinghoffer, Tzofi</name>
</author>
<id>https://hdl.handle.net/1721.1/151981</id>
<updated>2023-09-01T03:25:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Towards Automated Design of Machine Perception Systems
Klinghoffer, Tzofi
Animals' visual perception systems have evolved to their environments over billions of years, enabling them to navigate, avoid predators, and hunt prey. In contrast, machine perception systems designed by humans require significant engineering and often use standard cameras that may not be well suited to their task or environment. Consider building a robot to pick up trash. The choice of robot sensors impacts which type of trash it can detect, e.g. perhaps an infrared sensor is needed to detect plastic bottles. In addition, animals are able to understand their environment from different viewpoints and under variable lighting, while machine perception systems often fail to generalize beyond the distribution of training data. Inspired by the evolution of animals' visual perception systems, this thesis explores two distinct but related problems: (1) automated design of machine perception systems, and (2) robustness of machine perception systems to physical phenomena, such as lighting and camera viewpoint. Machine perception systems (also referred to as imaging systems in this thesis) consist of cameras and perception models. Cameras are used to sense the environment and capture observations, while perception models are used to analyze captured observations. Cameras contain (1) illumination sources, (2) optical elements, and (3) sensors, while perception models use (4) algorithms. Directly searching over all combinations of these four building blocks to design a machine perception system is challenging due to the size of the search space. In Part I of this thesis, we introduce DISeR: Designing Imaging Systems with Reinforcement Learning, a method that allows task-specific imaging systems to be created and optimized in simulation. In Part II of this thesis, we study the robustness of machine perception systems to physical phenomena. We introduce two methods to mitigate the susceptibility of deep learning models to failure when exposed to out-of-distribution lighting and camera viewpoints. The first method uses disentanglement of features to improve robustness, while the second method modifies pixels to improve robustness. We evaluate our work using standard benchmarks and peer-reviewed publications.
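
Since the abstract frames camera/algorithm co-design as a search problem tackled with reinforcement learning, a toy REINFORCE loop over a discrete sensor choice illustrates the basic pattern of sampling a design, scoring it in simulation, and nudging the policy toward higher task reward; the simulator stub and all numbers below are invented and are not DISeR's actual formulation.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_task_accuracy(sensor_idx):
        # Hypothetical stand-in for rendering a scene and running the perception model.
        quality = [0.55, 0.70, 0.90]
        return quality[sensor_idx] + 0.05 * rng.standard_normal()

    logits, lr, baseline = np.zeros(3), 0.5, 0.0   # softmax policy over 3 sensors
    for _ in range(500):
        p = np.exp(logits - logits.max()); p /= p.sum()
        a = rng.choice(3, p=p)                     # sample a candidate design
        r = simulate_task_accuracy(a)              # score it in "simulation"
        baseline += 0.1 * (r - baseline)           # running-average reward baseline
        grad = -p.copy(); grad[a] += 1.0           # grad of log pi(a) w.r.t. logits
        logits += lr * (r - baseline) * grad
    print("selected sensor:", int(np.argmax(logits)))

The real search space couples illumination, optics, sensors, and algorithms, so the interesting engineering lies in structuring that joint space, but the sample-score-update loop is the same.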
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>SenseMate: An AI-Based Platform to Support Qualitative Coding</title>
<link href="https://hdl.handle.net/1721.1/151980" rel="alternate"/>
<author>
<name>Overney, Cassandra</name>
</author>
<id>https://hdl.handle.net/1721.1/151980</id>
<updated>2023-09-01T04:00:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">SenseMate: An AI-Based Platform to Support Qualitative Coding
Overney, Cassandra
Unstructured data can be analyzed numerically or qualitatively through methods like sensemaking. One of the key stages of sensemaking is qualitative coding, where the data is divided into units, and each unit is assigned a category or code. Unfortunately, coding is tedious and time-consuming when carried out manually. Finding a balance between manual and fully-automated coding can help increase efficiency while allowing human judgment and preventing systematic machine errors. In this thesis, I propose an accessible semi-automated approach to qualitative coding. First, I apply a novel machine learning method, rationale extraction models, to qualitative coding. These models recommend themes for each unit of analysis in qualitative data and tend to perform better with less ambiguous themes. Through an online experiment, I find that assistance from rationale extraction models increases coding performance and reliability. Next, I execute an iterative, human-centered design process to create SenseMate, an AI-based platform for qualitative coding. After 13 user testing sessions and 3 design iterations, I observe that model overreliance can be minimized through cognitive forcing functions and easy-to-understand model explanations. I also design several ways for users to efficiently provide feedback on machine-generated rationales. To connect my model and design evaluations, I implement a prototype of SenseMate and conduct a summative user evaluation through an online experiment. The evaluation reveals that participants with access to AI assistance have higher coding performances but spend more time on the platform. The effectiveness of various design decisions within SenseMate is also explored. Finally, I discuss a myriad of future work possibilities. Overall, this thesis offers a practical and accessible solution to analyzing unstructured data, which has broad applications for researchers and organizations across various fields.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generation of focal Depdc5 knockout mouse model and implications for focal epilepsy</title>
<link href="https://hdl.handle.net/1721.1/151978" rel="alternate"/>
<author>
<name>Groff, Karenna J.</name>
</author>
<id>https://hdl.handle.net/1721.1/151978</id>
<updated>2023-09-01T03:55:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Generation of focal Depdc5 knockout mouse model and implications for focal epilepsy
Groff, Karenna J.
Epilepsy is a neurological disorder that impacts more than 65 million individuals globally, and one in every 200 children. DEPDC5 is the most commonly identified gene associated with familial focal epilepsy and malformations of cortical development. It is also associated with an increased risk of Sudden-Unexplained Death in Epilepsy (SUDEP). It remains unknown whether seizures due to DEPDC5 loss are a result of in utero cortical developmental defects or later neuronal dysfunction of mTORC1 signaling. To test this, we developed a postnatal adeno-associated virus (AAV) mediated focal cortical Depdc5 knockout mouse model. Viral vectors containing either 2/8AAV-Cre or control 2/8AAV-GFP were injected into the unilateral motor cortex of postnatal day zero or day one Depdc5 floxed (Depdc5c/c or Depdc5c/-) mouse pups. We confirmed a significant reduction in DEPDC5 levels and increased mTOR activity in the AAV-Cre injected hemisphere compared to the contralateral hemisphere or control AAV-GFP injected mice. Cortical lamination was not disrupted by AAV-Cre or AAV-GFP injection. Focal Depdc5 knockout mice have lowered seizure thresholds and increased mortality from seizures. Acute fasting is protective against seizures in a DEPDC5-dependent manner, which is facilitated by the control hemisphere of focal Depdc5 knockout mice. Focal Depdc5 knockout mice have increased cortical thickness, increased cortical neuron size and dysplastic neurons throughout the cortex, similar to the abnormal neurons seen in human focal cortical dysplasia specimens. Glial abnormalities in the Depdc5 knockout region are identified, such as hypomyelination, reactive astrogliosis, and microglial activation. Our focal Depdc5 knockout mouse model recapitulates clinical, pathological, and biochemical features of human DEPDC5-related epilepsy and brain malformations. Our study reveals that postnatal DEPDC5 loss without disruption of cortical migration is sufficient to cause epilepsy and SUDEP. Restoration of DEPDC5 function via gene therapy represents a viable treatment approach.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Walk Deserts</title>
<link href="https://hdl.handle.net/1721.1/151974" rel="alternate"/>
<author>
<name>Blinder, Justin</name>
</author>
<id>https://hdl.handle.net/1721.1/151974</id>
<updated>2023-09-01T03:35:22Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Walk Deserts
Blinder, Justin
This thesis describes a new methodology to identify, measure, and understand “Walk Deserts.” This methodology comprises a system for identifying, mapping, and visualizing areas that are ostensibly highly walkable places according to traditional criteria and indicators, but that are also plagued with invisible environmental factors that impede walkability and threaten public health. This research has two principal aims: (1) to better understand and begin to address blind spots in walkability indicators as well as perception-based measurements (that are difficult to quantify and subject to bias), and (2) to present a greater range of environmental data associated with walkability and negative health outcomes in publicly accessible ways in order to facilitate community engagement. Two key contributions emerged from this research: (1) a theoretical re-definition of the concept of “walk deserts” to highlight typically overlooked aspects of walkability, and (2) a creative and technical contribution that focuses on finding “walkable deserts in the City” and visualizing these deserts in immersive ways. Boston’s Chinatown district serves as a case study site, a “walk desert” hidden in plain sight. With the presence of greenways surrounded by highways, it appears to be a seemingly walkable and even heavily touristed neighborhood with dramatically poor health outcomes. Digital photogrammetry is used to explore how merging photorealistic, three-dimensional spatial models with environmental data can produce immersive and interactive data visualizations, including a web application, an augmented reality interface, and an interactive installation. These interfaces expose the “walk desert” hidden in Chinatown, and provide a mechanism to engage members of the community, as well as researchers and policy-makers, in the process of transforming degraded urban spaces into healthier and more vibrant ones.
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Motor Design and Control for Scalable Distributed Actuation</title>
<link href="https://hdl.handle.net/1721.1/151973" rel="alternate"/>
<author>
<name>Preiss, David</name>
</author>
<id>https://hdl.handle.net/1721.1/151973</id>
<updated>2023-09-01T03:26:57Z</updated>
<published>2022-09-01T00:00:00Z</published>
<summary type="text">Motor Design and Control for Scalable Distributed Actuation
Preiss, David
Machine design is often constrained to a limited number of controllable degrees of freedom due to the cost and complexity associated with integrating large numbers of actuators. This thesis explores the hardware and control development for a motor architecture designed for distributed actuation, where many controllable degrees of freedom are required across macro scale structures. Availability of a low cost and easily integrated actuator at these scales would open new regimes for fields such as robotics, manufacturing, human computer interaction and wireless communication.&#13;
&#13;
A survey of prior distributed actuation research is conducted, including shape memory alloy, piezoelectric, hydraulic, and electric motor topologies. A new approach using a multiplexed two-phase axial flux PCB motor is designed and iterated on through empirical testing and simulation. These motors are integrated into a modular 64-actuator array, and a proof of concept is built capable of distributed linear motion for interpolation of a surface or as independent degrees of freedom. The prototype achieves 21 μm linear resolution over 45 mm of stroke, with a 1.9 N stall force and a density of 104 actuators per square foot. Motor commutation is achieved through multiplexing of individual motor windings, allowing for sub-linear cost and component-count scaling (see the sketch below). Actuator performance is addressed over a number of parameters, including output torque, speed, mass, resolution, and range of motion, as well as parameters critical to scalability, including motor footprint, cost, and power consumption. Finally, two applications of serially distributed actuation are discussed: the design of modular continuum robots from a discrete toolkit of structural elements, and a serpentine actuator with many controlled degrees of freedom.
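
A back-of-envelope illustration of why winding multiplexing gives sub-linear component scaling (our arithmetic, not the thesis's wiring): in a row/column-addressed array, an r x c grid of windings needs only r + c drive channels, so channel count grows roughly as 2*sqrt(n) rather than n.

    import math

    def drive_channels(n_actuators):
        side = math.isqrt(n_actuators)   # assume a square array
        return 2 * side

    for n in (16, 64, 256):
        print(n, "actuators ->", drive_channels(n), "channels (vs", n, "direct)")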
</summary>
<dc:date>2022-09-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanical Shock Analysis and Testing of an Air-Dropped Antarctic Ice Penetrator</title>
<link href="https://hdl.handle.net/1721.1/151972" rel="alternate"/>
<author>
<name>Brown Jr., Michael James</name>
</author>
<id>https://hdl.handle.net/1721.1/151972</id>
<updated>2023-09-01T03:02:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Mechanical Shock Analysis and Testing of an Air-Dropped Antarctic Ice Penetrator
Brown Jr., Michael James
The Seismo-Geodetic Ice Penetrator (SGIP) is an air-dropped kinetic penetrator that will deposit a geophysics-grade seismometer and GPS receiver into Antarctic ice shelves. These instruments will measure vibrations in the infragravity (&lt; 0.1 Hz) range to improve understanding of forces that cause ice shelf calving. The penetrator will impact at its terminal velocity (roughly 40 m/s) to deploy the seismometer at least 2 meters below the ice shelf surface. SGIP will separate into two components, the body and flare, on impact to embed the seismometer into the ice shelf and transmit data from the ice shelf surface, respectively. However, the penetrator's impact can accelerate the primary payload up to 129 g along its central axis, which can damage the delicate seismometer. SGIP uses shock isolation to reduce the seismometer's peak acceleration. A structural response model is used to predict the seismometer's dynamic response to ice shelf impacts with hundreds of potential shock isolation designs. This structural response model is used to select a candidate shock isolation design based on the SGIP prototype's volume constraints. A 22 cm long, 4.8 cm diameter cylinder made of IMPAXX 300 foam can be used to limit the seismometer's peak axial accelerations to 55 g, which represents a 57% reduction from the maximum expected peak axial acceleration. A shear pin assembly is designed to rigidly connect the body and flare during descent yet deliberately shear on impact to separate the body and flare. Risk reduction tests are conducted to lower the probabilities of the shear pins' four primary failure modes.
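
To illustrate the kind of structural response model described here, a single-degree-of-freedom base-shock integration with a linear spring-damper isolator gives the peak payload acceleration; the pulse duration and isolator parameters below are assumed for illustration and are not the thesis's IMPAXX foam model.

    import numpy as np

    g = 9.81
    A, T = 129 * g, 0.01              # assumed half-sine base pulse: 129 g peak, 10 ms
    wn, zeta = 2 * np.pi * 10, 0.2    # assumed isolator: 10 Hz natural freq, 20% damping

    def a_base(t):
        return A * np.sin(np.pi * t / T) if 0.0 <= t <= T else 0.0

    dt, t, z, zd, peak = 1e-6, 0.0, 0.0, 0.0, 0.0
    while t < 0.3:
        zdd = -a_base(t) - 2 * zeta * wn * zd - wn**2 * z   # relative-motion ODE
        peak = max(peak, abs(zdd + a_base(t)))              # absolute payload accel
        zd += zdd * dt
        z += zd * dt
        t += dt
    print(f"peak payload acceleration: {peak / g:.1f} g")

A softer isolator trades stroke (relative displacement) for lower peak acceleration, which is why the candidate design is constrained by the prototype's available volume.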
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>AI Enhanced Reasoning: Augmenting Human Critical Thinking with AI Systems</title>
<link href="https://hdl.handle.net/1721.1/151971" rel="alternate"/>
<author>
<name>Danry, Valdemar M.</name>
</author>
<id>https://hdl.handle.net/1721.1/151971</id>
<updated>2023-09-01T03:21:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">AI Enhanced Reasoning: Augmenting Human Critical Thinking with AI Systems
Danry, Valdemar M.
The pursuit of knowledge and understanding has been a driving force for humanity since the beginning of time. This relentless quest for reason has shaped the world as we know it, enabling us to unlock secrets of the cosmos, develop innovative technologies, and address complex global challenges. However, despite our cognitive leaps, we still grapple with the limitations of our rationality, biases, and emotions, especially in today's increasingly complex and information-saturated world. As AI systems become more entwined with our daily lives and institutions, there is a growing need to design and deploy AI systems that augment human reasoning, foster critical thinking, and promote well-informed decision-making.&#13;
&#13;
This thesis investigates the potential for AI-enhanced reasoning systems and their impact on human decision-making. Specifically, it explores three distinct aspects of critical thinking with AI systems: (1) the development of AI logic-checking systems designed to help identify reasoning flaws, (2) examining the susceptibility of individuals to deceptive AI-generated explanations, and (3) assessing the potential of a novel AI-framed questioning interaction method to provoke critical thinking through a series of human subjects experiments.&#13;
&#13;
These investigations aim to shed light on the implications of AI systems for human reasoning and provide insights into designing AI interventions that meaningfully enhance our cognitive abilities. The findings demonstrate the potential for intelligently designed AI systems to support human reasoning, while also highlighting the potential risks associated with overreliance on these tools. By addressing these challenges, this thesis contributes to the ongoing conversation around the development of AI systems that advance our reasoning, and takes steps toward cultivating a discerning and rational citizenry capable of navigating the complexities of the modern world.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Network Optimization-based Approach for Identification of Illegal Trade in the Global Timber Supply Chain</title>
<link href="https://hdl.handle.net/1721.1/151970" rel="alternate"/>
<author>
<name>Hallermeyer, Cyrian H.</name>
</author>
<id>https://hdl.handle.net/1721.1/151970</id>
<updated>2023-09-01T03:22:08Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Network Optimization-based Approach for Identification of Illegal Trade in the Global Timber Supply Chain
Hallermeyer, Cyrian H.
Forest ecosystems play a crucial role in the global carbon cycle and provide numerous regional environmental and economic services. Thus, it is essential to limit their degradation due to human exploitation and the risk of climate change. To effectively regulate the production and trade of timber and to ensure that ambitious sustainability and legality targets are met, enforcement agencies need to reliably monitor the flows of timber products circulating in the global supply chain. However, available reported data on the trade of timber-based products at the global level is typically subject to a range of irregularities, including misreported and inconsistent data. Current estimates of these irregularities, particularly illegal trade, analyze reported flows at the level of individual trade partners and do not account for the structure of the global trade network. In this thesis, we attempt to address these limitations by imposing nodal volume balance across the entire network and by developing a general framework to identify multiple link-level irregularities in trade data.&#13;
&#13;
Specifically, we present an optimization-based approach to model and identify data irregularities in the global timber supply chain. We evaluate the ability of this approach to recover timber product flow volumes from perturbations to reported data on multiple links (the reconstruction problem) and to identify the flows that are most likely to involve irregularities (the identification problem). Both tasks rely on network optimization techniques. In this context, we explore both classic optimization formulations and matrix scaling-based algorithms. We extend the well-known formulation of matrix scaling algorithms to include prior knowledge of the reliability of the data. We propose a link-specific weighted iterative scaling algorithm (WIS) and a node-specific weighted iterative scaling algorithm (NSWIS). In doing so, we extend the current literature on matrix scaling algorithms by expanding the scope of their practical application to supply chain data correction problems.&#13;
&#13;
For the type of perturbations studied in the evaluation procedure, the WIS algorithm shows a strong ability to correctly reconstruct data, even under limited prior information on data reliability. Moreover, combining WIS with threshold-based identification models yields satisfactory results in the identification phase (a True Positive Rate above 75% at a False Positive Rate below 30%), even under limited prior information on data reliability. We evaluate the performance of these reconstruction and identification models (RIMs) on both synthetic and real data. Our results support the relevance of a principled approach to network flow modeling and optimization for correcting and identifying irregularities in timber trade and timber production data.
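
As a sketch of the scaling core these algorithms build on, classical iterative proportional fitting rebalances a reported flow matrix to trusted nodal totals; the thesis's WIS and NSWIS variants additionally weight each correction by link- or node-specific reliability priors, which this minimal version (with made-up three-country numbers) omits.

    import numpy as np

    def ipf(F, row_targets, col_targets, iters=100):
        # Alternately rescale rows and columns to match nodal totals.
        F = F.astype(float).copy()
        for _ in range(iters):
            F *= (row_targets / F.sum(axis=1))[:, None]   # match total exports
            F *= (col_targets / F.sum(axis=0))[None, :]   # match total imports
        return F

    # Toy reported flows among 3 countries, with a suspect link at (0, 2).
    F = np.array([[0., 5., 1.],
                  [2., 0., 4.],
                  [3., 6., 0.]])
    r = np.array([9., 6., 9.])    # trusted total exports per country
    c = np.array([5., 11., 8.])   # trusted total imports per country
    print(ipf(F, r, c).round(2))

Links whose reconstructed values move far from their reported values are then natural candidates for the threshold-based identification step.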
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Supernumerary Robotic Limbs for Next Generation Space Suit Technology</title>
<link href="https://hdl.handle.net/1721.1/151948" rel="alternate"/>
<author>
<name>Ballesteros, Erik</name>
</author>
<id>https://hdl.handle.net/1721.1/151948</id>
<updated>2023-08-24T03:47:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Supernumerary Robotic Limbs for Next Generation Space Suit Technology
Ballesteros, Erik
Extra-Vehicular Activities (EVAs) are considered one of the most complex operations an astronaut can perform during a spaceflight mission. Coordinating and executing EVAs are complex and costly affairs that are a necessity for any space vehicle; this is especially true for expanding the longevity of a spacecraft, like that of the International Space Station (ISS). A key challenge in planning EVAs is the amount of time an astronaut has to complete a series of tasks, which is inversely related to their metabolic load. Prior studies have determined that the biomechanics of a space-suit-wearing astronaut play a significant role in their metabolic load. In addition to this concern, another key challenge for astronauts conducting EVAs is access to a rigid tether that frees both of their arms when performing a specific task. We propose the incorporation of a pair of wearable robots, called Supernumerary Robotic Limbs (SuperLimbs), which would be mounted on the xEMU’s Square Boss Interface (SBI), positioned such that each SuperLimb is on either side of the astronaut’s center of mass. The use of SuperLimbs allows the astronaut to safely and efficiently move across a spacecraft during an EVA. The SuperLimbs grab EVA handrails for securing the astronaut’s body, and guide the astronaut from one work location to another (thus reducing their overall work load). The incorporation of SuperLimbs onto the xEMU spacesuit forms a cooperative human-robotic system that can be modeled as a quadruped with two human arms and two SuperLimb grippers. Trajectory planning and control algorithms are developed as a quadrupedal locomotion problem, where the SuperLimbs act as followers while the astronaut operator is the leader. Furthermore, the quadruped human-robot system enables multiple points of contact at any point in the EVA, creating a secure bracing condition for the astronaut user that enhances both stability and controllability.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Barriers to the use of computational tools for embodied carbon reduction in structural engineering practice</title>
<link href="https://hdl.handle.net/1721.1/151946" rel="alternate"/>
<author>
<name>Smith, Margaret</name>
</author>
<id>https://hdl.handle.net/1721.1/151946</id>
<updated>2023-08-24T03:03:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Barriers to the use of computational tools for embodied carbon reduction in structural engineering practice
Smith, Margaret
There is an immediate need to decrease carbon emissions to minimize the impacts of climate change, and building materials, which in 2019 accounted for 10% of global carbon emissions, have an important role to play in this reduction. As key stakeholders in the building design process, structural engineers must implement strategies to reduce embodied carbon. One strategy is using less material, and in academia, many methods and tools have been proposed to reduce embodied carbon through material efficiency. These include parametric models that demonstrate how structural parameters impact embodied carbon, as well as shape- or topology-optimized components that save considerable amounts of material compared to conventional alternatives. However, these tools are not used often in industry. To better understand why, a survey was distributed to practicing structural engineers in the northeast US that probed their views on embodied carbon and on computational tools to reduce it. Case studies on parametric design, shape optimization, and topology optimization were presented, and participants were asked whether they would use each tool and why or why not.&#13;
&#13;
A total of 38 structural engineers, representing 26 different employers, responded to the survey. Most respondents could name a strategy to reduce embodied carbon; however, low-carbon materials were mentioned far more than using less material, indicating a need for increased education on the power of material efficiency to reduce embodied carbon. As expected, respondents were most willing to use parametric design, followed by shape optimization, then topology optimization. For all case studies, increases in time and/or cost were identified as the strongest barrier to use. For parametric design, lack of power during the design process was also a strong barrier, as structural engineers often do not have complete control over all structural parameters. For shape and topology optimization, constructability and the robustness of optimized designs were key concerns. By formalizing the barriers to their use, this work enables researchers to create computational tools that are more likely to be adopted in industry. These tools have great potential to decrease embodied carbon emissions, and for this to be realized, they must be put into practice.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility Study of a Linear Generator Wave Energy Converter With Adaptive Bistable Control</title>
<link href="https://hdl.handle.net/1721.1/151945" rel="alternate"/>
<author>
<name>Wunderlich, Alexander</name>
</author>
<id>https://hdl.handle.net/1721.1/151945</id>
<updated>2023-08-24T03:12:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Feasibility Study of a Linear Generator Wave Energy&#13;
Converter With Adaptive Bistable Control
Wunderlich, Alexander
Oceanic wave energy harvesting is a promising source of renewable energy that involves the conversion of oscillatory motion into electrical energy. However, the majority of traditional wave energy converters rely on intermediary mechanisms that increase complexity and can incur energy losses. Linear generators have emerged as a promising wave energy technology that bypass the limitations of intermediaries through direct mechanical to electrical energy conversion. The convention is to configure the generator so that incident waves excite the resonant frequency of the device. The irregular and broadband nature of ocean waves poses a challenge to this technique, as the device must be configured to respond to a wide range of incident frequencies. This thesis proposes a novel design for a linear permanent magnet generator that considers a tension leg platform oscillating in oceanic surge motion as its basis. The performance of the proposed device is analyzed using numerical simulations, and the potential for optimization techniques and the implementation of adaptive bistable control logic to improve broadband energy harvesting is investigated. The results demonstrate that these proposed alterations can increase the harvesting potential and efficiency of a wave energy converter, with the potential to contribute to the growing demand for renewable energy sources.
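
To see why bistability helps with broadband excitation, a toy simulation of a damped bistable (Duffing-type) oscillator contrasts small intra-well motion with the large inter-well strokes that a harvester converts to energy; the parameters are arbitrary illustrative values, not the proposed device's.

    import numpy as np

    # Per unit mass: x'' = -c x' + k1 x - k3 x^3 + F cos(w t); wells at x = +/-1.
    k1, k3, c = 1.0, 1.0, 0.1

    def simulate(F, w, dt=1e-3, t_end=200.0):
        x, xd, t, xs = 1.0, 0.0, 0.0, []
        while t < t_end:
            xdd = -c * xd + k1 * x - k3 * x**3 + F * np.cos(w * t)
            xd += xdd * dt
            x += xd * dt
            xs.append(x)
            t += dt
        return np.array(xs)

    for F in (0.05, 0.3):
        x = simulate(F, w=0.8)[50_000:]   # discard the transient
        regime = "inter-well" if x.min() < 0 < x.max() else "intra-well"
        print(f"F={F}: {regime}, stroke={x.max() - x.min():.2f}")

The adaptive control idea is then to tune the potential's shape online so that whatever sea state arrives keeps the device in the large-stroke regime.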
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fast Adaptive Laws for Adaptive Control Under Stochastic Disturbances</title>
<link href="https://hdl.handle.net/1721.1/151944" rel="alternate"/>
<author>
<name>Fisher, Peter</name>
</author>
<id>https://hdl.handle.net/1721.1/151944</id>
<updated>2023-08-24T03:05:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Fast Adaptive Laws for Adaptive Control Under Stochastic Disturbances
Fisher, Peter
In this work, we consider two classes of adaptive laws for the adaptive control of a class of discrete-time nonlinear systems, with all states accessible, perturbed by a stochastic disturbance. First, we consider high-order tuner algorithms based on accelerated gradient methods for the optimization of convex loss functions, and derive a new adaptive law designed for stable adaptive control. Second, we review the state of the literature on recursive least-squares adaptive laws (especially those with a variable-direction forgetting factor) and derive an alternative to a recent method proposed in the literature.&#13;
&#13;
Recently, a high-order tuner algorithm was developed for the minimization of convex loss functions with time-varying regressors in the context of an identification problem. Based on Nesterov's algorithm, the high-order tuner was shown to guarantee bounded parameter estimation when regressors vary with time, and to lead to accelerated convergence of the tracking error when regressors are constant. In this work, we derive a new high-order tuner algorithm that preserves the accelerated convergence of the original under constant regressors, but that is also provably stable with the addition of projection to a compact set. This latter property allows us to apply the new high-order tuner to the adaptive control of a particular class of discrete-time nonlinear dynamical systems under stochastic disturbances.&#13;
&#13;
There is a substantial body of literature on variable-direction forgetting methods for recursive least-squares-type adaptive laws. Recently, a new method was developed that uses the SVD of the covariance matrix to apply directional forgetting. In this work, we place this method in the context of the broader RLS literature as well as other literature on variable-direction forgetting. We then use this context to argue that if the computational power is available for an SVD at every time step, it is better to use it to directly invert the covariance matrix at each time step rather than implementing variable-direction forgetting. We call this new adaptive law "Explicit Least-Squares" (ELS) and show that it leads to provably stable adaptive control.
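
For reference, the classical recursive least-squares recursion with a scalar forgetting factor, which both the variable-direction methods and ELS modify, looks like the following textbook sketch (not the thesis's ELS law itself).

    import numpy as np

    def rls_step(theta, P, phi, y, lam=0.99):
        # One RLS update: gain, prediction-error correction, covariance update.
        Pphi = P @ phi
        k = Pphi / (lam + phi @ Pphi)
        theta = theta + k * (y - phi @ theta)
        P = (P - np.outer(k, Pphi)) / lam
        return theta, P

    rng = np.random.default_rng(1)
    true_theta = np.array([2.0, -1.0])
    theta, P = np.zeros(2), 1000.0 * np.eye(2)
    for _ in range(200):
        phi = rng.standard_normal(2)
        y = phi @ true_theta + 0.01 * rng.standard_normal()
        theta, P = rls_step(theta, P, phi, y)
    print(theta)   # converges near [2, -1]

The scalar lam discounts all directions of the covariance equally, which is exactly the behavior that variable-direction forgetting, and the direct per-step inversion in ELS, are designed to improve on.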
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enhancing Gas Absorption with Nanoengineered Surfaces for Bubble Manipulation</title>
<link href="https://hdl.handle.net/1721.1/151942" rel="alternate"/>
<author>
<name>Joseph, Tal</name>
</author>
<id>https://hdl.handle.net/1721.1/151942</id>
<updated>2023-08-24T03:14:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enhancing Gas Absorption with Nanoengineered Surfaces for Bubble Manipulation
Joseph, Tal
Efficiently reacting gases with liquid absorbents is a crucial aspect of numerous industrial processes on a large scale. When the gas phase is in the form of discrete bubbles within an absorber unit, such as in bubble column absorbers or gas sparging systems, the effectiveness of these bubbles' reaction depends on carefully controlling their properties and flow. This study demonstrates the efficacy of a novel method for gas absorption into a liquid absorbent, which involves using nanoengineered surfaces to spread bubbles into their texture and enhance mass transport between the gas and liquid phases. This surface-enhanced direct injection approach for gas absorption yields more than a two-order-of-magnitude improvement in reaction rate compared to captive bubbles when using a moderately alkaline potassium hydroxide as an absorbent solution for carbon dioxide gas. While the average reaction rates of non-spreading bubbles typically decrease with bubble size, the surface-enhanced absorption of spreading bubbles reverses this trend, enabling the most rapid absorption for the smallest bubbles. Moreover, non-spreading carbon dioxide bubbles cannot be fully absorbed due to product aggregation at their interface, whereas spreading bubbles can avoid this regime by reacting more quickly than the aggregation process. Finally, we propose this surface-enhanced direct injection method as an absorption technique that scales advantageously for small-scale or distributed modular absorber designs compared to the traditional large-scale absorber units currently used in industry.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Path Planning for Trajectory Guided Freehand Ultrasound Scan</title>
<link href="https://hdl.handle.net/1721.1/151941" rel="alternate"/>
<author>
<name>Lin, Qian</name>
</author>
<id>https://hdl.handle.net/1721.1/151941</id>
<updated>2023-08-24T03:33:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Path Planning for Trajectory Guided Freehand Ultrasound Scan
Lin, Qian
Medical imaging plays a crucial role in medical diagnosis and analysis. 3D medical imaging provides more comprehensive and greater anatomical detail of internal body structures when compared to traditional 2D images, allowing for more accurate measurement of organ and tumor volume, and prediction and monitoring of some disease progression. 3D medical images can be obtained through various imaging modalities, including magnetic resonance (MR), computed tomography (CT), and ultrasound (US). Among those modalities, freehand ultrasound is preferred for its cost-effectiveness, non-invasiveness, portability, safety, versatility, and real-time information.&#13;
&#13;
However, the lack of information on the position and orientation of the ultrasound probe makes it challenging to obtain 3D images from 2D ultrasound slices. Without expert knowledge, the user may not acquire precise images of the region of interest (RoI). To address this issue, we propose a novel path planning framework that provides real-time guidance for freehand ultrasound and reconstructs 3D images in real time. A low-cost RGB-D camera with an IMU module is mounted on a regular ultrasound probe to estimate the spatial placement of the probe with respect to the RoI, and the acquired ultrasound images are analyzed and registered into a 3D voxel grid. After the user performs an initial scan, the system guides the user to find missing areas shadowed by obstacles such as bones, resulting in more accurate, detailed, and efficient 3D ultrasound imaging. We validated our system on an ultrasound phantom and demonstrated its ability to investigate the area beneath an obstacle. Additionally, we developed a visualization system for real-time probe movement guidance and image display.&#13;
&#13;
This study demonstrates the feasibility of implementing an online path planning approach with real-time guidance and high-attenuation area avoidance for freehand ultrasound scanning, even in scenarios where prior knowledge of the scanning area is not available. The proposed path planning system not only enhances the efficiency and precision of ultrasound imaging in clinical settings, but also facilitates the acquisition of high-quality 3D ultrasound images by non-expert users in a more convenient manner, potentially allowing for long-term health monitoring.
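&#13;
As an illustrative sketch (not the thesis’s implementation), the slice-to-volume registration step described above might look like the following Python, where the probe pose matrix, pixel spacing, and voxel size are assumed placeholders:&#13;
&#13;
import numpy as np&#13;
&#13;
def register_slice(volume, image, pose, pix=0.0005, vox=0.001):&#13;
    # volume: 3D voxel grid; image: one 2D ultrasound slice; pose: 4x4&#13;
    # homogeneous transform of the image plane in the world frame, assumed&#13;
    # to come from the RGB-D/IMU probe tracking described above.&#13;
    h, w = image.shape&#13;
    vs, us = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")&#13;
    pts = np.stack([us * pix, vs * pix, np.zeros(us.shape), np.ones(us.shape)], -1)&#13;
    world = pts.reshape(-1, 4) @ pose.T                # pixel coords to world coords&#13;
    idx = np.rint(world[:, :3] / vox).astype(int)&#13;
    idx = np.clip(idx, 0, np.array(volume.shape) - 1)  # crude bounds handling&#13;
    volume[idx[:, 0], idx[:, 1], idx[:, 2]] = image.reshape(-1)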
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Soft Robotic System for Mechanical Assistance to the Diaphragm</title>
<link href="https://hdl.handle.net/1721.1/151940" rel="alternate"/>
<author>
<name>Quevedo-Moreno, Diego A.</name>
</author>
<id>https://hdl.handle.net/1721.1/151940</id>
<updated>2023-08-24T03:47:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Soft Robotic System for Mechanical Assistance to the Diaphragm
Quevedo-Moreno, Diego A.
The diaphragm is a critical muscle for the respiratory system, responsible for up to 70% of the inspiration effort. Phrenic nerve trauma or neuromuscular disease can generate severe diaphragm dysfunction that ultimately leads to respiratory failure. The current treatment for patients with severe diaphragm dysfunction is permanent airway tethering to mechanical ventilation, which greatly impacts patients’ quality of life and autonomy by hindering activities like speech, swallowing, and mobility. Soft robots are well suited to assisting complex biological functions like the contraction of the diaphragm, and diaphragmatic mechanical assistance using implantable soft robots has shown promising results in restoring respiratory function. However, the soft robotic system can be further optimized to assist the diaphragm effectively. In this work, the design and control of a fabric-shelled soft robotic pneumatic actuator are developed to efficiently assist diaphragm motion and the inspiratory effort. The soft robotic system developed in this work significantly restores physiological thoracic and abdominal pressurization levels in a respiratory simulator, demonstrating its potential as an alternative treatment for patients with severe diaphragm dysfunction.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Single Degree of Freedom Solid Rotor Velocity Control Induction Drive</title>
<link href="https://hdl.handle.net/1721.1/151939" rel="alternate"/>
<author>
<name>Roman, Jean C.</name>
</author>
<id>https://hdl.handle.net/1721.1/151939</id>
<updated>2023-08-24T03:07:59Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Single Degree of Freedom Solid Rotor Velocity Control Induction Drive
Roman, Jean C.
This thesis studies a single degree of freedom (DOF), two-pole, three-phase, solid rotor induction motor operating in closed-loop angular velocity control via a proportional-integral (PI) controller applying a constant-amplitude, variable-frequency drive.&#13;
&#13;
The stator consists of six iron teeth, evenly spaced and pointing radially inward, each wound with 160 turns of copper wire in a three-phase, two-pole configuration. A steel enclosure houses the stator and is supported by a 3D-printed polylactic acid (PLA) enclosure. The wiring is initially connected in a wye configuration without a neutral wire but later converted to three independent phases, each with its own input and output wire. The teeth have a nominal air gap of 0.5 mm with the rotor.&#13;
&#13;
The rotor consists of a solid iron cylindrical core with a 1 mm aluminum sleeve press-fitted on the outside. Two mechanical bearings center the rotor inside the stator. A single-input single-output (SISO) PI controller commands three variable-frequency currents of 750 mA amplitude, offset by 120 degrees, to provide a three-phase drive resulting in a rotating magnetic field. Each coil is powered by a custom linear transconductance amplifier with 5 kHz bandwidth and 0.3 A/V DC gain.&#13;
&#13;
The controller receives feedback through a contact-less magnetic encoder providing a linear voltage measurement of the rotor’s angle. We differentiate the position measurement to estimate the angular velocity of the shaft. A small diametrically magnetized cylindrical permanent magnet (PM) is attached to the end of the shaft and constrained by a 3D-printed PLA fixture. During operation, we produced up to 1.6 mNm of torque and velocities of up to 8,000 RPM.
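&#13;
As an illustrative sketch (not the thesis’s controller code), one step of the velocity loop described above might look like the following Python, with hypothetical gains:&#13;
&#13;
import numpy as np&#13;
&#13;
KP, KI = 0.01, 0.1   # hypothetical PI gains; the thesis’s tuned values are not given here&#13;
AMP = 0.75           # 750 mA commanded current amplitude, per the abstract&#13;
&#13;
def control_step(omega_ref, theta, theta_prev, state, dt):&#13;
    omega = (theta - theta_prev) / dt          # differentiate the encoder angle&#13;
    err = omega_ref - omega&#13;
    state["integ"] += err * dt&#13;
    omega_e = KP * err + KI * state["integ"]   # commanded electrical frequency (rad/s)&#13;
    state["phi"] += omega_e * dt               # electrical angle of the rotating field&#13;
    # Three constant-amplitude currents offset by 120 degrees (2*pi/3 rad).&#13;
    return [AMP * np.cos(state["phi"] + k * 2.0 * np.pi / 3.0) for k in range(3)]&#13;
&#13;
Here state = {"integ": 0.0, "phi": 0.0} carries the integrator and the electrical angle between steps.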
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Testing of a Respiratory Simulator for the Optimization of Soft Robotic Assistive Breathing Devices</title>
<link href="https://hdl.handle.net/1721.1/151938" rel="alternate"/>
<author>
<name>Tagoe, Jonathan</name>
</author>
<id>https://hdl.handle.net/1721.1/151938</id>
<updated>2023-08-24T03:03:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Testing of a Respiratory Simulator for the Optimization of Soft Robotic Assistive Breathing Devices
Tagoe, Jonathan
Diaphragm dysfunction can lead to respiratory difficulties and failure, requiring interventions like positive pressure ventilation that forces air into the lungs. Such interventions can interfere with a patient’s quality of life, making activities like speech and swallowing extremely difficult. Surgically implanted soft robotic actuators have been explored to mechanically support diaphragmatic motion in patients who have lost function of this important muscle. Optimizing these actuators before surgery is paramount and requires “in situ” testing that may take months in between porcine terminal studies, let alone human testing. This thesis works towards developing a benchtop model that can recreate the physiological biomechanics of the respiratory system to effectively test and optimize the design of diaphragmatic assist devices before implantation in a specimen. &#13;
&#13;
Through the product development cycle undertaken in this thesis, a respiratory simulator was fabricated, assembled, and tested in order to facilitate the optimization of soft robotic pneumatic actuators. We find that the simulator is capable of recreating and maintaining physiological pressures in the major cavities of the body, with active diaphragmatic motion. We demonstrate the effectiveness of the modular design, allowing for rapid testing of different types of diaphragmatic assist actuators, patient conditions and breathing patterns. Through testing of the assist devices, we demonstrate their ability to recreate physiologically relevant pressure drops. &#13;
&#13;
This respiratory simulator lays the groundwork for the rapid development of implantable assistive breathing devices that serve as a new ventilation option that will liberate the airways and not sacrifice quality of life.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Local Shape Estimation Using Mechanochromic Structurally-Colored Tactile Sensors</title>
<link href="https://hdl.handle.net/1721.1/151937" rel="alternate"/>
<author>
<name>Thomsen, Max T.</name>
</author>
<id>https://hdl.handle.net/1721.1/151937</id>
<updated>2023-08-24T03:27:41Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Local Shape Estimation Using Mechanochromic Structurally-Colored Tactile Sensors
Thomsen, Max T.
Tactile perception is increasingly used in robotics to augment the robot’s sense of its environment and the objects that it manipulates, particularly in cases where visual systems alone prove inadequate. Existing tactile sensors feel their surroundings by sensing many different types of tactile signals such as pressure, contact position, or contact shape. This thesis introduces a novel method for measuring and reconstructing the shape of an object based on a tactile imprint, and describes the framework for this method, the fabrication of the necessary materials, and the subsequent testing and validation of the process. The procedure outlined in this work involves the use of a custom tiled mechanochromic structurally-colored film in conjunction with a digital camera, and calculates shape and strain information of the film based on how the observed colors shift when undergoing deformation. When combined with a transparent elastomeric pad, this arrangement can be used to deduce information about objects that are pressed into the pad by observing the deformation in the surface. This ability to measure the shape and strain state of a surface by leveraging the high resolution of modern image sensors together with color-dynamic films tiled in a checkered pattern may allow for more effective tactile sensors, and more broadly can provide a useful tool for research and industrial applications. While this work focuses specifically on tactile shape reconstruction, the methodology presented can similarly be applied to more general cases where shape or strain information of a surface is desired.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Influence of Interannual Precipitation Variability on Terrestrial Ecosystem Productivity</title>
<link href="https://hdl.handle.net/1721.1/151932" rel="alternate"/>
<author>
<name>Chen, Minghao</name>
</author>
<id>https://hdl.handle.net/1721.1/151932</id>
<updated>2023-08-24T03:45:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Investigating the Influence of Interannual Precipitation Variability on Terrestrial Ecosystem Productivity
Chen, Minghao
This study investigated the impact of interannual precipitation variability on above-ground terrestrial ecosystem productivity in the Hulunbuir ecosystem, using time series analysis, regression analysis, and machine learning models. The study’s primary goal was to enhance our understanding of the effects of precipitation variability on ecosystems and develop practical solutions for promoting ecosystem sustainability and adaptability under changing climate conditions. The study analyzed trends and patterns of interannual precipitation variability within the study area, investigated the historic relationship between precipitation and ecosystem productivity using regression analysis, developed and compared machine learning models to predict the impact of interannual precipitation variability on ecosystem productivity, evaluated model performance, and provided insights into the mechanisms underlying the impacts of interannual precipitation variability on ecosystem productivity. The findings of this study suggested that precipitation is an important driver of vegetation productivity in the Hulunbuir ecosystem, and the machine learning models, particularly the LSTM and CNN models, were found to be effective in predicting net primary productivity (NPP) in different ecosystems. The study’s findings can inform ecosystem-specific management strategies to optimize productivity and resilience to environmental change, as well as policy decisions regarding the sustainable use of natural resources and the mitigation of climate change impacts.&#13;
&#13;
Keywords: interannual precipitation variability, terrestrial ecosystem productivity, time series analysis, machine learning models, climate change impacts.
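&#13;
As a minimal sketch (not the study’s actual architecture, features, or data), an LSTM regression of the kind described might be set up as follows, with placeholder shapes and synthetic values:&#13;
&#13;
import numpy as np&#13;
from tensorflow import keras&#13;
&#13;
# Placeholder shapes: 12 time steps of 3 driver variables per sample,&#13;
# predicting one productivity value.&#13;
model = keras.Sequential([&#13;
    keras.layers.Input(shape=(12, 3)),&#13;
    keras.layers.LSTM(32),&#13;
    keras.layers.Dense(1),&#13;
])&#13;
model.compile(optimizer="adam", loss="mse")&#13;
x = np.random.rand(100, 12, 3).astype("float32")   # synthetic drivers&#13;
y = np.random.rand(100, 1).astype("float32")       # synthetic NPP targets&#13;
model.fit(x, y, epochs=2, verbose=0)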
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Comparative Analysis of Domestic Municipal Data Governance Systems</title>
<link href="https://hdl.handle.net/1721.1/151927" rel="alternate"/>
<author>
<name>Jiminez Jamarillo, Aleja</name>
</author>
<id>https://hdl.handle.net/1721.1/151927</id>
<updated>2024-03-22T17:53:54Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Comparative Analysis of Domestic Municipal Data Governance Systems
Jiminez Jamarillo, Aleja
Prompted in part by the COVID-19 pandemic, cities in the United States are becoming increasingly aware of the need to improve how they govern their data. Data governance is generally understood to involve practices pertaining to the production, storage, analysis, and sharing of data either within or across organizations (Abraham et al. 2019, Beaulieu and Leonelli 2021). This thesis sought to investigate how cities in the United States are using policy tools to institutionalize data governance practices and what these policies reveal about how city governments are conceptualizing the nature of municipal data. Using a case study approach, an extensive literature review paired with staff interviews was conducted for four cities: Baltimore, Maryland; Denver, Colorado; Portland, Oregon; and San Francisco, California. A comparative analysis of these cities’ data governance systems reveals four primary findings. First, municipal governments are seeking to balance the established use of data for public transparency with stronger practices to protect privacy. Second, data governance can be deployed for various political purposes by city governments, and that deployment can frame how its purpose is defined and pursued. Third, municipal data governance systems implicitly extend beyond governing data to managing the technologies and employees who generate and handle data. Finally, the staffing plan for data governance shapes the expertise brought to bear on normative questions surrounding data generation and management as well as the capacity for data governance teams to establish legitimacy within city government. These findings point towards four recommendations for municipal policy makers: embed data governance leaders in departments whose skillsets and approaches are aligned with the intended outcomes of data governance; integrate data governance efforts with technology acquisition practices; establish and resource department-level data fiduciaries; and explicitly treat all city employees as data workers to foster a comprehensive and sustainable data governance system.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Role for Electricity Transmission in Net-Zero Energy Systems: A Spatially Resolved Analysis of the Continental US</title>
<link href="https://hdl.handle.net/1721.1/151926" rel="alternate"/>
<author>
<name>Shi, Nicole Xiaoyang</name>
</author>
<id>https://hdl.handle.net/1721.1/151926</id>
<updated>2023-08-24T03:01:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Role for Electricity Transmission in Net-Zero Energy Systems: A Spatially Resolved Analysis of the Continental US
Shi, Nicole Xiaoyang
Due to climate change and the need for rapid emission reduction, new technologies, including hydrogen and negative emission technologies (NETs) such as direct air capture and bioenergy with carbon capture and sequestration (BECCS), are being developed for integration into energy systems. Additionally, variable renewable energy (VRE) resources, which are expected to play a major role in decarbonization pathways, exhibit significant spatial variability and reliance on transmission infrastructure compared to existing fossil-fuel based energy systems, placing greater emphasis on transport and storage of material and energy. This case study evaluates pathways to a net-zero energy system in the continental US. To inform spatial infrastructure outcomes, we use an open-source energy system model that explores decarbonization pathways for the broader energy system under various technology availability and transmission network expansion assumptions. To attain a deeper understanding of technology interactions in a net-zero energy system, we use the Modeling to Generate Alternatives formulation, which generates near-optimal solutions within a pre-defined threshold of the cost-optimal solution. We find that transmission network expansion enables the increased usage of high-quality wind resources. When the power sector is coupled with the hydrogen supply chain, the use of electrolyzers increases demand for electricity from VRE resources further. NETs, specifically BECCS, allow for the inclusion of natural gas in the generation mix while adhering to net-zero emissions targets. This approach helps mitigate the need for extensive transmission network expansion and VRE resources. We identify several transmission paths that policymakers should prioritize for expansion. Our analysis of near cost-optimal solutions provides confidence in the cost-optimal technology dependencies we identified.
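&#13;
For reference, the Modeling to Generate Alternatives step mentioned above is commonly formulated as optimizing an alternative objective subject to a cost budget within a slack of the least-cost solution (a standard form, assumed here rather than quoted from the thesis):&#13;
&#13;
\min_{x \in \mathcal{F}} \; g(x) \quad \text{s.t.} \quad c^{\top} x \le (1 + \delta)\, c^{\top} x^{\ast}&#13;
&#13;
where g is the alternative objective, c the cost vector, x* the cost-optimal solution, and \delta the pre-defined cost threshold.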
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Static Stability and Seismic Safety of Brunelleschi’s Dome of Santa Maria del Fiore</title>
<link href="https://hdl.handle.net/1721.1/151923" rel="alternate"/>
<author>
<name>Patel, Shailey</name>
</author>
<id>https://hdl.handle.net/1721.1/151923</id>
<updated>2023-08-24T03:09:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Static Stability and Seismic Safety of Brunelleschi’s Dome of Santa Maria del Fiore
Patel, Shailey
The dome of Santa Maria del Fiore is a long-standing pinnacle of engineering design creativity and fifteenth-century architectural grandeur. Its construction process and stability have been surveyed and researched for centuries. This thesis studies the dome of Santa Maria del Fiore in Florence, Italy in a two-fold exploration of its stability limits due to self-weight and horizontal ground acceleration. A parametric model using 2D equilibrium analysis is generated to quantify the minimum horizontal thrusts of the dome in the major and minor directions. The obtained minimum horizontal thrust values from the model in the major axis (5000 kN) and in the minor axis (3900 kN) are compared to existing values in the literature. A simplified 2D analytical model predicts the collapse mechanism due to ground acceleration (0.15g) by adjusting the equilibrium analysis used to find the thrust values. This value is compared with experimental values obtained from a static tilt test, in which the 3D-printed geometry of the dome and drum is slowly tilted until the point of collapse. The collapse mechanism forms at an angle of tilt of 17.6˚ in the weak direction (0.32g). A 3D analytical prediction is made by analyzing the observed experimental failure plane, which yields a collapse angle of 19.5˚ (0.35g), validating the experimental results. The difference between the 2D and 3D critical values can be attributed to the various assumptions made in the conservative analytical model, including the neglect of hoop forces and friction. The analysis within this thesis demonstrates the safety of the dome of Santa Maria del Fiore under expected seismic activity.
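&#13;
For reference, the quoted tilt angles and equivalent accelerations are related by the standard static equivalence between tilting and horizontal pseudo-acceleration:&#13;
&#13;
a_h = g \tan\theta, \qquad g \tan 17.6^{\circ} \approx 0.32g, \qquad g \tan 19.5^{\circ} \approx 0.35g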
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Continuous Improvement Framework for a Multi-model Production Line</title>
<link href="https://hdl.handle.net/1721.1/151922" rel="alternate"/>
<author>
<name>Sandifer, Darron</name>
</author>
<id>https://hdl.handle.net/1721.1/151922</id>
<updated>2023-08-24T03:16:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Continuous Improvement Framework for a Multi-model Production Line
Sandifer, Darron
In manufacturing, it is vital to identify current or potential bottlenecks and create plans to either eliminate, mitigate, or prevent them. There are many ways to assess a given system and identify the bottleneck: visual inspection, production data, anecdotal evidence, and experience, to name a few. Most strategies are a blend of methods, with experience and anecdotal evidence comprising the majority of the approach, which leads to large discrepancies between assessors.&#13;
&#13;
This thesis details a method to standardize the assessment of underperforming portions of a manufacturing line while still giving the assessor the ability to leverage their experience, expertise, and creativity to solve the problem. This framework is applied in a case study conducted at Nissan North America’s Canton, Mississippi Assembly Facility, resulting in the reclamation of approximately 50 minutes of production time and eliminating the overtime requirement for a pair of manufacturing cells.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Modeling of Shipwide Navy Integrated Power and Energy Corridor Cooling System</title>
<link href="https://hdl.handle.net/1721.1/151921" rel="alternate"/>
<author>
<name>Chatterjee, Avi</name>
</author>
<id>https://hdl.handle.net/1721.1/151921</id>
<updated>2023-08-24T03:02:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Modeling of Shipwide Navy Integrated Power and Energy Corridor Cooling System
Chatterjee, Avi
Naval ship systems increasingly require more electricity to power the myriad advanced offensive and defensive electrically-powered systems. The Zumwalt class destroyer was the Navy’s first fully electric ship, and the next generation destroyer, DDG(X), is also planned to be an electric ship. The ships of the future can thus be anticipated to employ 100 megawatts or more of electric power. This rise in electrical demand begets the need to transfer that power more efficiently through compact and robust power distribution systems. &#13;
&#13;
As part of an ongoing U.S. Navy research consortium on next-generation all-electric warships, the Design Laboratory of the Massachusetts Institute of Technology (MIT) Sea Grant Program is developing the Navy integrated Power and Energy Corridor (NiPEC) to serve as the vessel’s power distribution system. The corridor comprises several modular compartments capable of operating independently or as part of a network to execute energy storage, conversion, protection, control, isolation, and transfer functions [18]. The power conversion process is carried out by the corridor’s integrated Power Electronics Building Block (iPEBB). The iPEBB is a comprehensive and self-contained converter configured to provide power-dense solutions to the ship’s stochastic and dynamic loads [45]. The thermal management of the iPEBB is a central challenge in fully realizing its advanced semiconductor technology, constrained by the provision of indirect liquid cooling methods and sailor-friendly accommodations vis-à-vis handling, user interface, and operation.&#13;
&#13;
Padilla et al. [36] conducted a preliminary analysis of Power Electronics Building Block (PEBB) heat dissipation strategies utilizing liquid-cooled cold plates across the dry interface of the PEBB’s external surface. Reyes [39] extended this analysis by proposing a first-pass design of a NiPEC liquid cooling system capable of servicing a single nominal compartment within the larger corridor architecture. However, this most recent design presents infeasible operational and maintenance burdens given the number of cooling components required to adequately cool all envisioned NiPEC corridors, compartments, and PEBB stacks.&#13;
&#13;
This thesis used a combination of first-principles thermodynamic analysis and multi-physics-based modeling to design a NiPEC liquid cooling system and architecture suitable for shipwide deployment. Using Reyes’ first-pass cooling system design as a starting point, additional design iterations of the computer-modeled system were conducted and analyzed for thermal management robustness, success against key performance benchmarks, and adherence to relevant military standards. Additional modeling and analysis were conducted to determine how the cooling system could be scaled to accommodate an entire future all-electric Navy destroyer warship. This analysis examined key architectural system design considerations such as the level of component redundancy, utilization of different loop and zonal cooling schemes, and system survivability and control.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Opportunity for Long Duration Storage Technologies: Thermal and Compressed Air Energy Storage</title>
<link href="https://hdl.handle.net/1721.1/151920" rel="alternate"/>
<author>
<name>Engelkemier, Seiji H.</name>
</author>
<id>https://hdl.handle.net/1721.1/151920</id>
<updated>2023-08-24T03:07:23Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Opportunity for Long Duration Storage Technologies: Thermal and Compressed Air Energy Storage
Engelkemier, Seiji H.
To mitigate the more severe consequences of climate change, rapid decarbonization is necessary. The electric power sector contributes about 25% of US and global emissions, and its decarbonization is critical as other sectors become increasingly electrified. Intermittent renewable energy sources, namely solar photovoltaics and wind turbines, have reduced emissions in the power sector. A key part of achieving higher rates of renewables adoption is energy storage. In particular, long duration energy storage (LDES) is needed, for which the key variables are capital cost of energy capacity and discharge efficiency. There are few economical options available today for LDES aside from pumped hydropower storage, which is limited by geography. Fortunately, new technologies are under development. &#13;
&#13;
Thermal energy storage (TES) is a promising class of technologies because energy can be stored cheaply as heat. A TES system converts electricity to heat and converts it back to electricity when needed. TES systems can utilize cheap storage material, but they must address the challenges of low discharge efficiency and, to a lesser extent, high capital cost of discharge power capacity. Existing studies have mostly focused on a specific subsystem, such as the power block or storage material, or on a single TES system. Few studies have reported on how the needs of future power systems and TES technology options guide the design choices for a TES system. This thesis addresses that topic and presents the opportunity space for TES systems. Three common strategies for system design are identified that balance the coupled tradeoffs of cost, performance, and technical risk. The first strategy is retrofitting thermal power plants with TES to replace combustion processes and operate the plants as storage assets. The second is the development of higher efficiency power cycles, primarily closed Brayton cycles, for new storage plants operating with maximum temperatures generally under 1000°C. The third strategy utilizes storage materials and power cycles at temperatures significantly above 1000°C, which requires considerable research and development prior to commercialization efforts.&#13;
&#13;
Compressed air energy storage (CAES) is another type of storage technology that is cited as a candidate for LDES. Geologic and economic considerations, rather than technology development, are found to be the limiting factors in large-scale deployment of CAES systems. However, in certain situations, CAES may be a valuable storage option. Therefore, compared to the optimism found in the literature, a more pragmatic outlook on CAES is recommended to focus efforts on critical questions and avoid wasted resources.&#13;
&#13;
The levelized cost of storage (LCOS) is used to assess future, representative TES and CAES systems in LDES applications. A sensitivity analysis is performed on LCOS parameters to show the effect of design choices on system cost. From the technology and cost assessments, recommendations are made to guide TES and CAES development as options for LDES.
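&#13;
For reference, one common definition of the levelized cost of storage used in such assessments (a standard form, not necessarily the exact formulation in this thesis) is&#13;
&#13;
\mathrm{LCOS} = \frac{\mathrm{CAPEX} + \sum_{t=1}^{N} \frac{\mathrm{OPEX}_t + C_t^{\mathrm{charge}}}{(1+r)^{t}}}{\sum_{t=1}^{N} \frac{E_t^{\mathrm{discharged}}}{(1+r)^{t}}}&#13;
&#13;
where r is the discount rate over an N-year lifetime; the capital cost of energy capacity enters through CAPEX and the discharge efficiency through the discharged energy.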
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring Maneuvering Strategies for Heterogeneous Cooperative Navigation in Underwater Environments</title>
<link href="https://hdl.handle.net/1721.1/151918" rel="alternate"/>
<author>
<name>Flynn, Megan C.</name>
</author>
<id>https://hdl.handle.net/1721.1/151918</id>
<updated>2023-08-24T03:38:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Exploring Maneuvering Strategies for Heterogeneous Cooperative Navigation in Underwater Environments
Flynn, Megan C.
Due to the challenges of the underwater environment and limited communication methods, undersea navigation is difficult. Autonomous underwater vehicles (AUVs) experience unbounded localization errors when operating below the surface. Range measurements between vehicles can be utilized to improve localization estimates. We define a two-agent team composed of a leader and a follower, in which the former has better navigational capabilities than the latter. The follower attempts to navigate to a destination while the leader aids in the follower’s localization by providing range measurements from varied locations. Planning the relative motion between agents is vital to ensuring that meaningful range measurements are provided to support an effective estimation of the follower’s pose.&#13;
&#13;
This work explores five different maneuvering strategies based on geometric and observability principles. After designing the strategies, we tested their impact on the localization quality of the team through extensive simulation. To investigate the resilience of the strategies to environmental conditions, we altered the simulated ocean currents. For additional study, we allowed the leader to operate at a higher speed to explore the relationship between energy use and estimation performance.&#13;
&#13;
Ultimately, the best maneuvering strategy was found to be the circling strategy due to its superior performance; however, the circling strategy used the most energy, especially with larger radii. Mission priorities may affect the selection of a maneuvering strategy; the zigzag and covariance squish strategies are still viable options as they do not suffer great performance loss when compared to the circling strategy.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measurement and Analysis of lubricant oil consumption in a single cylinder hydrogen IC engine</title>
<link href="https://hdl.handle.net/1721.1/151917" rel="alternate"/>
<author>
<name>Zakka, Ahmad</name>
</author>
<id>https://hdl.handle.net/1721.1/151917</id>
<updated>2023-08-24T03:42:56Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Measurement and Analysis of lubricant oil consumption in a single cylinder hydrogen IC engine
Zakka, Ahmad
Understanding, predicting, and reducing lubricant oil consumption (LOC) in IC engines has been the focus of this lab for decades. Lubricant oil consumed in internal combustion engines is a significant contributor to harmful gas and particulate emissions, directly threatening the environment and human health. This work focuses on the development and analysis of a direct LOC measurement method for a hydrogen combustion single cylinder test engine. This method utilizes an FTIR device to measure carbon dioxide in the exhaust gas. Since hydrogen is not a carbon-based fuel, its combustion reaction does not yield carbon dioxide; the only source of carbon in the system is the lubricating oil. Using this understanding, the carbon dioxide concentration in the exhaust is converted to oil consumption.&#13;
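&#13;
As an illustrative sketch (assumed values, not the thesis’s calibrated FTIR procedure), the carbon-balance conversion described above might look like:&#13;
&#13;
M_C = 12.011   # g/mol, molar mass of carbon&#13;
W_C = 0.85     # assumed carbon mass fraction of the lubricant oil&#13;
&#13;
def loc_g_per_h(x_co2_ppm, exhaust_mol_per_s):&#13;
    # With hydrogen fuel, every mole of exhaust CO2 carries one mole of&#13;
    # carbon that can only have come from the oil.&#13;
    mol_c_per_s = x_co2_ppm * 1e-6 * exhaust_mol_per_s&#13;
    return mol_c_per_s * M_C / W_C * 3600.0   # grams of oil consumed per hour&#13;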
&#13;
This measurement method was used to study the effect of liner surface roughness, oil control ring design, and piston clearance on oil consumption. The liner finish was found to have a large impact on LOC, particularly for the ring pack with a Three-Piece oil control ring (TPOCR). A very rough liner drastically increases LOC with a TPOCR. One implication is that the liner finish may need to be changed when adapting HD diesel engines to natural gas or hydrogen by using a TPOCR.&#13;
&#13;
For a Twin-Land Oil Control Ring based ring pack, slots/holes on the vertical wall of the ring were found to be effective in controlling LOC when the liner roughness is high.&#13;
&#13;
The main contribution of this work is developing a reliable and accurate method for measuring LOC in a hydrogen combustion engine. The data collected from this system will contribute to the development of a digital twin model with the capability of predicting LOC in any engine.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Authentic Learning with Portfolios: A Combination that K-12 Education Needs</title>
<link href="https://hdl.handle.net/1721.1/151915" rel="alternate"/>
<author>
<name>Vozza, Angelo</name>
</author>
<id>https://hdl.handle.net/1721.1/151915</id>
<updated>2023-08-24T03:57:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Authentic Learning with Portfolios: A Combination that K-12 Education Needs
Vozza, Angelo
Education systems play a critical role in sustaining a society by equipping citizens with the mindsets and skills necessary for professional and personal success. The American K-12 education system, though, has unfortunately not kept pace with the demands of the 21st century. Students need systemic changes that make learning more meaningful and better engage their existing skills and interests. Authentic learning practices, like Project-Based, Community-Based, and Work-Based Learning, make such changes by orienting instruction around topics relevant to students' experiences and allowing students to practice their knowledge in real-world settings. Schools can encourage the adoption of authentic learning by implementing a complementary practice like portfolios. Local successes in schools using authentic learning and portfolios separately demonstrate their joint viability, but a system that combines the practices and can scale nationally has yet to be discovered.&#13;
 &#13;
Using the local "existence proofs" as starting points, I developed a system architecture that addresses many known barriers to adoption, including the time and resource constraints of schools, colleges, and employers and the inequitable access some students have to engaging learning experiences. This initial proposal did not, however, address constraints imposed by schools' accountability obligations nor stakeholders' uncertainty over their peers' readiness to adopt the system. By investigating how federal and state policies have enacted similar transformations, I determined that authentic learning portfolios will likely require government mandates. These mandates could face pushback, however, from families concerned that the proposal would hurt their students' college options. I also interviewed colleges to establish what changes to the proposal were needed to ensure their support and thus satisfy parents' concerns. My findings helped refine the proposed system architecture as well as outline the next steps needed to successfully implement the proposal.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Flexure-Based Device Enables Precise Quantitative Monitoring of Muscle Performance</title>
<link href="https://hdl.handle.net/1721.1/151913" rel="alternate"/>
<author>
<name>Lynch, Naomi L.</name>
</author>
<id>https://hdl.handle.net/1721.1/151913</id>
<updated>2023-08-24T03:43:42Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Flexure-Based Device Enables Precise Quantitative Monitoring of Muscle Performance
Lynch, Naomi L.
Tissue engineering provides an avenue for improving our understanding of the contractile mechanisms of muscle. 3D engineered muscle models have been developed that mimic the structure and functionality of native muscle. These models have the potential to be used in a wide variety of clinical applications such as neuromuscular disease modeling and drug therapy testing. The contractile mechanisms of engineered muscle are often quantified by constraining the muscle on an elastomeric scaffold and measuring the scaffold’s deformation; however, structural imperfections in the scaffold can negatively impact the accuracy of the recorded contractile data. This paper proposes using a flexure-based device that enables decoding of muscle physiological signals – such as contraction force, contraction time, and relaxation time – in a more precise, reproducible, and automated manner.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The influence of current and ripple development on seagrass transplant survival</title>
<link href="https://hdl.handle.net/1721.1/151912" rel="alternate"/>
<author>
<name>Ishii, Jade</name>
</author>
<id>https://hdl.handle.net/1721.1/151912</id>
<updated>2023-08-24T03:43:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The influence of current and ripple development on seagrass transplant survival
Ishii, Jade
Seagrass restorations have been conducted globally but with low overall survival rates. To investigate the role of hydrodynamic energy from tidal currents in the survival of newly transplanted seagrass, Zostera marina rhizome fragments with living shoots were transplanted into a sediment bed and exposed to unidirectional flow in a flume. In accordance with planting techniques reported to improve restoration performance, garden staples were utilized to anchor transplants to the bed. Three flow conditions of increasing velocity were applied for a duration of six hours each, and current ripples developed and persisted in all cases. The ripples were characterized and related to the dislodgement of transplants from the sediment. The use of staples decreased the number of transplants that were dislodged. At lower velocities, transplant survival was further improved when the anchoring staple was oriented parallel to the direction of flow. Most of the transplants that were secured with a staple survived all velocity cases, even with average ripple amplitudes reaching the range of depth at which the roots and rhizomes were planted. These results can inform effective site selection and transplanting techniques for more successful seagrass restorations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Manufacturing of Educational Fiber Extrusion Device and Smart Factory</title>
<link href="https://hdl.handle.net/1721.1/151910" rel="alternate"/>
<author>
<name>Bradley, Russel</name>
</author>
<id>https://hdl.handle.net/1721.1/151910</id>
<updated>2023-08-24T03:07:12Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Manufacturing of Educational Fiber Extrusion Device and Smart Factory
Bradley, Russel
The Fiber Extrusion Device (FrED) is a desktop fiber extrusion system that mimics the continuous fiber draw process for hands-on learning and laboratory experience in data acquisition, control systems, and smart manufacturing. It allows learners to perform experiments, vary manufacturing parameters and the control system, collect data, and perform analysis. Successful classroom activities have been conducted with FrED; however, the prior model is too costly to distribute to individual learners, given the rise of distance learning and MOOCs. A partnership with a university in Mexico, Tec de Monterrey, was formed to develop a low-cost FrED. This thesis covers the design, development, and production of the low-cost variant in detail, specifically discussing in depth the electronics system of FrED and the design for manufacturing and assembly (DfMA) process. An on-campus production and assembly facility, the FrED Factory, was created to mass produce FrEDs. The facility doubles as a space for MIT students to learn about design and manufacturing. The FrED Factory is undergoing a digital transformation aimed at streamlining operations and teaching Industry 4.0 concepts. Three use cases are being developed: Machine Monitoring &amp; Analytics, Smart Assembly Station, and Digital Inventory Management. This thesis also covers the educational initiatives that have formed around the FrED ecosystem during the past academic year, both on campus and with our partner university, Tecnologico de Monterrey.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Remote Sensing, Inference, and Intelligence in the Information Environment</title>
<link href="https://hdl.handle.net/1721.1/151905" rel="alternate"/>
<author>
<name>Galligani, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/151905</id>
<updated>2023-08-24T03:03:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Remote Sensing, Inference, and Intelligence in the Information Environment
Galligani, Thomas
This thesis considers the ways researchers and decision-makers deal with malicious actions in the information environment (IE). We are motivated by the profusion of research since 2016 aiming to understand, predict, and respond to various phenomena like mis- and disinformation within online social interactions. We begin by outlining three layers of complexity within this IE that make it exceedingly difficult to understand (strategic interaction, technological mediation, and cognitive obfuscation) and describe the framework of logical inference (induction, deduction, and abduction) that we use to assess research methodologies. We find that post-2016 literature focused on malicious actions in the IE has underappreciated insights from post-World War II propaganda analysis literature. We argue that researchers must separate modes of inference in their research, distinguishing between inductively testing a tool and abductively analyzing particular environmental conditions in order to provide results which are reusable and valuable to a decision-maker. This motivates our proposed methodological framework. Intelligence, surveillance, and reconnaissance (ISR) -- a systematic way that the US military leverages research in remote sensing to understand complex physical environments -- provides a logical framework to ground this inferential distinction in research in the IE. Finally, we apply this methodology, developing a sensor which captures the influence operation tactic of reputation laundering, testing the sensor on a novel dataset of assassination-related Tweets, and finding significant evidence (p&lt;0.0001) that our sensor's observations can capture this reputation laundering and integrate it into an analyst's workflow.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Role of Race and Place in Residential Solar Photovoltaic (PV) Adoption</title>
<link href="https://hdl.handle.net/1721.1/151904" rel="alternate"/>
<author>
<name>Jackson, Joy Kelly</name>
</author>
<id>https://hdl.handle.net/1721.1/151904</id>
<updated>2023-08-24T03:57:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Exploring the Role of Race and Place in Residential Solar Photovoltaic (PV) Adoption
Jackson, Joy Kelly
The urgency of addressing climate change and grid decarbonization in the United States necessitates the rapid deployment of clean energy technologies at scale.  Residential solar photovoltaic (PV) technologies have emerged in the past decade as one such technology as a result of substantial cost declines, though market penetration remains low.  New government initiatives and policy incentives have been enacted to encourage the uptake of these technologies; however, recent research has documented distributional challenges related to their deployment.  Building on emerging studies focused on the racial equity implications of residential solar PV deployment, this research implements a series of regression models on two national solar installation datasets, controlling for market, policy, and demographic variables.  The primary goal of this work is to systematically evaluate the effect of race and ethnicity on 1) the probability of a community having at least one solar installation and 2) the diffusion of solar PV technologies, defined as the total number of installations in a community.  Results indicate strong evidence that communities classified as majority-Black are associated with a decreased likelihood of having any solar at all, and fewer installations overall, in most of the specified models. The results vary for majority-Hispanic communities, with observed disparities present in some of the models. Controlling for certain demographic variables has differentiated effects for different racial and ethnic majority classifications, due to the cumulative impacts of socioeconomic disadvantage for those groups.  The study concludes with a discussion of policy implications, methodological limitations, and avenues for future policy research to support an equitable clean energy transition.
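&#13;
As a minimal sketch (synthetic placeholders, not the study’s specification or data), the two outcomes described, any adoption and total installation counts, might be modeled as follows:&#13;
&#13;
import numpy as np&#13;
import statsmodels.api as sm&#13;
&#13;
rng = np.random.default_rng(0)&#13;
# Stand-in covariates, e.g. a racial-majority indicator plus market/policy controls.&#13;
X = sm.add_constant(rng.normal(size=(500, 3)))&#13;
any_solar = rng.integers(0, 2, 500)    # outcome 1: at least one installation&#13;
counts = rng.poisson(3, 500)           # outcome 2: number of installations&#13;
print(sm.Logit(any_solar, X).fit(disp=0).params)&#13;
print(sm.GLM(counts, X, family=sm.families.Poisson()).fit().params)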
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Moving towards a more sustainable model of energy production &amp; consumption: a case for Indonesia</title>
<link href="https://hdl.handle.net/1721.1/151903" rel="alternate"/>
<author>
<name>Watel-Dehaynin, Tristan</name>
</author>
<id>https://hdl.handle.net/1721.1/151903</id>
<updated>2023-08-24T03:59:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Moving towards a more sustainable model of energy production &amp; consumption: a case for Indonesia
Watel-Dehaynin, Tristan
This thesis provides an assessment of Indonesia’s energy infrastructure following the decarbonization objectives set forth by the government at the G20 conference in Bali in 2022. Its goal is to compare the current models of production to local development objectives, and assess the state of key renewable energy sectors through the lenses of policy, technology, economic development, social stability and environmental conservation. It starts by providing historical context regarding the development of Indonesia as a country, looking at the influence of different civilizations over the land that is known today as Ibu Pertiwi. This assessment finds that the political and cultural spectrum of the country is highly diversified, and that democracy is still in the process of being fully established. The second part assesses the current policy environment and offers various tools to complement it. It finds that existing policy does not currently support the growing renewable energy industry. Solutions proposed include financial support for the national energy utility, an increase in the existing carbon tax, a phase out of fossil fuel subsidies, enhanced development of the private energy sector, and the application of energy standards. The third and final part reviews the growth of three key renewable energy markets: geothermal, solar and wind energy. It finds that, while resources are abundant, none of these markets have yet reached the pace of development expected by the government, mostly due to a lack of encompassing regulation, existing infrastructure and funding.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scenario Planning Framework &amp; Sensitivity Analysis for New Orthopedic Sets in the Spine Platform</title>
<link href="https://hdl.handle.net/1721.1/151902" rel="alternate"/>
<author>
<name>Vincent, Alura</name>
</author>
<id>https://hdl.handle.net/1721.1/151902</id>
<updated>2023-08-24T03:03:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Scenario Planning Framework &amp; Sensitivity Analysis for New Orthopedic Sets in the Spine Platform
Vincent, Alura
Spinal surgeries are a critical part of providing patients relief from spinal degenerative diseases or deformities. Johnson &amp; Johnson is developing a new spinal product in the Thoracolumbar space that builds on features of their legacy products in order to provide patients with high-quality pain relief.&#13;
&#13;
The goals of this project are twofold within the Thoracolumbar spine family of products. The first goal is to understand how inventory can be modeled over a long time horizon for a new product launch. Forecasting over a long time horizon is difficult due to uncertainty, and the difficulty is exacerbated for new products by a lack of historical data. The second goal is to understand the implications of various product launch scenarios on the broader Spine product family.&#13;
&#13;
To accomplish the first goal, a baseline model was created and a sensitivity analysis was conducted to analyze the impacts of changing prices and cost of goods sold on the profitability of the product family. The second goal was approached by developing a scenario model framework for product launches within the Spine business. The baseline model provided the team with an understanding of the most critical drivers of gross profitability for this product. The scenario framework provided a structured way for the team to identify and prioritize scenarios.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Farm-scale Water Management in Adaptation to Climate Change in Morocco</title>
<link href="https://hdl.handle.net/1721.1/151901" rel="alternate"/>
<author>
<name>Vasseur Bendel, Aurélien</name>
</author>
<id>https://hdl.handle.net/1721.1/151901</id>
<updated>2023-08-24T03:18:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Farm-scale Water Management in Adaptation to Climate Change in Morocco
Vasseur Bendel, Aurélien
Morocco is already experiencing high levels of water scarcity, and rainfall is predicted to decrease by 20% to 50% under different climate change scenarios. As Morocco currently relies on large reservoirs built to achieve basin-scale water resources management, small-scale reservoirs are investigated here as a possible way to adapt to these drier conditions and to collect overland flow for irrigation purposes before its evaporation or infiltration into the ground. We investigate a potential shift from basin-scale to farm-scale water resource management. A prototype of such small reservoirs has been built in the experimental farm of Benguerir, and this thesis studies its catchment as well as the extent to which this technology could scale up in other regions of Morocco. Runoff production in the form of overland flow is simulated according to the Green-Ampt model while considering the formation of a thin crust of clay typical of dry environments such as southern Morocco. Overland flow is used as input to different models of reservoir management in order to determine the optimal capacity of a potential reservoir in a particular location as a function of its catchment area, rainfall pattern, soil type, cost of construction, water price, and crop water requirements. Within reasonable assumptions, capacities close to the reservoir in Benguerir (4000 m³) are estimated. However, the results are sensitive to multiple partially unknown parameters such as soil heterogeneity, the intra-day distribution of rainfall, and the ratio between construction cost and water price.
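&#13;
As an illustrative sketch (textbook Green-Ampt form with placeholder parameters, not the thesis’s crusted-soil variant), runoff production can be stepped through time as follows:&#13;
&#13;
def runoff_depth(rain, dt, ks=5e-6, psi=0.2, dtheta=0.3):&#13;
    # rain: rainfall rates (m/s) per step; dt: step length (s); ks: saturated&#13;
    # hydraulic conductivity (m/s); psi: wetting-front suction head (m);&#13;
    # dtheta: soil moisture deficit. Returns total runoff depth (m).&#13;
    F, runoff = 1e-6, 0.0                       # cumulative infiltration, runoff&#13;
    for r in rain:&#13;
        f = ks * (1.0 + psi * dtheta / F)       # infiltration capacity (m/s)&#13;
        infil = min(r, f) * dt&#13;
        F += infil&#13;
        runoff += r * dt - infil                # excess rainfall becomes overland flow&#13;
    return runoff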
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Microneedles for Drug Delivery in Aquaculture</title>
<link href="https://hdl.handle.net/1721.1/151900" rel="alternate"/>
<author>
<name>Wolfe, Colleen</name>
</author>
<id>https://hdl.handle.net/1721.1/151900</id>
<updated>2023-08-24T03:25:31Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Microneedles for Drug Delivery in Aquaculture
Wolfe, Colleen
Aquaculture is a rapidly growing industry that can address increased food demand from population growth as well as overfishing and other environmental concerns from traditional fishing methods. However, a major challenge in aquaculture is the spread of disease due to the close quarters of the fish. Current fish vaccination methods include oral, immersion, and injection, with injection being the most effective but also the most cumbersome to implement. This project proposes an alternative: biocompatible microneedles that can be applied in situ and dissolve to release the drug. The focus of this project is the needle fabrication method and coating selection to provide the necessary mechanical strength to withstand aquatic environments. It was found that a two-step method of full dip coating followed by tip-only dip coating with a 33.3% w/w shellac/ethanol coating was able to fully coat hollow silk microneedles. Compression testing was done on individual needles in their dry state and after 30 minutes of soaking in deionized water and seawater. A constant increase in force from the onset of testing across all needles showed little difference between all samples observed, indicating that the needles should be able to puncture fish skin.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Laser-Induced Particle Impact Testing in High-Pressure Oxygen Environments</title>
<link href="https://hdl.handle.net/1721.1/151898" rel="alternate"/>
<author>
<name>Alyassini, Samair</name>
</author>
<id>https://hdl.handle.net/1721.1/151898</id>
<updated>2023-08-24T03:59:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Laser-Induced Particle Impact Testing in High-Pressure Oxygen Environments
Alyassini, Samair
Particle impact ignition is an important source of metal fires in the high-pressure oxygen environments found in the turbines of oxygen-rich turbopumps. Understanding of particle impact ignition has been hindered by experimental challenges in reproducing this phenomenon under controlled laboratory conditions. This study addresses these challenges through the development of a specialized particle impact rig that integrates laser-induced particle impact testing (LIPIT) into an oxygen-compatible pressure vessel, thus enabling precise control over environmental conditions (target temperature, oxygen pressure) as well as impact variables (particle size/shape, impact velocity). This thesis describes the design of the oxygen-compatible pressure vessel, emphasizing considerations such as stress analysis, materials selection, oxygen-compatibility, and integration with the LIPIT system. The thesis concludes with pathfinding experiments successfully demonstrating particle ignition in a prototype rig, providing in situ images of single particle ignition events using application-relevant materials and particle sizes. Future work will use this rig to characterize the effects of operating conditions and material choices on susceptibility to particle impact ignition with a view toward developing more durable oxygen-compatible hardware for next-generation staged combustion rocket engines.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Impact of Biochemical and Mechanical Stimuli on Motor Neuron Growth</title>
<link href="https://hdl.handle.net/1721.1/151897" rel="alternate"/>
<author>
<name>Bu, Angel</name>
</author>
<id>https://hdl.handle.net/1721.1/151897</id>
<updated>2023-08-24T03:46:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Investigating the Impact of Biochemical and Mechanical Stimuli on Motor Neuron Growth
Bu, Angel
Peripheral nerve injuries are among the most prevalent trauma injuries, and the current gold standard for treatment is an autologous nerve graft. The use of tissue engineered nerve grafts, in which a neural scaffold, cellular or acellular, is utilized to promote nerve repair, has increased in recent years. Prior research has shown that transcutaneous optogenetic stimulation of grafted engineered tissue can promote reinnervation and angiogenesis in a rat volumetric muscle loss model. Our study queried the individual effects of biochemical and mechanical stimuli on neuronal growth to push the field of neuromuscular systems forward. We utilized optogenetic stimulation to emulate the biochemical effects and a magnetic fibrin platform to isolate the mechanical effect. To develop our magnetically actuatable substrate, we optimized a fibrin hydrogel with a stiffness similar to that of skeletal muscle. Then, we added rectangular segments of 1:10 PDMS with 25% v/v 4 micron iron microparticles. These rectangular segments within the fibrin hydrogel were then cyclically actuated by a permanent neodymium magnet. Our results showed a substantial increase in neurite outgrowth in the experimental group, which was supplemented with optogenetically exercised media from a muscle monolayer. The isolated biochemical effect was a substantial increase in the rate of neurite growth between the groups. In our preliminary neuromuscular system, we saw a degree of co-localized alignment between the neurites and differentiated muscle; this neuromuscular protocol appears to share physiological alignment similarities with in vivo tissue. We quantified alignment through a Fast Fourier Transform of the image data of the separate imaging channels, RFP for muscle and GFP for motor neurons. Finally, our magnetic fibrin platform found no significant increase in myofiber length and width when mechanical stimulation was applied after myoblast differentiation. In future research, we are exploring biochemical and magnetic stimulation of our neuromuscular co-culture and actuating the myoblasts at an earlier cellular stage to impact alignment. In conclusion, our studies found that stimulated media aids in neurite outgrowth. In future research, we will perform RNAseq on our systems to verify the specific biological pathways and upregulated growth factors.
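&#13;
As an illustrative sketch (not the study’s analysis code), an FFT-based alignment measure of the kind described might look like the following, where a sharply peaked orientation histogram indicates strong alignment:&#13;
&#13;
import numpy as np&#13;
&#13;
def orientation_histogram(img, nbins=36):&#13;
    # Anisotropy of the 2D power spectrum reflects fiber alignment in the image&#13;
    # (applied per channel: RFP for muscle, GFP for motor neurons).&#13;
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2&#13;
    h, w = spec.shape&#13;
    yy, xx = np.mgrid[0:h, 0:w]&#13;
    ang = np.arctan2(yy - h // 2, xx - w // 2) % np.pi   # orientation of each bin&#13;
    hist, _ = np.histogram(ang, bins=nbins, range=(0, np.pi), weights=spec)&#13;
    return hist / hist.sum()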
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Temperature and Thermal Noise Suppression for Precision Mechanical Experiments</title>
<link href="https://hdl.handle.net/1721.1/151896" rel="alternate"/>
<author>
<name>Fife, Dylan</name>
</author>
<id>https://hdl.handle.net/1721.1/151896</id>
<updated>2023-08-24T03:00:44Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Temperature and Thermal Noise Suppression for Precision Mechanical Experiments
Fife, Dylan
There is currently a lack of experiments that could prove whether gravity exists as a quantum field. One possible proof of the quantum nature of gravity would be to entangle massive quantum harmonic oscillators. The quantum harmonic oscillator acts as a resonant sensor for the entanglement with gravity. The quality factor of a resonant sensor must be sufficiently high that the sensor is not dominated by thermal noise and can be cooled to the ground state. This thesis develops scaling laws for the interaction between the size of a mass bonded to a membrane resonator and the resonator's quality factor. With such a resonator, the entanglement is anticipated to be weak and requires extensive averaging to achieve statistically significant measurements. As such, the creation of a stable long-term environment is critical. Thus, the temperature of the lab where the experiment will be run was stabilized to an integrated deviation of 20 mK, down from 1 K. This resulted in a reduction of laser position noise by a factor of 2.7.
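For context, the role of the quality factor can be made explicit with the standard fluctuation-dissipation expression for the thermal force noise driving a resonator of mass m, resonance frequency \omega_0, and quality factor Q at temperature T (a textbook relation, not a result of this thesis):

    S_F(\omega_0) = \frac{4 k_B T m \omega_0}{Q}

so raising Q, or lowering T, directly suppresses the thermal force power spectral density that competes with the sought gravitational signal.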
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Layer-by-Layer Single-crystal Two-dimensional Material Growth by Geometric Confinement</title>
<link href="https://hdl.handle.net/1721.1/151894" rel="alternate"/>
<author>
<name>Lee, Doyoon</name>
</author>
<id>https://hdl.handle.net/1721.1/151894</id>
<updated>2023-08-24T03:24:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Layer-by-Layer Single-crystal Two-dimensional Material Growth by Geometric Confinement
Lee, Doyoon
Two-dimensional (2D) transition metal dichalcogenides (TMDs) and their heterostructures have been widely studied for next-generation electronics. However, the following critical challenges have hindered their commercialization: 1) precise layer control during growth, 2) maintaining single crystallinity at wafer scale, and 3) the transfer process otherwise required to fabricate heterostructures for various next-generation applications such as spintronics, valleytronics, and optoelectronics.&#13;
&#13;
This thesis introduces a confined-growth technique that can overcome the aforementioned hurdles simultaneously by introducing a geometric SiO₂ mask that has growth selectivity from the underlying substrate. As micrometer-scale SiO₂ trenches reduce the growth duration substantially, single-domain WSe₂ and MoS₂ arrays are obtained on an arbitrary substrate at wafer-scale by filling the trenches before the second layer of nuclei is introduced, thus enabling layer-by-layer growth without requiring epitaxial seeding.&#13;
&#13;
In addition, subsequent MoS₂ growth on the WSe₂ arrays yields MoS₂/WSe₂ heterostructures. We therefore demonstrate, for the first time, single-domain TMD arrays and their heterostructures at wafer scale with controllable thickness, whose performance is comparable to that of devices fabricated from exfoliated TMD flakes. This confined-growth technique not only overcomes key obstacles for 2D materials but also provides a platform with great potential for next-generation 2D-material-based applications.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Redesigning diabetic foot risk assessment for amputation prevention in low-resource settings: Development of a purely mechanical plantar pressure evaluation device</title>
<link href="https://hdl.handle.net/1721.1/151893" rel="alternate"/>
<author>
<name>Reddie, Madison</name>
</author>
<id>https://hdl.handle.net/1721.1/151893</id>
<updated>2023-08-24T03:50:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Redesigning diabetic foot risk assessment for amputation prevention in low-resource settings: Development of a purely mechanical plantar pressure evaluation device
Reddie, Madison
As global diabetes rates skyrocket, diabetic foot complications constitute a massive and rapidly growing global health problem, causing one million lower-extremity amputations every year. These amputations are typically preceded by largely preventable diabetic foot ulcers (DFUs). However, 80% of the world’s more than half a billion diabetics now live in low- and middle-income countries, where many healthcare settings lack the resources to implement recommended diabetic foot risk assessment and risk-based DFU prevention practices. Thus, the objective of this thesis was to redesign diabetic foot risk assessment specifically for low-resource settings in order to enable more efficient resource allocation for amputation prevention.&#13;
&#13;
To this end, a novel, low-cost, purely mechanical plantar pressure evaluation device was designed. The device consists of a grid of plastic bistable compliant mechanisms whose geometries can be tuned to generate a desired pressure threshold at which one part moves to a second stable position. The grid therefore presents a visual series of binary outputs in response to applied pressure. By having diabetic patients step on the device, non-specialist healthcare providers can easily assess patients' plantar pressures, which are known to be predictive of future DFU. A prototype was used to solicit feedback from 20 healthcare providers in Kenya. A design iteration was conducted based on their feedback, and an updated prototype was fabricated. The ability of this prototype to detect high plantar pressures was tested in a study with 41 healthy subjects. The prototype demonstrated a specificity of 100% and a sensitivity of 25.6%, though sensitivity reached 60% for heavier subjects. Sensitivity could likely be significantly improved by lowering the device's profile and increasing the sensing area. Strained health systems may then be able to use this device to allocate scarce healthcare resources more efficiently to prevent costly DFUs and amputations.
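As a worked note on the screening metrics quoted above, sensitivity and specificity follow directly from confusion-matrix counts; a minimal sketch (the counts below are placeholders, since the abstract does not report the underlying confusion matrix):

    def sensitivity_specificity(tp, fn, tn, fp):
        # Sensitivity: fraction of truly high-pressure feet the device flags.
        # Specificity: fraction of normal-pressure feet correctly left unflagged.
        return tp / (tp + fn), tn / (tn + fp)

    # Placeholder counts, for illustration only.
    sens, spec = sensitivity_specificity(tp=10, fn=29, tn=30, fp=0)
    print(sens, spec)  # about 0.256 and 1.0, i.e. 25.6% and 100%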
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing Hydrodynamic Interactions of Underwater Vehicles in Close Proximity Using an Identical Ellipse Pair</title>
<link href="https://hdl.handle.net/1721.1/151892" rel="alternate"/>
<author>
<name>Rhodes, Preston W.</name>
</author>
<id>https://hdl.handle.net/1721.1/151892</id>
<updated>2023-08-24T03:30:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Characterizing Hydrodynamic Interactions of Underwater Vehicles in Close Proximity Using an Identical Ellipse Pair
Rhodes, Preston W.
The hydrodynamic interactions between two identical 6:1 ellipses in close proximity were investigated using a 2D immersed interface method simulator in a viscous, rotational flow at Re=1500. Interactions in tandem, side-by-side, and staggered arrangements were characterized based on changes to the drag, lift, and yaw moment coefficients experienced by the ellipses. The drag and lift results agreed with existing studies of 2D cylinders performed in subcritical flow regimes. The drag interactions were divided into five regions based on changes to the individual ellipses and the overall system. The lift was repulsive and, for the closest parallel configurations, up to four times the value of drag. An overtaking maneuver was investigated by introducing a relative velocity between the ellipses. When both ellipses were moving, the lift was repulsive throughout the maneuver. The mean drag of the slower ellipse was mostly unaffected; although the largest instantaneous drag increase reached 2.5 times that of an isolated ellipse at the highest relative velocity, this was matched by a similar drag decrease in the second half of the maneuver. The drag of the faster ellipse was relatively unaffected by the overtaking maneuver. When one ellipse was stationary, the lift transitioned from repulsive to attractive as the moving ellipse passed the stationary ellipse. The stationary ellipse experienced a significant increase in mean drag at higher overtaking speeds, reaching more than half the value of an isolated ellipse moving at Re=1500. Its lift also changed significantly and was similar in magnitude to the drag. The overtaking ellipse experienced a three-to-four-fold increase in mean drag at all speeds, a thirty-fold increase in peak drag at the highest speed, and a mean lift similar in magnitude to the mean drag. The findings of this study can be used to inform fuel-efficient swimming configurations for underwater vehicles traveling in formation, as well as to increase safety when maneuvering in close proximity.
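For reference, the drag, lift, and yaw moment coefficients used in such 2D studies are conventionally normalized by the dynamic pressure and chord; with \rho the fluid density, U the ellipse speed, and c the chord length (standard definitions, forces and moments taken per unit span):

    C_D = \frac{F_D}{\tfrac{1}{2}\rho U^2 c}, \qquad C_L = \frac{F_L}{\tfrac{1}{2}\rho U^2 c}, \qquad C_M = \frac{M}{\tfrac{1}{2}\rho U^2 c^2}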
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparative Energy Efficiency Analysis for Hydrogen and Jet Fuel in Next-Generation Long-Haul Aircraft</title>
<link href="https://hdl.handle.net/1721.1/151889" rel="alternate"/>
<author>
<name>Salgado Bobadilla, Diego Andre</name>
</author>
<id>https://hdl.handle.net/1721.1/151889</id>
<updated>2023-08-24T03:23:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Comparative Energy Efficiency Analysis for Hydrogen and Jet Fuel in Next-Generation Long-Haul Aircraft
Salgado Bobadilla, Diego Andre
The aviation sector aims to reach net-zero CO₂ emissions by 2050. Consequently, the industry must rely on a fuel with no life-cycle CO₂ emissions. Liquid hydrogen offers the potential to provide zero in-flight CO₂ emissions and low life-cycle CO₂ if produced from non-fossil electricity. While wide-body aircraft account for approximately 43% of in-flight CO₂ emissions, few studies have focused on hydrogen-powered aircraft of this size. Additionally, the performance of these aircraft in off-design missions is not typically discussed in the literature. A first-principles based approach was used to model long-haul hydrogen-powered aircraft and quantify fuel burn performance across a range of off-design missions. No engine thermodynamic improvements from using cryogenic fuel were assumed. Furthermore, sensitivity analyses were performed with respect to aircraft design range, material structural strength, and engine performance. This study shows that hydrogen-powered aircraft require roughly 2% less fuel energy at the design mission than conventional jet fuel aircraft. However, hydrogen-powered aircraft require approximately 10-30% more fuel energy for off-design missions between 1,000 and 4,000 nmi compared to jet fuel aircraft. While reducing the design range to cover 95% of all wide-body flights decreases this off-design fuel burn penalty, LH₂ aircraft still have a 5-25% increase in energy required to fly missions between 1,000 and 4,000 nmi relative to conventional aircraft. Additionally, the study indicates that improving material strength or engine performance only has a marginal effect on the relative fuel energy required between LH₂ and jet fuel aircraft.
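For context, first-principles fuel-burn comparisons of this kind are typically anchored in the Breguet range equation, which makes the role of fuel specific energy explicit (a standard relation, not this thesis's full model):

    R = \eta_0 \, \frac{h}{g} \, \frac{L}{D} \, \ln\frac{W_i}{W_f}

where h is the fuel's lower heating value (roughly 120 MJ/kg for LH₂ versus roughly 43 MJ/kg for jet fuel), \eta_0 the overall propulsion efficiency, L/D the lift-to-drag ratio, and W_i/W_f the initial-to-final weight ratio; hydrogen's high h competes against the mass and volume of its cryogenic tankage, which is one driver of the off-design penalties discussed above.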
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Protecting Our Investment: Solving Fast Response Cutter Corrosion</title>
<link href="https://hdl.handle.net/1721.1/151888" rel="alternate"/>
<author>
<name>Patnode, Isabelle Claire</name>
</author>
<id>https://hdl.handle.net/1721.1/151888</id>
<updated>2023-08-24T03:36:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Protecting Our Investment: Solving Fast Response Cutter Corrosion
Patnode, Isabelle Claire
The USCG Fast Response Cutter (FRC) fleet is experiencing corrosion at an alarming rate in the propulsion shaft tunnels. An investigation into this problem was conducted from the perspectives of “root cause” and “prevention.” Root causes for the corrosion stem from an interaction in a complex, two-stage galvanic protection system on board the ship that uses both passive zinc protection and impressed current cathodic protection (ICCP) from an active, feedback-controlled power supply. Custom measuring instruments were built and deployed on an in-service FRC to better understand the complications with galvanic protection, yielding crucial insights. The ICCP power supply unit is intended to prevent corrosion by actively injecting current through anodes in order to raise the magnitude of the voltage measured between the reference electrode and the hull. When the FRC was designed, it was expected that a combination of ICCP and passive zincs would protect the hull steel in tandem; however, this has not been the case along the entirety of the ship. The ICCP system is unable to accurately determine the reference potential, a useful indicator of whether the hull steel is adequately protected from corrosion, in every area of the ship, allowing some areas to corrode at an accelerated rate. This report details a full summary of the analysis and results, along with a review of laboratory experiments and field experiments with several FRCs in the USCG fleet, concluding with specific, actionable suggestions for mitigating corrosion in the FRC stern tube. Additionally, this report outlines how non-intrusive load monitoring, which has a proven track record for preemptively recognizing faults in shipboard equipment, was used to analyze the ICCP system, and how this relates to shipboard microgrids.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Hydrogel Adhesive Marine Sensing System: Design,&#13;
Mechanism, and Applications</title>
<link href="https://hdl.handle.net/1721.1/151887" rel="alternate"/>
<author>
<name>Duque Londono, Camilo</name>
</author>
<id>https://hdl.handle.net/1721.1/151887</id>
<updated>2023-08-24T03:44:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Hydrogel Adhesive Marine Sensing System: Design,&#13;
Mechanism, and Applications
Duque Londono, Camilo
Marine animals offer a wealth of knowledge that goes beyond their role as a protein source for humans. Through careful observation, they offer valuable insights into the health of our oceans and provide inspiration for the design and control of unmanned underwater vehicles. Additionally, research into their migration patterns and responses to external stimuli such as sonar, drilling, and offshore energy production is important for informing government agencies and engineers of the potential effects of such activities on local fauna.&#13;
&#13;
Traditionally, sensors used to gather data from marine animals have been invasive and cumbersome, involving the use of subcutaneous anchors, bolts, or sutures. These traditional methods limit studies to large, resilient animals such as dolphins and whales, while smaller, more fragile animals remain understudied. In this study, a hydrogel adhesive marine tagging system has been developed that offers rapid (less than 20 seconds), robust (interfacial toughness &gt; 160 J m⁻²), conformable, and non-invasive sensor integration on a variety of marine tissues, particularly soft and flexible ones. This system was tested on live marine animals with varying topographical features, from soft skins to hard shells, to evaluate its effectiveness against current methods. The system was then used to conduct a kinematic study of skate locomotion, using a sensor network deployed across a skate fin, to showcase how this tool could aid bio-inspired robotic studies. Further, hydrogel mechanics and design strategies are also presented, providing a deeper understanding of the adhesive system and its mechanism. Results from the various experiments show that this system has the potential to revolutionize the field by providing a reliable, quick, and non-invasive method of sensor adhesion.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating Aerosol Composition with Low Cost Optical&#13;
Particle Counters</title>
<link href="https://hdl.handle.net/1721.1/151886" rel="alternate"/>
<author>
<name>Sharpe, Will</name>
</author>
<id>https://hdl.handle.net/1721.1/151886</id>
<updated>2023-08-24T03:20:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Investigating Aerosol Composition with Low Cost Optical&#13;
Particle Counters
Sharpe, Will
Particulate matter (PM) is a serious threat to human health and contributes to millions of premature deaths a year globally. Access to source attribution and compositional data of PM can have many benefits, from easier regulation to enabling a better understanding of the negative health effects associated with PM. Acquiring compositional data for ambient PM generally has a high associated cost and is done using complex instrumentation, manual postprocessing, and labor-intensive lab work. These approaches produce very high quality data, but have low spatiotemporal resolution and a high cost. This work explores a novel method to generate basic compositional data for ambient PM with low-cost, easily deployable apparatuses in concert with a simple fully connected neural net. Simulated effects of thermal denuders as well as dryers/humidifiers are used to perturb aerosols before they enter simulated low-cost optical particle counters (OPCs). This provides information on the volatility and hygroscopicity of the aerosols. These OPC outputs are processed programmatically and fed into a neural net to classify what category an incoming aerosol belongs to. This method is run for both compound-derived categories which mimic real PM sources (Sea Salt, Biomass Burning, Dust, and Urban Smog), and property-derived aerosols which present more idealized conditions. The results of this method are near-perfect classification for single mode aerosol distributions and over 90% correct classification for two mode aerosol distributions. The results on the property-derived aerosols have shown robustness to changing aerosol properties, as well as to changing apparatus and ambient conditions. This work provides proof of concept for future real world experiments to verify this method and presents an experimental setup for this purpose. Having access to compositional data for ambient PM should allow access to PM sources at a very high spatiotemporal resolution for a relatively low price. This basic source attribution could provide the data needed for better informed regulation as well as future scientific work.
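A minimal sketch of the classification step described above, assuming concatenated OPC size-distribution counts (baseline, denuded, and humidified) as the feature vector; scikit-learn is used for brevity, and the file names are hypothetical:

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # X: rows of concatenated OPC bin counts; y: aerosol category labels,
    # e.g. sea salt / biomass burning / dust / urban smog.
    X = np.load("opc_features.npy")   # hypothetical feature file
    y = np.load("labels.npy")         # hypothetical label file
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    clf.fit(X_train, y_train)
    print("held-out classification accuracy:", clf.score(X_test, y_test))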
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Understanding Soil Carbon Signatures from Hyperspectral Reflectance Data using Spectral Unmixing</title>
<link href="https://hdl.handle.net/1721.1/151885" rel="alternate"/>
<author>
<name>Zeng, Xinyi</name>
</author>
<id>https://hdl.handle.net/1721.1/151885</id>
<updated>2023-08-24T03:01:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Understanding Soil Carbon Signatures from Hyperspectral Reflectance Data using Spectral Unmixing
Zeng, Xinyi
Soil carbon stocks have been depleted mainly due to human activities, and there is a potential for soil carbon sequestration through regenerative agricultural practices, forest restoration, and similar interventions. The traditional laboratory treatments of soil samples provide ground-truth soil carbon content, but they are usually costly, time-consuming, and provide only one-time measurements with limited spatial coverage and resolution. The accelerating development of soil spectroscopy offers an opportunity for cheaper, more immediate, and continuous measurements of soil carbon content. Moreover, the recent advancement of hyperspectral imagers has significantly increased spectral resolution, allowing more granular information to be captured. These devices offer a potentially more accurate methodology to quantify and monitor soil properties globally. Nevertheless, there is no consensus on the optimal practices for soil carbon content estimation using hyperspectral reflectance data. Therefore, this thesis tests whether it is feasible to leverage spectral linear mixing models to decompose soil hyperspectral reflectance data into interpretable soil component spectral signatures and abundances. The results demonstrate that the proposed spectral linear mixing model can predict the soil organic carbon (SOC) spectral signature and mass abundance with nearly zero average bias. However, biases can still be significant for certain spectra. To reduce these biases, it is essential to characterize the problem more effectively. Dedicated soil spectral data collection efforts designed explicitly for unmixing applications could enhance the quality of the results and contribute to a more comprehensive understanding of the SOC spectrum and abundance. These findings motivate further development and refinement of spectral mixing models, as well as research into the application of hyperspectral reflectance data to soil property analysis.
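A minimal sketch of the linear-mixing decomposition described above, assuming a known endmember matrix and using scipy's non-negative least squares (the file names and endmember ordering are hypothetical):

    import numpy as np
    from scipy.optimize import nnls

    # E: columns are endmember reflectance spectra (e.g. SOC, minerals, water);
    # r: a measured soil reflectance spectrum on the same wavelength grid.
    E = np.load("endmembers.npy")    # shape (n_wavelengths, n_components)
    r = np.load("reflectance.npy")   # shape (n_wavelengths,)

    abundances, residual = nnls(E, r)           # non-negative abundances
    abundances = abundances / abundances.sum()  # normalize to fractions
    print("estimated SOC abundance:", abundances[0])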
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Performance Analysis of Frequency-Shift Keyed Transmitter using Rapidly Tunable Lasers</title>
<link href="https://hdl.handle.net/1721.1/151883" rel="alternate"/>
<author>
<name>Pan, Carol</name>
</author>
<id>https://hdl.handle.net/1721.1/151883</id>
<updated>2023-08-24T03:27:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Performance Analysis of Frequency-Shift Keyed Transmitter using Rapidly Tunable Lasers
Pan, Carol
Optical Frequency Shift Keying (FSK) is a modulation scheme that encodes data in the wavelength of a carrier signal. Due to the large amount of optical bandwidth available in the erbium, 1.55-&#120583;m telecom band, FSK can potentially utilize a wide spectrum to achieve multi-Gb/s channel bandwidths. For free-space laser communication (lasercom) applications, links are usually point-to-point, have narrow beamwidths, and do not need to share a transmitting medium with other signals. Therefore, many lasercom applications could exploit the benefits of FSK by trading off spectral efficiency for power efficiency. This thesis investigates an FSK transmitter implementation utilizing a single, rapidly tunable laser, allowing scalability to high values of M-ary FSK, where M represents the number of wavelengths in the symbol constellation. This work proposes and implements a design for an FSK-modulated transmitter using a modulated-grating, Y-branch tunable laser, and assesses its suitability for lasercom applications.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Design and Fabrication Pipeline for Integrating Rotary Encoders into 3D Printed Mechanisms</title>
<link href="https://hdl.handle.net/1721.1/151882" rel="alternate"/>
<author>
<name>AlAlawi, Marwa</name>
</author>
<id>https://hdl.handle.net/1721.1/151882</id>
<updated>2023-08-24T03:02:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Design and Fabrication Pipeline for Integrating Rotary Encoders into 3D Printed Mechanisms
AlAlawi, Marwa
In this thesis, we introduce MechSense: rotary encoders 3D-printed in one pass alongside rotational mechanisms. MechSense encoders report on their angular position, direction of rotation, and speed. MechSense encoders utilize capacitive sensing by integrating a floating capacitor into the rotating element and three capacitive sensor patches in the stationary part of the mechanism. Unlike existing rotary encoders, MechSense does not require manual assembly and can be effortlessly integrated during design and fabrication. MechSense is accompanied by an editor that allows users to integrate the encoder within a rotating mechanism.&#13;
&#13;
We contribute a sensor topology and a computational model that can compensate for print deviations. We also evaluate our sensing model for angular position detection (mean error: 1.4°) across multiple prints and rotations, different spacings between sensor patches, and different sensor sizes. Finally, we demonstrate MechSense through three application examples: 3D-printed tools, tangible UIs, and gearboxes.
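As a purely illustrative sketch of how an angle might be decoded from three phase-offset capacitance readings, by analogy with a three-phase resolver (this analogy is an assumption for illustration; MechSense's actual sensing model, including its print-deviation compensation, is the one described above):

    import math

    def angle_from_patches(c1, c2, c3):
        # Clarke transform: project three readings offset by 120 degrees
        # onto two orthogonal components, then recover the angle with atan2.
        alpha = c1 - 0.5 * (c2 + c3)
        beta = (math.sqrt(3.0) / 2.0) * (c2 - c3)
        return math.degrees(math.atan2(beta, alpha)) % 360.0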
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preservation and Deployment of Biofertilizers to Mitigate Soil Phosphorous Loss from Agricultural Systems</title>
<link href="https://hdl.handle.net/1721.1/151881" rel="alternate"/>
<author>
<name>Barghouti, Zeina</name>
</author>
<id>https://hdl.handle.net/1721.1/151881</id>
<updated>2023-08-24T03:04:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Preservation and Deployment of Biofertilizers to Mitigate Soil Phosphorous Loss from Agricultural Systems
Barghouti, Zeina
The world is approaching peak phosphorus within the next 20 years, which poses a severe threat to the food security of a rapidly growing global population. Phosphorus is the second most essential macronutrient for plants after nitrogen, and its scarcity results in stunted growth, poor root development, and reduced agricultural crop yields. While phosphorus is naturally present in the soil, it is often not available in sufficient quantities or in a form that plants can take up. The use of phosphate-rich fertilizers has compensated for the low levels of phosphorus in agricultural systems; however, up to 95% of these fertilizers become "fixed" in the soil, causing environmental damage and reducing soil fertility. It is critical to find sustainable ways to manage our depleting phosphorus resources and minimize the environmental impact of phosphate fertilizers. To address these challenges, this research introduces a framework that leverages silk-based biopolymer encapsulation to preserve and deliver phosphate solubilizing microorganisms to the soil on naturally occurring phosphate rocks. By enabling the revival of phosphate solubilizing bacteria and initiating the solubilization of adjacent phosphate rocks, this approach not only improves the accessibility of untapped phosphate resources for plant roots but also facilitates the continual solubilization of various forms of insoluble phosphate, including legacy phosphorus from past fertilizer applications. This research showed that phosphate solubilizing bacteria encapsulated in the biopolymer-coated phosphate rock remained viable after 30 days of storage and demonstrated effective solubilization of their host phosphate rock in solution. The addition of the biopolymer-coated phosphate rocks to chickpea seedlings showed a significant increase in the phosphorus content of chickpea leaves compared to the addition of uncoated rocks. Additional investigations can be undertaken to evaluate the potential of this framework as a controlled-release fertilizer by varying coating parameters such as material processing, biopolymer concentrations, and fertilizer amounts. The results of this thesis provide a foundation for further exploration and development of natural phosphate biofertilizers, bringing us one step closer to a more sustainable and resilient future in agriculture.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing the Thermal Behavior of Pyrolytic Graphite Sheets (PGS) at Low Interface Pressures</title>
<link href="https://hdl.handle.net/1721.1/151879" rel="alternate"/>
<author>
<name>Padilla, Joushua G.</name>
</author>
<id>https://hdl.handle.net/1721.1/151879</id>
<updated>2023-08-24T03:02:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Characterizing the Thermal Behavior of Pyrolytic Graphite Sheets (PGS) at Low Interface Pressures
Padilla, Joushua G.
As the United States Navy continues to pursue its goal of developing fully electric ships, the cooling of the critical electronic components on board must be solved. One of these critical components is the integrated Power Electronics Building Block (iPEBB), a universal converter that is programmed for its specific application when installed. The iPEBB is a modular unit that can be easily swapped by a single person. This unique modularity has led the Navy to pursue the design of a dry-interface liquid cooling system to cool the iPEBB. This means that no liquid can cross the boundary of the iPEBB, and thus the cooling system must be separate.&#13;
&#13;
In this thesis, an integral portion of the dry-interface cooling solution, the thermal interface material (TIM) between the cold plate and the iPEBB, was explored in a multitude of ways. First, commercially available TIMs were investigated for their thermal behavior at pressures below 10 PSI, as well as their structural qualities and usability metrics. Pyrolytic Graphite Sheets (PGS) were chosen for further investigation. Second, a fourth-order thermal conductivity model for PGS as a function of interface pressure was derived over the 0-10 PSI range. This model is important because it gives engineers conductivity inputs for PGS in any thermal modeling done for future iterations of the iPEBB or in other systems where PGS is used as a TIM. Third, the design and testing of an experimental rig (PPR) for testing thermal interface materials under various average pressures and pressure profiles was presented. An empirical model was developed that demonstrates the effect that the interface pressure profile has on component temperatures with PGS as the acting TIM between the cooling solution and the heated system. Finally, using the conductivity model, CFD simulations of the PPR experiments were run. These simulation results were then compared to the results of the PPR experiments, and it was found that using the conductivity model for PGS as an input to a CFD simulation is an effective way of modeling the contact resistance of PGS as a function of pressure. The conductivity model/CFD simulation setup has a mean error of 1.4°C ± 1.3°C between the simulation's predicted average resistor temperature and the actual average temperatures measured.&#13;
&#13;
The experiments and simulations conducted in this thesis provide a blueprint for the steps required to thermally model, using CFD, not only the iPEBB dry-interface cooling system but also other systems that might use PGS as a TIM. The information in this thesis will also help researchers model the thermal behavior of the iPEBB cooling system once a clamping mechanism for the iPEBB structure is designed.
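A minimal sketch of deriving the fourth-order conductivity-versus-pressure model described above from rig data (numpy polynomial fit; the data file names are hypothetical):

    import numpy as np

    # Measured interface pressure (PSI) and effective PGS conductivity (W/m-K),
    # e.g. from PPR experiments; the files here are placeholders.
    pressure = np.load("pressure_psi.npy")
    conductivity = np.load("k_eff.npy")

    coeffs = np.polyfit(pressure, conductivity, deg=4)  # fourth-order model
    k_model = np.poly1d(coeffs)
    print("k at 5 PSI:", k_model(5.0))  # usable as a CFD contact-resistance input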
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of the Prospects and Development of China's Online Healthcare Industry: Opportunities and Challenges</title>
<link href="https://hdl.handle.net/1721.1/151877" rel="alternate"/>
<author>
<name>Zhu, Xianmin</name>
</author>
<id>https://hdl.handle.net/1721.1/151877</id>
<updated>2023-08-24T03:53:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Analysis of the Prospects and Development of China's Online Healthcare Industry: Opportunities and Challenges
Zhu, Xianmin
With the rapid development of the internet and technology, the healthcare industry has undergone a significant transformation in recent years. The online healthcare industry, in particular, has emerged as a promising sector in China, providing a convenient and accessible way for people to access medical services and information.&#13;
&#13;
To analyze the online healthcare industry in China, this thesis employs two widely used frameworks: PESTEL analysis and Porter's five forces analysis. Furthermore, the thesis selects three major players in China's online healthcare industry, namely AliHealth, Ping'an Healthcare, and JDHealth, to conduct a detailed analysis of the competitive landscape. The analysis covers various aspects such as business models, product and service offerings, number of active users, and financial status using multiple metrics. Finally, the thesis discusses the risks, challenges, and potential solutions in this industry.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Optimization of Post Disaster Relief Structure</title>
<link href="https://hdl.handle.net/1721.1/151876" rel="alternate"/>
<author>
<name>Bharmal, Sabika</name>
</author>
<id>https://hdl.handle.net/1721.1/151876</id>
<updated>2023-08-24T03:33:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Optimization of Post Disaster Relief Structure
Bharmal, Sabika
This thesis covers the design and optimization of a post-disaster relief shelter as well as a custom connection design. The goal of this work is to propose new solutions for temporary shelters and to streamline the design process. In particular, the structure is designed for flood conditions in Pakistan and uses steel hollow structural sections (HSS). The design minimizes the number of unique parts, requires no power tools for assembly, uses all prefabricated elements, and meets the region's building codes for a typical residential home. Ultimately, the structure is a shelter that can be reused year after year by being assembled and disassembled as needed. This will help reduce material waste and the overall impact on the environment. For the design of the structure, two different methods were employed, one focusing on parametric modeling and one focusing on repetitive elements. Designs from each method were optimized and then compared to determine the best solution. Once the top design was selected, the members in the design were grouped and then replaced group by group to reduce the number of unique elements. Finally, the last part of the thesis addresses the design and prototyping of a custom steel node. The node is designed to connect eight HSS sections, with each element held by a single pin. Preliminary prototyping of the connection was also done using polymer and steel 3D printing methods. In conclusion, this thesis presents a workflow and design for a prefabricated shelter kit that can be assembled with no additional tools or materials while resisting all the appropriate loads for the area.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Investigating the Use of Inductive Transfer Learning and&#13;
RNN to Quantify Extreme Event Statistics of Ship Motions</title>
<link href="https://hdl.handle.net/1721.1/151874" rel="alternate"/>
<author>
<name>Kramer, Jarod</name>
</author>
<id>https://hdl.handle.net/1721.1/151874</id>
<updated>2023-08-24T03:45:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Investigating the Use of Inductive Transfer Learning and&#13;
RNN to Quantify Extreme Event Statistics of Ship Motions
Kramer, Jarod
Ship motion software has been a critical tool for designers studying the extreme responses of ships in irregular waves. These studies and simulations often take thousands of hours to predict and analyze a ship's motion. Simulation results are often imperative to the development of accurate operational guidance, typically in the form of plots advising the crew on safe course and speed combinations to avoid dangerous roll and pitch motions. Two programs in use by the Navy to fill this need are the fast, lower-fidelity SimpleCode program and the slower, higher-fidelity Large Amplitude Motion Program (LAMP). Previous efforts have developed a framework that leverages machine learning, through a Long Short-Term Memory (LSTM) network architecture, to augment the SimpleCode program by mapping its ship motion output to the more accurate LAMP output without adding significant computational overhead. Using an LSTM neural network to improve the SimpleCode output provides the opportunity to supply predictions and guidance to the crew in real time. However, the limits of this mapping across various sea domains remain to be discovered. By investigating these limits, a more generalized LSTM can be realized through inductive transfer learning and a model-agnostic meta-learning approach, one that leverages the training of previous networks to augment SimpleCode across a broader range of seas or to produce more accurate results on a narrow set of sea conditions after very few training samples.
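A minimal sketch of the LSTM mapping described above, which learns to correct SimpleCode motion time series toward LAMP output (PyTorch; channel counts, layer sizes, and variable names are assumptions):

    import torch
    import torch.nn as nn

    class MotionCorrector(nn.Module):
        # Sequence-to-sequence regression: low-fidelity motion channels in,
        # high-fidelity motion channels out.
        def __init__(self, n_channels=6, hidden=128):
            super().__init__()
            self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
            self.head = nn.Linear(hidden, n_channels)

        def forward(self, x):        # x: (batch, time, n_channels)
            h, _ = self.lstm(x)
            return self.head(h)      # predicted LAMP-like motions

    model = MotionCorrector()
    loss_fn = nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Training minimizes loss_fn(model(simplecode_batch), lamp_batch); transfer
    # learning then reuses these weights as the initialization for new sea states.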
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting the Interaction Between Energy Saving Devices on Surface Ships</title>
<link href="https://hdl.handle.net/1721.1/151873" rel="alternate"/>
<author>
<name>Uzoma, Jillian</name>
</author>
<id>https://hdl.handle.net/1721.1/151873</id>
<updated>2023-08-24T03:41:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Predicting the Interaction Between Energy Saving Devices on Surface Ships
Uzoma, Jillian
Greenhouse gas reduction technology is an important area of research that spans all industries. The looming carbon-neutrality deadlines are drawing closer, and change must occur in order to reduce carbon emissions and meet these goals. One of the sectors facing these deadlines is the commercial shipping industry. This research was motivated by Oldendorff Shipping Company, which aims to find the best method to reduce carbon emissions on its bulk carriers. This research effort involved collaboration between various labs across MIT’s campus that are each investigating different methods of reducing carbon emissions. This thesis, and my contribution to the project, involved investigating carbon emission reduction through the addition of drag reduction devices to bulk carriers.&#13;
&#13;
A literature review of existing energy saving devices was completed in order to understand what devices are in use today, how they work, how prevalent they are, and what drag reduction or energy saving claims are made. Often the available information was conflicting or the claims of energy savings were unfounded, so this literature review also involved comparing and analyzing sources. &#13;
&#13;
Two novel energy saving devices were explored: vortex generators and a morphing bow foil. A deep dive into how each of these devices works to reduce drag was completed, and experiments were carried out in the Parsons Laboratory Towing Tank. The vortex generator designs were iterated many times to optimize their shape, size, spacing, and location. The results, discussed herein, generally show that flow reattachment occurs and that, once scaled up to full scale, energy savings do occur. &#13;
&#13;
Finally, this thesis explored the concept of combining multiple devices at the same time. Meaningful combinations are ones that involve differing methods of drag reduction, so that the presence of both devices leads to additive savings. Three combinations were explored in depth: microbubbles with vortex generators, Grothues spoilers with Kappel blades, and a Becker Mewis duct with a rudder bulb.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Study of the Effects of Piston Secondary Motion on Piston Ring Conformability and Coolant Cavitation in Heavy-Duty Engines</title>
<link href="https://hdl.handle.net/1721.1/151870" rel="alternate"/>
<author>
<name>Bradt, Casey S.</name>
</author>
<id>https://hdl.handle.net/1721.1/151870</id>
<updated>2023-08-24T03:54:01Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Study of the Effects of Piston Secondary Motion on Piston Ring Conformability and Coolant Cavitation in Heavy-Duty Engines
Bradt, Casey S.
The transportation sector accounts for a significant share of global greenhouse gas emissions. A few particular modes of transportation, such as shipping and long-haul trucking, are dominated by heavy-duty diesel piston engines and are more difficult to electrify or decarbonize via other alternatives. Because of this, environmental efforts for those transportation modes focus on two goals: reducing emissions and improving fuel efficiency for existing internal combustion engine technologies. The work presented here is in two parts, each related to one of these design goals. &#13;
&#13;
As future designs aspire to reduce emissions, lubricating oil consumption (LOC) is a primary concern because it is a main emissions source. Bore distortion has arguably the largest effect on LOC due to its influence on piston ring-bore conformability. Thermal distortion and out-of-roundness of the cylinder caused by head bolt stresses are routinely considered in conformability analyses for ring pack designs. However, the effect of piston impact on the bore distortion for ring-liner conformability analyses has not been addressed, even though the magnitude of the piston impact distortions can be as high as those from thermal distortions. This piston impact effect was added to existing bore distortion and ring-liner conformability analysis techniques in the work documented here. The simulation workflow incorporated piston secondary motion and oil transport, transient structural finite element analysis of the cylinder, and a curved beam ring-liner conformability model. Significantly higher ring-liner clearance and higher contact were observed. As a result, higher oil leakage, wear, and combustion gas blow-by may become substantial and design adjustments may be warranted. &#13;
&#13;
Separately, in an attempt to achieve improved fuel efficiency, design efforts are often opposed by obstacles such as durability issues. One such durability issue is cavitation erosion in wet-liner engines. Cavitation erosion can cause tremendous damage and is often caused by vibrations in the liner driven by piston slap and other piston secondary motions. Piston secondary motion intensifies with many design trends aimed at increasing engine efficiency, such as elevated combustion pressures and reduced structural weight. Thus, cavitation erosion acts as a barrier to higher-efficiency designs. To prevent cavitation erosion, designers generally must find solutions based on engineering intuition paired with experiment, often sacrificing frictional performance. The work documented here developed a physics-based modeling and simulation capability to predict cavitation erosion during the design process and thereby help overcome this barrier. A piston secondary motion software with consideration of oil transport was first used to calculate piston impact pressures. These impact pressures were then mapped to a coupled structure and fluids model of the liner and water jacket in Ansys. The developed Ansys model may employ either a one-way or two-way structure-fluids coupling. A preliminary parametric study was also done to investigate the influence of various piston design parameters on the cavitation behavior.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mechanics of Seashell Growth: Examining the relationship between incompatibility, shape, and internal stress</title>
<link href="https://hdl.handle.net/1721.1/151869" rel="alternate"/>
<author>
<name>Carberry, Dylan</name>
</author>
<id>https://hdl.handle.net/1721.1/151869</id>
<updated>2023-08-24T03:37:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Mechanics of Seashell Growth: Examining the relationship between incompatibility, shape, and internal stress
Carberry, Dylan
Seashells are a fascinating example of surface growth in nature. As they develop, both their macroscopic form and their internal microstructure evolve, with the latter transitioning between different layer sizes and orientations during the shell's growth process. Several studies have examined the morphogenesis of seashells, with some considering the kinematics of growth that lead to different eventual shapes, and others investigating the biochemical pathways of these processes. However, the role of internal mechanical stresses that may develop due to incompatibility has yet to be investigated. In this thesis, we present a framework that models shell growth continuously, with the aim of investigating the role of internal stresses in the structural changes that have been reported to occur within seashells. Considering an axisymmetric growing body and accounting for surface growth as an arbitrary sequence of additions of incompatible circular rings on its outer perimeter, we study the shape and mechanical forces that can develop throughout the shell's growth. Our findings show that incompatibility has a large impact on the shape of a shell during surface growth, especially during the early stages of development. This influence may be crucial in explaining the recorded crystallographic reorientation typical of various seashells.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Commissioning of a Hybrid Additive Manufacturing System Combining Inkjet Deposition and Laser Powder Bed Fusion</title>
<link href="https://hdl.handle.net/1721.1/151867" rel="alternate"/>
<author>
<name>Kutschke, Zach W.</name>
</author>
<id>https://hdl.handle.net/1721.1/151867</id>
<updated>2023-08-24T03:12:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Commissioning of a Hybrid Additive Manufacturing System Combining Inkjet Deposition and Laser Powder Bed Fusion
Kutschke, Zach W.
Capabilities to combine multiple metal and/or ceramic materials in single components, and/or to achieve desired gradients in composition, will advance the performance of future propulsion and energy conversion systems. Multi-material and gradient capabilities have been demonstrated for metals in both powder bed and directed deposition additive manufacturing (AM) techniques; however, the dimensional fidelity and spatial precision of composition control are limited for several reasons. Here, the design, fabrication, and preliminary validation of a new hybrid AM system combining inkjet printing with laser powder bed fusion (LPBF) for manufacturing compositionally graded components are presented. In the hybrid inkjet-LPBF process, ink is deposited in a two-dimensional pattern to define compositionally modified regions prior to, or following, the spreading of each powder layer. Solids (e.g., nanoparticles) in the ink combine with the base powder to achieve locally controlled in situ alloying within the AM process. Key design considerations for the system, including thermal isolation of the inkjet system, temperature control of the build volume (up to 500°C), and atmosphere control, are discussed.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Real-Time Radiation Detection within the Gastrointestinal&#13;
Tract</title>
<link href="https://hdl.handle.net/1721.1/151866" rel="alternate"/>
<author>
<name>McLymore, Crystan</name>
</author>
<id>https://hdl.handle.net/1721.1/151866</id>
<updated>2023-08-24T03:01:47Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Real-Time Radiation Detection within the Gastrointestinal&#13;
Tract
McLymore, Crystan
The risk of a radiation emergency is growing as the misuse of nuclear facilities or technologies by terrorists or rogue nation-states continues to increase. A radiation emergency could cause instantaneous and sustained large releases of penetrating radiation, which would cause exposed individuals to suffer from acute radiation syndrome (ARS). FDA-approved medical countermeasures to combat ARS are most effective when administered as soon as possible after exposure. Current methods to prevent morbidity and mortality require access to medical support and the proper use of radiation dosimetry. This work describes radiation monitoring internal to the gastrointestinal tract, which could provide a means of alerting the individual to their surroundings or triggering a drug delivery response. &#13;
&#13;
Internal radiation monitoring also has benefits in radiation therapy applications where injury to the gastrointestinal (GI) tract remains an unavoidable side effect due to its extension over a large surface area. Current in vivo dosimetry technology is only positioned in minimally invasive areas to monitor radiation, which increases the likelihood of delivered dose discrepancies in or near the treatment area. This work overcomes this limitation by demonstrating the use of PIN diode-based ingestible electronics to monitor radiation as required throughout the gastrointestinal tract.  &#13;
&#13;
The diode was first characterized in vitro for its response to X-ray and gamma radiation in temperature environments of 20°C to 40°C. Various sources were employed for characterization, including a 2525 Ci cesium source, a 2100 Ci cobalt source, a 320 kV X-ray irradiator, a linear accelerator (LINAC) with 6, 10, and 18 MV beam qualities, and a neutron beam sourced by a 5.7 MW nuclear reactor. An in vivo study was then performed in which the encapsulated diode was placed in a swine’s stomach, and 110 kVp X-ray images were captured of the swine’s abdominal region.&#13;
&#13;
The diode displayed repeatability within 3% in its detection of the tested gamma and X-ray sources. The diode also proved to be energy independent for absorbed doses below 3.5 Gy, as evidenced by the LINAC characterization. Radiation absorption in body tissue had a dominating effect on the diode output signal, as shown by comparing the in vitro and in vivo results. &#13;
&#13;
This study demonstrates successful, first-time in situ radiation detection directly from core body areas in a non-invasive manner. Real-time feedback on the received radiation dose to the GI tract allows for active monitoring of GI doses.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Additively Manufacturing High-Performance, Low-Cost Electrospray Ion Sources for Point-of-Care Mass Spectrometry</title>
<link href="https://hdl.handle.net/1721.1/151864" rel="alternate"/>
<author>
<name>Kachkine, Alex</name>
</author>
<id>https://hdl.handle.net/1721.1/151864</id>
<updated>2023-08-24T03:01:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Additively Manufacturing High-Performance, Low-Cost Electrospray Ion Sources for Point-of-Care Mass Spectrometry
Kachkine, Alex
Clinical mass spectrometry relies on ionization of liquid biological samples, often via electrospray. This work broadly leverages additive manufacturing for the development of electrospray emitters, doubling signal-to-clutter ratios relative to state-of-the-art. We demonstrate low-cost integration in clinically-relevant diagnostics protocols by designing emitters into surface mount devices, the first of their kind, that can be directly soldered to printed circuit boards with built-in digital microfluidics as part of automated device assembly. The benefits in terms of scalability of this solution are coupled with advantages gained from simultaneously tuning surface hydrophilicity, solvent evaporation, and geometry. Electrospray emitter efficiency is optimized, approaching the direct field ion evaporation limit. Several materials and additive manufacturing processes to make the electrospray emitters are evaluated; comparative testing is conducted with conventional paper spray and coated blade spray. Microstructure characterization with scanning electron microscopy shows reproducible microfabrication of bulk techniques and compatibility with additive manufacturing feedstock. Geometrically and electro-fluidically optimized electrospray emitters attain 130% higher steady-state currents than state-of-the-art emitters. The devices use novel extractor electrode designs, reducing corona discharge and air breakdown, enabling operation at ~24% larger bias voltages compared to conventional cylindrical inlets. MS data is presented for ZnONW-coated emitters, detecting therapeutically relevant targets at 1 µg/ml concentrations with a variety of solvents. In the case of Nicardipine, such emitters attain 99% higher signal-to-clutter ratios versus state-of-the-art, with far greater operative stability. This thesis bridges the gap between additive manufacturing and high-performance electrospray for mass spectrometry, unlocking industrial development of clinically relevant, next-generation point-of-care ion sources.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Efficient Uncertainty Quantification of Turbulent Combustion Simulations via Kinetic Dimension Reduction</title>
<link href="https://hdl.handle.net/1721.1/151863" rel="alternate"/>
<author>
<name>Koenig, Benjamin C.</name>
</author>
<id>https://hdl.handle.net/1721.1/151863</id>
<updated>2023-08-24T03:25:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enabling Efficient Uncertainty Quantification of Turbulent Combustion Simulations via Kinetic Dimension Reduction
Koenig, Benjamin C.
Propagating uncertainties in kinetic models through combustion simulations can provide important metrics on the reliability and accuracy of a model, but remains a challenging and numerically expensive problem, especially for large kinetic mechanisms and expensive turbulent combustion simulations. Various surrogate model and dimension reduction techniques have previously been applied to reduce the cost of forward uncertainty propagation in combustion simulations, but these are often limited to low-dimensional, simple combustion cases with scalar solution targets. In the current work, a neural network-accelerated framework for identifying a low-dimensional active kinetic subspace was developed that applies to the entire temperature solution space of a flamelet table and can capture the mixture fraction and strain rate dependent effects of the kinetic uncertainty. The computational savings enabled by this novel framework were demonstrated through a proof-of-concept, flamelet-based application in a Reynolds-averaged Sandia Flame D simulation using a chemical mechanism for methane combustion with 217 reactions. By leveraging the large dimensional compression and low-cost scaling of the active subspace method, offloading the initial dimension reduction gradient sampling onto the laminar flamelet simulations, and accelerating the gradient sampling process with a specifically designed neural network, it was possible to estimate the temperature uncertainty profiles across the solution space of the turbulent flame with 70-85% accuracy using just seven perturbed solutions. Additionally, as it occurs entirely within the flamelet table, the cost of identifying the reduced subspace does not scale with the cost of the turbulent combustion model, which is a promising feature of this framework for future application to larger-scale and more complex turbulent combustion applications.
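A minimal sketch of the active-subspace identification at the core of this framework, via eigendecomposition of the averaged outer product of parameter gradients (a generic illustration, not the thesis's neural-network-accelerated implementation):

    import numpy as np

    def active_subspace(gradients, k):
        # gradients: (n_samples, n_params) array of solution sensitivities,
        # e.g. dT/dtheta for sampled kinetic rate perturbations theta.
        C = gradients.T @ gradients / gradients.shape[0]  # empirical covariance
        eigvals, eigvecs = np.linalg.eigh(C)
        order = np.argsort(eigvals)[::-1]                 # descending order
        return eigvecs[:, order[:k]], eigvals[order]

    # Uncertainty is then propagated by perturbing only the k active directions,
    # instead of all 217 reaction rate parameters.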
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>3D-Printed, Internally Fed Electrospray Thruster</title>
<link href="https://hdl.handle.net/1721.1/151862" rel="alternate"/>
<author>
<name>Kim, Hyeonseok</name>
</author>
<id>https://hdl.handle.net/1721.1/151862</id>
<updated>2023-08-24T03:27:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">3D-Printed, Internally Fed Electrospray Thruster
Kim, Hyeonseok
An electrospray thruster offers several benefits as a propulsion system for small satellites, including a lower power requirement when miniaturized and a broad range of thrust and specific impulse. However, traditionally it has been manufactured through microfabrication in a cleanroom, which is both expensive and time-consuming, and is not compatible with in-space manufacturing. Advances in 3D printing technology make it possible to create microstructures at a much lower cost than microfabrication; however, internally fed electrospray thrusters have only been fabricated in a cleanroom so far, primarily due to their high hydraulic resistance requirement. In this study, this problem was approached in two ways to 3D print the internally fed electrospray thruster. The first approach was optimizing the channel design, considering 3D printing resolution and electrospray physics. The second approach was the modification of liquid resin for 3D printing to expand the lower limit on the internal channel size. The characterization of a single-emitter device showed stable emission for multiple flow rates, with current and flow rate following the well-known scaling law of electrospray in cone-jet mode. The thrust and specific impulse estimates showed that the device performance is comparable to state-of-the-art microfabricated internally fed electrospray thrusters.
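For reference, the cone-jet scaling law mentioned above relates emitted current to flow rate as (the classic de la Mora-Loscertales form; \gamma is the surface tension, K the electrical conductivity, Q the volumetric flow rate, and \varepsilon the liquid's relative permittivity):

    I = f(\varepsilon) \sqrt{\frac{\gamma K Q}{\varepsilon}}

so in cone-jet mode the current grows as the square root of the flow rate, which is the trend the single-emitter characterization reports.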
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards low-cost context awareness on smart shelving using passive UHF RFID infrastructure</title>
<link href="https://hdl.handle.net/1721.1/151861" rel="alternate"/>
<author>
<name>Li, Heyi</name>
</author>
<id>https://hdl.handle.net/1721.1/151861</id>
<updated>2023-08-24T03:09:50Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Towards low-cost context awareness on smart shelving using passive UHF RFID infrastructure
Li, Heyi
An omni-channel strategy is a method of selling and promoting products that offers customers a comprehensive and cohesive shopping experience. However, this strategy relies on store managers having an accurate, real-time understanding of product availability at all their distribution and retail facilities. Smart shelving is an important avenue for furthering the development of omni-channel retailing and meeting people’s needs. This thesis primarily focuses on the construction of a low-cost context awareness infrastructure for smart shelving using passive UHF RFID tags and radio tomographic imaging (RTI) algorithms.&#13;
&#13;
First, location estimation without fingerprinting along one direction can reach an accuracy of 91.7% on four tested objects. Second, the number of stacked layers (from 1 to 3) when placing items on the shelf can be estimated, and an increase in product volume on the shelf could be related to tag RSSI level changes for five different tested products.&#13;
&#13;
In addition, material classification could be achieved from tag RSSI attenuations. Tests were done across three classes (metal, glass, and plastic), with three objects in each class. In the three-location tests, it is possible to clearly differentiate the three types of materials based on the variation in tag RSSI attenuation.&#13;
&#13;
Finally, the integration of battery-free environmental sensors is accomplished by incorporating an RFID tag equipped with resistance measurement capability and a photoresistor. By measuring the resistance of the photoresistor, the designed light sensor could provide additional information (besides the tag RSSI change) about the volume of material on a shelf. Moreover, this can be done using only a single UHF RFID Gen 2 protocol.
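As one way to make the material-classification step described above concrete, a nearest-centroid sketch over per-location RSSI attenuation features; the centroid values and the three-location feature layout are hypothetical stand-ins for the measured data:

    import numpy as np

    # Hypothetical mean RSSI attenuation (dB) per material at three
    # shelf locations, learned from labeled measurements.
    centroids = {
        "metal":   np.array([14.2, 12.8, 15.1]),
        "glass":   np.array([6.5, 5.9, 7.2]),
        "plastic": np.array([2.1, 1.8, 2.6]),
    }

    def classify(attenuation):
        """Return the material whose mean attenuation profile is closest."""
        dists = {m: np.linalg.norm(attenuation - c) for m, c in centroids.items()}
        return min(dists, key=dists.get)

    print(classify(np.array([13.0, 12.1, 14.4])))  # expected: metal

A classifier this simple suffices when, as reported above, the attenuation gaps between metal, glass, and plastic are large relative to measurement noise.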
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High energy density entrainment-based catalytic micro-combustor for portable devices in extreme environmental conditions</title>
<link href="https://hdl.handle.net/1721.1/151860" rel="alternate"/>
<author>
<name>Lin, Emily</name>
</author>
<id>https://hdl.handle.net/1721.1/151860</id>
<updated>2023-08-24T03:09:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">High energy density entrainment-based catalytic micro-combustor for portable devices in extreme environmental conditions
Lin, Emily
The increasing demand for low-cost, high-energy-density heat sources has motivated the development of compact and lightweight combustion-based devices. In this work, we first optimized the catalytic bed segmentation scheme to enhance fuel management in a mesoscale parallel-plate combustor. After contextualizing the driving parameters for combustion efficiency, we developed an energy-dense (≈236 MW/m³) entrainment-based catalytic micro-combustor for heating portable systems. The multichannel micro-combustor (coated with Pt/Al₂O₃ catalyst) leverages a copper-nichrome wire to enable quick, localized ohmic preheating (2-3 min). Furthermore, we demonstrated a low ignition temperature (108-125°C), which facilitates low energy consumption (~1948 J). In addition, by analyzing the reaction kinetics and species transport behavior in the microchannels, an optimal fuel flow rate (3.09×10⁻⁸ m³/s) was determined via FEM simulations and experiments that enables fuel savings (high fuel conversion) while achieving high heat fluxes. Additional FEM studies were performed to optimize the heat transfer between the high thermal mass and the combustor at the insulating mica sheet stack interface. Afterwards, through independent testing, we established the micro-combustor’s ability to maintain long-term autothermal combustion at a high saturation wall temperature (585°C), which was attained at short timescales to enable fast heating/cooling cyclability. The successful cyclic heating of large thermal mass additions (at least 41 times the micro-combustor’s mass), coupled with the combustor’s high energy density, shows promise for device-level implementation across a range of commercial, defense, and energy conversion applications. Finally, a combustor array was assembled and tested in an atmospheric water extractor (AWE) device under harsh environmental conditions, at temperatures ranging from 1.7°C to 43.3°C.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Internal Combustion Engine Performance using Aluminum as Fuel</title>
<link href="https://hdl.handle.net/1721.1/151859" rel="alternate"/>
<author>
<name>Pratto, Linda</name>
</author>
<id>https://hdl.handle.net/1721.1/151859</id>
<updated>2023-08-24T03:26:32Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Internal Combustion Engine Performance using Aluminum as Fuel
Pratto, Linda
The aluminum-water reaction has been proven as a concept for a safe, economical, and energy-dense storage mechanism for hydrogen fuel. One of the challenges facing aluminum-fuel technology is the sensitivity of hydrogen fuel cells to temperature, humidity, vibrations, and particulate contamination. This paper explores internal combustion engines as an alternative energy conversion method to hydrogen fuel cells for aluminum-fuel applications. Specifically, this paper characterizes the impact of steam on engine performance. The aluminum-water reaction is highly exothermic, resulting in a high-temperature mixture of steam and hydrogen. In a fuel cell system, additional components are required to cool and dry the hydrogen, which adds cost, weight, and complexity. By contrast, the higher temperature and steam content do not reduce the ability of an internal combustion engine to produce work up to molar water-fuel ratios of approximately 2.5. This work documents analytical predictions and experimental results characterizing the performance impact of steam on hydrogen internal combustion engines for use with aluminum fuel. For port-fuel-injected engines, the presence of steam reduces engine efficiency by about 8% but increases the overall system efficiency by about 9%. For direct-injection engines, the presence of steam increases engine efficiency by about 9% and overall system efficiency by about 13%.
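For reference, the hydrogen yield behind these system-efficiency numbers follows from the stoichiometry of the aluminum-water reaction (assuming the hydroxide product); a worked sketch:

    # Stoichiometry assumed here: 2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2,
    # i.e. 1.5 mol of hydrogen gas per mole of aluminum consumed.
    M_AL, M_H2 = 26.98, 2.016            # molar masses, g/mol

    mol_al = 1000.0 / M_AL               # moles of Al in 1 kg of fuel
    mol_h2 = 1.5 * mol_al
    print(f"H2 yield: {mol_h2 * M_H2 / 1000:.3f} kg per kg Al")  # ~0.112 kg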
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bayesian optimization and Cartesian-grid simulations for artificial reef design</title>
<link href="https://hdl.handle.net/1721.1/151858" rel="alternate"/>
<author>
<name>Ronglan, Edvard</name>
</author>
<id>https://hdl.handle.net/1721.1/151858</id>
<updated>2023-08-24T03:40:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Bayesian optimization and Cartesian-grid simulations for artificial reef design
Ronglan, Edvard
Coastal erosion threatens communities close to the shore worldwide, and it has become a significant concern in recent years due to increased sea levels and storm frequency driven by global warming. In the search for effective methods to prevent these effects, natural coral reefs have demonstrated wave energy dissipation comparable to artificial defenses while also benefiting the ocean ecosystem. This thesis therefore presents an artificial reef structure with a drag coefficient an order of magnitude higher than that of single structures, which positively impacts the ocean ecosystem by providing shelter for marine species. Energy dissipation was maximized using Bayesian optimization in combination with Cartesian-grid simulations and towing tank experiments. To ensure the structure’s strength, ease of implementation, and biocompatibility, the reef structures were designed to be porous. Finally, the complete artificial reef was constructed and tested in a towing tank with waves to assess its energy dissipation capabilities.
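A minimal sketch of a Bayesian optimization loop of the form assumed here: a Gaussian process surrogate with an upper-confidence-bound acquisition over one geometry parameter, with a placeholder objective standing in for a Cartesian-grid simulation or a towing tank measurement:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def dissipation(x):                       # stand-in for the expensive evaluation
        return np.sin(3 * x) * x + 0.1 * x

    X = np.array([[0.2], [1.5], [2.8]])       # initial reef-geometry designs
    y = np.array([dissipation(v[0]) for v in X])
    grid = np.linspace(0.0, 3.0, 301).reshape(-1, 1)

    for _ in range(10):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(X, y)
        mu, sigma = gp.predict(grid, return_std=True)
        x_next = grid[np.argmax(mu + 2.0 * sigma)]   # UCB acquisition
        X = np.vstack([X, [x_next]])
        y = np.append(y, dissipation(x_next[0]))

    print(f"best design: x={X[np.argmax(y)][0]:.2f}, dissipation={y.max():.3f}")

Each iteration spends one expensive evaluation where the surrogate’s optimistic estimate is highest, which is what makes the approach attractive when each evaluation is a CFD run or a tank test.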
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of Experimental Conditions on Fracture Research Using 3D Printed Materials</title>
<link href="https://hdl.handle.net/1721.1/151857" rel="alternate"/>
<author>
<name>Almubarak, Majed Abdulsattar</name>
</author>
<id>https://hdl.handle.net/1721.1/151857</id>
<updated>2023-08-24T03:07:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Effects of Experimental Conditions on Fracture Research Using 3D Printed Materials
Almubarak, Majed Abdulsattar
The fracturing behavior and mechanical characterization of rocks are important for many applications in the fields of civil, mining, geothermal, and petroleum engineering. Laboratory testing of rocks plays a major role in understanding the underlying processes that occur at larger scales and in predicting rock behavior. Fracturing research requires well-defined and consistent boundary conditions; consequently, the testing design and setup can greatly influence the results.&#13;
&#13;
In this study, a comprehensive experimental program using an artificial material was carried out to systematically evaluate the effects of different parameters in rock testing under uniaxial compression. The parameters include post-processing curing, printing orientation, compression platen type, specimen centering, loading control method and rate, specimen size, specimen cross-sectional geometry, boundary constraints, and flaw parameters.&#13;
&#13;
The specimens were prepared using a 3D stereolithography printer utilizing clear resin material. Identical pre-existing quasi-elliptical (ovaloid-shaped) flaws were placed in the center of each specimen. The specimens were subjected to unconfined compression using a Baldwin load frame. The testing setup included a high-speed camera and a high-resolution camera for visual analysis of the fracturing processes.&#13;
&#13;
The results show that these testing conditions have a significant effect on the mechanical behavior of rocks. Post-processing curing increases the strength of the material, with longer curing times resulting in higher material strength. Different printing orientations exhibit varying strengths. Using a fixed compression platen helped reduce bulging of the material. Centering of the specimen played a critical role in avoiding buckling and unequal stress distribution. Slower displacement rates can control the energy released once failure occurs, preventing the specimen from exploding. Larger specimens generally fail at lower stresses than smaller specimens. Frictional end effects were also investigated by comparing lubricated and non-lubricated end conditions. Importantly, the study also identified variations in crack initiation and propagation between specimens with internal flaws and specimens with throughgoing flaws: tensile wing cracks appeared in specimens with throughgoing flaws, while wing cracks with petal cracks were associated with internal flaws. It also showed that the mechanical properties are influenced by the inclination of the flaws and established that specimens with internal flaws generally exhibit higher material strength than specimens with throughgoing flaws.&#13;
&#13;
The systematic analysis presented in this work sheds light on important considerations that need to be taken into account when conducting fracture research and adds knowledge to the fundamental understanding of how fractures occur in nature.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Oscillating Energy Harvester for UUV Applications</title>
<link href="https://hdl.handle.net/1721.1/151856" rel="alternate"/>
<author>
<name>Stone, Lucas Kistner</name>
</author>
<id>https://hdl.handle.net/1721.1/151856</id>
<updated>2023-08-24T03:56:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Oscillating Energy Harvester for UUV Applications
Stone, Lucas Kistner
This thesis presents the design, modeling, and optimization of a novel oscillating energy harvester for use in a Bluefin-21 UUV. Real-world vessel acceleration data were used to optimize the harvester for four potential-energy profile configurations: free-floating, linear monostable, nonlinear monostable, and bistable. Active control was desired, and two strategies were explored but deemed too costly to implement. The performance of each configuration was evaluated, and the linear monostable model performed best, although, due to detuning concerns, the free-floating configuration is expected to outperform the linear model across a range of sea-state spectra. While the calculated power collection rate was insufficient for supplementing or recharging the main batteries, the harvester was found to be a promising alternative power source for an emergency location beacon, enabling continuous transmission as long as the UUV remained adrift. &#13;
&#13;
Keywords: oscillating energy harvester, UUV, floating, linear, nonlinear, monostable, bistable, control strategy
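A sketch of how the monostable and bistable potential profiles named above can be compared in simulation, using a damped Duffing-type proof mass driven by base acceleration; all parameters are illustrative, not the Bluefin-21 values:

    import numpy as np
    from scipy.integrate import solve_ivp

    m, c = 0.5, 2.0                                  # proof mass (kg), damping (N s/m)
    k1_mono, k1_bi, k3_bi = 200.0, -150.0, 4.0e5     # linear / double-well stiffness

    def rhs(t, s, k1, k3):
        x, v = s
        a_base = 0.8 * np.sin(2 * np.pi * 1.1 * t)   # proxy for vessel motion
        return [v, -(c * v + k1 * x + k3 * x**3) / m - a_base]

    t_eval = np.linspace(0, 60, 6000)
    for name, k1, k3 in [("monostable", k1_mono, 0.0), ("bistable", k1_bi, k3_bi)]:
        sol = solve_ivp(rhs, (0, 60), [0.01, 0.0], t_eval=t_eval, args=(k1, k3))
        p_avg = np.mean(c * sol.y[1] ** 2)           # mean power into the damper
        print(f"{name}: mean harvested-power proxy = {p_avg:.4f} W")

The damper power here is a proxy for electrical extraction; swapping the stiffness pair changes the shape of the potential well without touching the rest of the model.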
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Development and Application of Elastohydrodynamic Lubrication Model for Piston Pin</title>
<link href="https://hdl.handle.net/1721.1/151855" rel="alternate"/>
<author>
<name>Shu, Zhiyuan</name>
</author>
<id>https://hdl.handle.net/1721.1/151855</id>
<updated>2023-08-24T03:02:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Development and Application of&#13;
Elastohydrodynamic Lubrication Model for Piston&#13;
Pin
Shu, Zhiyuan
The piston pin, as the connection between the piston and the connecting rod, is a crucial component of the internal combustion engine. It transfers the cylinder pressure of combustion to the crankshaft and is subjected to high stress and harsh lubrication conditions. Pin seizure is a severe problem in new engine development, and coatings could be one solution. However, by advancing knowledge of the lubrication effect and the contact patterns on the pin’s surface, it may be possible to find more cost-effective remedies, such as modifying the profile or adding oil grooves.&#13;
&#13;
A numerical model was developed in this study to investigate the lubrication and dynamics of the piston pin, taking into account the deformation of the structures and oil cavitation. The model employs multi-body dynamics and elasto-hydrodynamic lubrication. A routine for generating and processing compliance matrices was created and improved. Additionally, a simple built-in run-in model was used to modify the profiles of the pin bore and the small end based on asperity contact pressure. To adapt to various oil supply situations, a method for controlling the boundary oil flow on the piston pin’s surface was also implemented.&#13;
&#13;
The model was then applied to a large-bore gas engine to simulate the piston pin’s rotation and frictional forces under different operating conditions. The simulation results indicate that hydrodynamic lubrication plays the dominant role in supporting the normal load after break-in, and that the direction and angular speed of the piston pin’s rotation are closely linked to the operating conditions. Experimental results were compared to the simulation, demonstrating the model’s reliability and accuracy.&#13;
&#13;
The second part of the thesis examines the oil supply boundary conditions at the boundaries of the lubrication areas. A computational fluid dynamics (CFD) model was established to analyze the flow of lubricating oil in the vicinity of the pin joints, revealing that the amount of lubricating oil supplied from different locations can vary. It was found that during high-speed reciprocating motion, lubricating oil may not remain on the piston pin’s surface long enough, particularly at the top and bottom. Lubricating oil flow, contact, and friction patterns under different oil supply conditions were analyzed and compared in a heavy-duty diesel engine model.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design Change Propagation in Complex Systems: Industry Processes and Perceptions</title>
<link href="https://hdl.handle.net/1721.1/151854" rel="alternate"/>
<author>
<name>Willis, Robin</name>
</author>
<id>https://hdl.handle.net/1721.1/151854</id>
<updated>2023-08-24T03:33:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design Change Propagation in Complex Systems:&#13;
Industry Processes and Perceptions
Willis, Robin
Unpredicted change propagation remains a large issue for industries dealing with complex system design, despite the numerous tools and processes developed by academics and professionals to help mitigate its effects. Not all changes and change propagations are undesirable, since change is instrumental to innovation, but left unchecked, unpredicted change propagations in particular can snowball, creating significant cost increases, time delays, and quality downgrades. This study presents findings from interviews with 32 experienced technical professionals, all of whom are accustomed to design work on complex systems. No standardized formal approach to these problems had been adopted by a majority of the organizations in this study, but several themes were consistently present. Informal communication between people on teams was the backbone of collaboration on these projects, even when formal change management structures were in place. Consequently, the state of the relationships between people working on the same project could affect the project’s outcome. The culture surrounding change at an organization also mattered, as it shaped how design change management activities were seen and the effect they could have on people’s careers. Time constraints added pressure to every situation interviewees described and could prevent “best practices” or new processes and tools from being enacted. This study introduces (1) several potential Change Propagation Risk Factors to help shed light on which types of project circumstances most influence how much of an issue unpredicted change propagation is for a workplace, and (2) the Project Management Pyramid, a new visualization of ties between project resources based on the Project Management Triangle but with the added dimension of professional relationships. This area of research would benefit from future work that gathers more data points than the interview format supports, applies objective forms of measurement to confirm the consequences and situations described by industry professionals, or develops a reliable metric for individual contributions to system-level successes.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Packaging Design for Remote Clinical Trial Operations</title>
<link href="https://hdl.handle.net/1721.1/151853" rel="alternate"/>
<author>
<name>Noh, Joyce</name>
</author>
<id>https://hdl.handle.net/1721.1/151853</id>
<updated>2023-08-24T03:26:01Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Packaging Design for Remote Clinical Trial Operations
Noh, Joyce
As remote clinical trials continue to revolutionize the old-fashioned clinical trial model and increase patient enrollment, it has become apparent that there is no regulatory framework in place to standardize their use. In traditional clinical studies, a designated clinician performs all physiological measurements on each subject in person, so there is very little independent user control and, therefore, little user-introduced error. In remote clinical settings, however, each participant’s physiological data and other necessary information must be collected over distance. This introduces the need for a “trial in a box,” or remote clinical trial kit. Both the participants and the facilities in charge of the kits play a role in how these kits need to be designed, manufactured, and handled. However, kit design clearly lacks standardization.&#13;
&#13;
This project provides a framework for human-subjects researchers to establish their own remote clinical trial operations. This thesis focuses specifically on designing the clinical trial kits used for the trials in this case, while also detailing design decisions to help standardize kit design for other remote clinical trials.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inequities in Air Pollution Exposure in the U.S.: An Exploration of Disparity Metrics Across Geographic and Temporal Scales</title>
<link href="https://hdl.handle.net/1721.1/151850" rel="alternate"/>
<author>
<name>Chen, Christina</name>
</author>
<id>https://hdl.handle.net/1721.1/151850</id>
<updated>2023-08-24T03:32:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Inequities in Air Pollution Exposure in the U.S.: An&#13;
Exploration of Disparity Metrics Across Geographic&#13;
and Temporal Scales
Chen, Christina
In the United States (U.S.), exposure to ambient PM₂.₅ (fine particulate matter smaller than 2.5 micrometers in diameter) is responsible for the largest share of premature deaths associated with air pollution. Despite declines in average annual concentrations, significant disparities in PM₂.₅ exposure between racial and ethnic groups persist. Existing research characterizes PM₂.₅ exposure disparities across a range of indicators, but few studies compare these metrics against one another or examine how they behave at different geographic scales and under demographic shifts over time. As policymakers begin to prioritize environmental justice concerns through the identification of disproportionately impacted communities, careful selection of indicators and metrics will be vital for ensuring that inequities are properly captured in decision-making processes.&#13;
&#13;
Using population demographics from the U.S. Census and land-use-regression PM₂.₅ concentration estimates from the Center for Air, Climate, and Energy Solutions (CACES), we compare calculations of absolute and relative exposure disparities at different geographic scales and under changing demographics. Further, we discuss the policy implications of our findings and provide recommendations for both regulatory and community-centered measures to address existing racial/ethnic disparities.
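A sketch of the two disparity metrics being compared, computed for a toy pollutant surface: population-weighted group mean exposure, reported both as an absolute gap and as a ratio against the overall mean (all numbers illustrative):

    import numpy as np

    def group_mean(conc, pop, mask):
        """Population-weighted mean exposure for the masked group."""
        return np.sum(conc * pop * mask) / np.sum(pop * mask)

    conc = np.array([9.1, 7.4, 11.0, 8.2])    # tract PM2.5, ug/m^3 (illustrative)
    pop  = np.array([1200, 800, 1500, 950])   # tract population
    grp  = np.array([1, 0, 1, 0])             # 1 = group of interest

    overall = np.sum(conc * pop) / np.sum(pop)
    exposed = group_mean(conc, pop, grp)
    print(f"absolute disparity: {exposed - overall:+.2f} ug/m^3")
    print(f"relative disparity: {exposed / overall:.3f}x overall mean")

The absolute gap and the ratio can rank scales or time periods differently, which is the kind of divergence the comparison above is designed to surface.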
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Privacy Law in Practice: Exploring Challenges to Modern Privacy Compliance</title>
<link href="https://hdl.handle.net/1721.1/151849" rel="alternate"/>
<author>
<name>Gulati-Gilbert, Sukhi</name>
</author>
<id>https://hdl.handle.net/1721.1/151849</id>
<updated>2023-08-24T03:57:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Privacy Law in Practice: Exploring Challenges to Modern Privacy Compliance
Gulati-Gilbert, Sukhi
Modern privacy legislation covers a broad data scope and introduces technically challenging data management requirements. Computer science research has emerged to resolve technical challenges, but proposed system designs could benefit from a deeper understanding of user workflows. Existing qualitative work on privacy compliance on the ground gives reason for both optimism and alarm: there is a growing community of knowledgeable privacy professionals, but their effectiveness is hindered by organizational dynamics. We conduct 10 semi-structured interviews with privacy experts to further understand the challenges privacy practitioners face. We find key challenges arising primarily from misaligned organizational incentives and difficulty in policy interpretation. We urge organizations to invest in and empower privacy engineers, researchers to explore different design directions, and policymakers to enable greater user recourse against corporations. We hope our work can help enable privacy-respecting institutions and systems.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>U.S. AI Policy - A Balancing Act</title>
<link href="https://hdl.handle.net/1721.1/151848" rel="alternate"/>
<author>
<name>Hetrick, Ryan T.</name>
</author>
<id>https://hdl.handle.net/1721.1/151848</id>
<updated>2023-08-24T03:07:28Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">U.S. AI Policy - A Balancing Act
Hetrick, Ryan T.
Artificial intelligence policy is emerging as a critical component of U.S. strategy and of national strategies around the world. What type of AI policy will allow the United States to continue to lead the world in AI innovation while acting ethically and responsibly? This work compares and contrasts 13 countries and how each government approaches innovation, regulation, government funding, and law scope in the field of artificial intelligence. A significant portion of this analysis evaluates the tradeoffs that come with AI policies and their effects on society. Considering these tradeoffs, the U.S. needs to ensure that innovation in artificial intelligence remains the top priority while balancing the ethical deployment of AI to protect U.S. citizens. With China on the heels of the United States in artificial intelligence capabilities, the United States needs to innovate further in foundation models, generative AI, human-machine interaction, natural language processing (NLP), computer vision, and other emerging areas of artificial intelligence.&#13;
&#13;
This thesis presents an in-depth analysis of foundation models and generative artificial intelligence, highlighting their importance and demonstrating their potential future impact. At the end of this body of work, a bill is proposed to U.S. lawmakers in Congress, titled “The Artificial Intelligence Startup, Innovation, Defense, Industry, and Academia Act (AI STIDIA Act),” which lays out a strategy for the United States to drive significant innovation in the field of artificial intelligence while deploying it in an ethical and responsible manner. The United States needs to prioritize ethical innovation in artificial intelligence and cannot afford to put in place ineffective regulatory frameworks that curtail innovation. There may come a time when the technology exists to regulate artificial intelligence extensively; as of this writing, it does not. As the United States aims to build the most innovative AI systems and create a culture that encourages the ethical deployment of AI, we should learn from past successes and failures in innovating technology. The United States needs to focus on creating AI technologies that enhance the wellbeing of U.S. citizens and people around the world.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unlocking the Potential of Hydrogen in Intermittent Electricity Systems: A Global Assessment of Levelized Cost of Hydrogen and Low Carbon Industrial Hub Profitability</title>
<link href="https://hdl.handle.net/1721.1/151847" rel="alternate"/>
<author>
<name>Liu, Qingyang</name>
</author>
<id>https://hdl.handle.net/1721.1/151847</id>
<updated>2023-08-24T03:15:48Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Unlocking the Potential of Hydrogen in Intermittent Electricity Systems: A Global Assessment of Levelized Cost of Hydrogen and Low Carbon Industrial Hub Profitability
Liu, Qingyang
Recently, numerous countries have announced ambitious hydrogen production targets among their clean energy transition objectives, recognizing the potential of hydrogen in decarbonization. However, significant uncertainty remains regarding cost predictions for hydrogen production and the economic viability of green-hydrogen-enabled industrial hubs at higher levels of intermittent renewable energy penetration. This study assesses the levelized cost of hydrogen generated through polymer electrolyte membrane electrolysis, accounting for regional variations, technology learning, energy intermittency, and policy incentives such as those provided by the Inflation Reduction Act. We also evaluate the profitability and market viability of using co-located hydrogen to decarbonize aluminum and steel production in renewable-powered industrial hubs across suitable regions worldwide. To accomplish this, we develop a generalizable cost model that identifies the optimal hydrogen production capacity factor and levelized cost of hydrogen under different levels of grid electricity volatility, and construct a regional hour-by-hour prioritized dispatch model to simulate a low-carbon industrial hub primarily powered by wind and solar and supported by storage and firming. The results demonstrate that in the regions considered, the levelized cost of hydrogen remains high until 2040 but can be reduced to meet the $2/kg production cost target in the coming years through operating-capacity optimization and policy incentives. Moreover, the optimal capacity yielding the lowest levelized cost of hydrogen is negatively correlated with electricity price volatility, highlighting hydrogen’s potential as a cost-effective means of absorbing fluctuations in grid electricity prices. Our analysis further reveals that hydrogen is most economically viable for industrial hubs when integrated with an industry in which hydrogen serves both as a material input and as a storage mechanism, as exemplified by green steel manufacturing with the hydrogen-based direct-reduced-iron, electric-arc-furnace process. Finally, an analysis of past policies, geopolitical interests, and resource exploitation in developing countries highlights additional political and social considerations in hydrogen policymaking from a global development perspective.
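A sketch of a levelized-cost-of-hydrogen calculation of the general form used in studies like this one, with every input an illustrative placeholder rather than a result of the thesis:

    # LCOH = annualized capital + operating + electricity cost, per kg H2.
    capex_per_kw    = 900.0    # electrolyzer capital cost, $/kW
    crf             = 0.08     # capital recovery factor
    fixed_om_frac   = 0.02     # fixed operating cost, fraction of capex per year
    capacity_factor = 0.45     # the decision variable optimized in the study
    elec_price      = 30.0     # average purchased electricity price, $/MWh
    kwh_per_kg      = 52.0     # PEM electricity use per kg H2 (assumption)

    kg_per_kw_year = 8760.0 * capacity_factor / kwh_per_kg
    annual_cost = (capex_per_kw * (crf + fixed_om_frac)
                   + 8760.0 * capacity_factor * elec_price / 1000.0)
    print(f"LCOH: {annual_cost / kg_per_kw_year:.2f} $/kg")   # about 2.75 here

Raising the capacity factor spreads the capital charge over more kilograms but forces production into higher-priced grid hours, which is the tradeoff behind the optimal-capacity result described above.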
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Systems-Level Analysis of Algorithmic Regulation</title>
<link href="https://hdl.handle.net/1721.1/151846" rel="alternate"/>
<author>
<name>Yew, Rui-Jie</name>
</author>
<id>https://hdl.handle.net/1721.1/151846</id>
<updated>2023-08-24T03:16:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Systems-Level Analysis of Algorithmic Regulation
Yew, Rui-Jie
Algorithmic tools are being wielded in the name of regulatory values as regulatory tools are lagging in mitigating the impacts of algorithmic systems. In this thesis, I characterize and evaluate the systemic relationship between regulation and algorithmic technologies in two parts. In Part I, I uncover the current mismatched application of laws to algorithmic systems and propose resulting implications and mitigations. In Part II, I consider regulatory design for emerging technologies that incentivizes efforts toward increasing the foreseeability of harm. &#13;
&#13;
While each chapter centers on the interplay between different regulations and algorithmic technologies, the problems uncovered and the solutions proposed generalize to reasoning about algorithmic regulation as a whole. This analysis highlights the unexpected ways that regulations can shape incentives for algorithmic development, as well as the unexpected ways that algorithmic innovation can spark regulatory innovation.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Microvoid Localization with Explicit Finite Element Analysis</title>
<link href="https://hdl.handle.net/1721.1/151845" rel="alternate"/>
<author>
<name>Snow, Brandon D.</name>
</author>
<id>https://hdl.handle.net/1721.1/151845</id>
<updated>2023-08-24T03:24:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Modeling Microvoid Localization with Explicit&#13;
Finite Element Analysis
Snow, Brandon D.
Ductile fracture is characterized by the micromechanisms of void nucleation, growth, and coalescence. Numerical modeling of these micromechanical phenomena has typically been performed with an implicit finite element analysis (FEA) solver, which accurately produces quasi-static results, but introduces limitations in terms of computational expense and the ability to model highly non-linear processes. This has resulted in the majority of studies being restricted to single void representative volume elements (RVEs) under applied stress-states with moderate to large triaxialities. In this thesis, those limitations are largely overcome by solving microvoid localization problems with the explicit FEA method. A general framework for applying periodic boundary conditions and controlling the RVE-average stress-state is introduced and applied to both implicit and explicit FEA. Then, using the framework, a comparison of implicit and explicit FEA simulations of microvoid localization demonstrates that quasi-static results can be produced using the explicit FEA method with a significant reduction in computational cost. General guidelines for performing quasi-static explicit FEA simulations of RVE behavior are established that should be generally applicable to a wide range of micromechanics problems. Lastly, the explicit FEA method is applied to more complex RVEs which contain internal elastic particles. The simulations demonstrate that the presence of internal particles can accelerate microvoid localization under low triaxiality deformation, especially for low strain hardening materials. The results of the particle-containing RVE simulations have implications for precipitation strengthened alloys as well as metal matrix composites that utilize hard particles/phases to increase strength. The simulation capabilities introduced in this work highlight new opportunities to improve our collective understanding of ductile failure which will hopefully lead to better predictive capabilities and new materials designed to resist fracture.
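One widely used guideline for quasi-static explicit runs of the kind described here is to require the kinetic energy history to stay a small fraction of the internal energy; a sketch of that check (the 5% tolerance is a common rule of thumb, not necessarily the thesis's threshold):

    import numpy as np

    def quasi_static_ok(kinetic, internal, tol=0.05):
        """True when kinetic energy never exceeds tol * internal energy."""
        ratio = np.asarray(kinetic) / np.maximum(np.asarray(internal), 1e-12)
        return not bool((ratio > tol).any())

    ke = [0.0, 0.8, 1.1, 0.9]       # kinetic energy history, J (illustrative)
    ie = [1e-9, 40.0, 90.0, 120.0]  # internal energy history, J
    print(quasi_static_ok(ke, ie))  # True: inertia is negligible throughout

If the check fails, the usual remedies are slower loading ramps or reduced mass scaling, both of which trade back some of the speedup that motivated the explicit solver.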
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effect of 3D Architecture on Energy Dissipation during High-speed Particle Impact</title>
<link href="https://hdl.handle.net/1721.1/151844" rel="alternate"/>
<author>
<name>Butruille, Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/151844</id>
<updated>2023-08-24T03:17:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Effect of 3D Architecture on Energy Dissipation during High-speed Particle Impact
Butruille, Thomas
Ultralight mechanical metamaterials enabled by additive manufacturing (AM) have previously achieved density-normalized strength and stiffness properties that are inaccessible to monolithic materials, but the majority of this work has focused on static loading, while the mechanical properties of these metamaterials under extreme dynamic loading conditions have remained largely unexplored. Here, using supersonic microparticle impact, the impact responses of different 3D-printed microscale architectures are compared to each other and to a non-architected, mass-equivalent sample to examine the effect of architecture on material impact response. This response is analyzed in a mass-normalized context and in a dimensionless context analogous to (spatially confined) planetary impact. Ultra-high-speed imaging and post-impact scanning electron microscopy reveal qualitative differences in the energy dissipation mechanisms between impacts on architected and bulk materials. Additional uniaxial compression experiments on equivalent architected samples help separate the energy dissipation components during impact. This investigation could inform the design of lightweight materials for energy mitigation applications such as armor and protective coatings.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Application of A System Dynamic Model on U.S. Regional Real Estate Industry</title>
<link href="https://hdl.handle.net/1721.1/151843" rel="alternate"/>
<author>
<name>Zhang, Tianyi</name>
</author>
<id>https://hdl.handle.net/1721.1/151843</id>
<updated>2023-08-24T03:26:41Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Application of A System Dynamic Model on U.S. Regional Real Estate Industry
Zhang, Tianyi
This research strives to explore and simulate the dynamics of regional real estate markets within the United States using the system dynamics methodology. Building upon the original model by John Sterman, the study expands it by introducing new structures related to construction-in-progress, unit prices, alternative funds, and sales. The model undergoes calibration utilizing historical data from 1975 to 2021, with a focus on its capacity to simulate key parameters such as start rate, construction-in-progress rate, construction rate, and price. Although calibration fitness demonstrates a reliable match for trends, it exhibits limitations in representing dynamics over short time periods and seasonality. Utilizing the calibrated model, the study generates forecasts for future real estate market trends under three scenarios: baseline, standard growth, and elevated interest rate. The forecast results emphasize the influential role of space demand, the effect of interest rates on prices, and the reinforcing feedback loop of future prices. The study highlights potential avenues for model enhancement and establishes a foundation for subsequent research.
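A minimal stock-and-flow sketch in the spirit of the model described above, with a construction delay and a price-pressure feedback on housing starts; parameters and functional forms are illustrative only:

    # Euler-integrated stock-and-flow loop: starts respond to shortage,
    # construction completes with a first-order delay, stock depreciates.
    dt, T = 0.25, 40.0                       # quarter-year steps, 40 years
    stock, wip = 1.0e6, 5.0e4                # completed units, work in progress
    demand, tau_build = 1.05e6, 2.0          # desired stock, build time (years)

    for _ in range(int(T / dt)):
        price_pressure = demand / stock      # above 1.0 signals a shortage
        start_rate = 2.0e4 * price_pressure  # units/year, responds to pressure
        completion_rate = wip / tau_build    # first-order construction delay
        wip   += dt * (start_rate - completion_rate)
        stock += dt * (completion_rate - 0.01 * stock)  # 1%/yr demolition

    print(f"stock after {T:.0f} years: {stock:,.0f} units")

Calibration then amounts to fitting parameters like the start-rate gain and build time against the historical series, which is where the trend-versus-seasonality limitations noted above arise.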
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Empirical Evaluation of Social Network Sensors on Twitter During the Russia-Ukraine Conflict</title>
<link href="https://hdl.handle.net/1721.1/151840" rel="alternate"/>
<author>
<name>Ahlers, Miranda Nicolle</name>
</author>
<id>https://hdl.handle.net/1721.1/151840</id>
<updated>2023-08-24T03:41:47Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Empirical Evaluation of Social Network Sensors on Twitter During the Russia-Ukraine Conflict
Ahlers, Miranda Nicolle
The immense magnitude of information sharing, paired with increased privacy considerations, has rendered global monitoring of social media platforms virtually infeasible. Heuristic algorithms grounded in the friendship paradox provide simple, accessible methods for strategically sampling information from platforms while requiring knowledge of only the local network structure. However, it remains unclear how well such algorithms perform in contexts where the spread of information involves both exogenous and endogenous modes of propagation.&#13;
&#13;
Herein, I evaluate the ability of randomly selected friends of random users to provide early awareness of discussions related to the Russia-Ukraine conflict on Twitter. I find that while selected sensors are more centrally located within the Twitter network, they fail to reliably provide early awareness of conflict-related hashtags. Lack of performance is exacerbated when only early adopters from each group are included in evaluations. Additionally, I find that the difference in time of adoption between control and sensor groups provides limited information about how popular a hashtag will become. Further, I propose a framework for using early participation in conflict discourse to condition the selection of sensors for future war-related trends – exploring both friendship and prior retweet connections as potential sensors. I then outline two systematic approaches for objectively quantifying the value of information acquired from selected sensor groups – a count-based approach and a predictive modeling framework. Ultimately, I find that both local and retweet sensors significantly reduce the noise of information produced by a random control group while effectively capturing over 80% of hashtags that become widely shared.
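A sketch of the friendship-paradox sensor heuristic evaluated here: sample random users, then nominate a random friend of each as a sensor, since friends of random users tend to be better connected than the users themselves (toy Barabási-Albert graph; networkx used for illustration):

    import random
    import networkx as nx

    def friend_sensors(G, n_seeds, seed=0):
        """Pick one random friend of each of n_seeds random users."""
        rng = random.Random(seed)
        users = rng.sample(list(G.nodes), n_seeds)
        return {rng.choice(list(G.neighbors(u))) for u in users if G.degree(u) > 0}

    G = nx.barabasi_albert_graph(10_000, 3, seed=1)   # heavy-tailed toy network
    sensors = friend_sensors(G, 200)
    controls = random.Random(2).sample(list(G.nodes), len(sensors))
    mean_deg = lambda S: sum(G.degree(v) for v in S) / len(S)
    print(f"sensor mean degree: {mean_deg(sensors):.1f}  "
          f"control mean degree: {mean_deg(controls):.1f}")

The higher mean degree of the sensor group is the property that is supposed to buy early awareness; the evaluation above asks whether it actually does so for conflict-related hashtags.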
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bending the ICT curve: Evaluating options to achieve 2030 sector-wide climate goals &amp; projecting new technology impacts</title>
<link href="https://hdl.handle.net/1721.1/151839" rel="alternate"/>
<author>
<name>Bell, Allison</name>
</author>
<id>https://hdl.handle.net/1721.1/151839</id>
<updated>2023-08-24T03:34:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Bending the ICT curve: Evaluating options to achieve 2030 sector-wide climate goals &amp; projecting new technology impacts
Bell, Allison
The global impact of the information and communications technology (ICT) sector is both growing and changing as computing technologies continue to develop and industry leaders make more efforts toward emissions reductions. Recent work highlights the increasing importance of manufacturing emissions to the total impact of computing systems, but the tradeoff space in which decisions made to reduce emissions or energy in one part of a device lifecycle might increase emissions or energy demand in another remains largely unexplored. We evaluate several options for global impact reduction within the ICT sector, namely within data center (server) and smartphone footprints, focusing both on the maximal potential impact of each intervention and on the associated tradeoffs and limitations. We find that the ICT sector’s 2030 target of a 45% emissions reduction from 2020 levels is potentially achievable through the mechanisms proposed, including renewable energy for operation, low-carbon electricity for manufacturing, extended device lifetimes, and the harnessing of energy efficiency improvements for impact reduction. In addition, we propose a method for evaluating the total carbon footprint benefits of a new computing technology through a detailed case study of a prototypical analog accelerator device. We provide an example of underspecified estimation of scaled device manufacturing impacts obtained through a reorganization of existing process emissions data. We then demonstrate the use of that estimate to evaluate the benefits of adopting the new technology from the perspective of total footprint reduction under varying device usage conditions. Both our framework for estimating global ICT sector impact reduction strategies and our framework for assessing tradeoffs associated with new computing technology adoption are intended to serve as starting points for continued discussion and to align different, often siloed, stakeholders within the computing industry toward effectively “bending the curve” of ICT sector emissions growth.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing Urban Air Mobility Operations in a Corridor Network</title>
<link href="https://hdl.handle.net/1721.1/151838" rel="alternate"/>
<author>
<name>McDonald, Spencer T.</name>
</author>
<id>https://hdl.handle.net/1721.1/151838</id>
<updated>2023-08-24T03:36:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Optimizing Urban Air Mobility Operations in a Corridor Network
McDonald, Spencer T.
Electric vertical-takeoff-and-landing vehicles give rise to Urban Air Mobility (UAM) concepts. Initial UAM systems are projected to operate high volumes of flights within a capacitated corridor network. This paper develops a tractable methodology to optimize vehicle dispatching and routing in UAM networks, along with flight trajectories between origin and destination, as well as flow directionality in corridors. We formulate an integer optimization model in a time-space-network that exploits a subpath structure at the flight level. We propose a column generation algorithm to decompose vehicle dispatching decisions in a master problem and flight trajectories in a pricing problem, using a tailored backward label-setting algorithm. This methodology scales to practical instances, with up to 50 vertiports, hundreds of corridor conjunctions, and 1,000 trip requests. We develop a data-driven experimental setup capturing real-world travel demand, air traffic infrastructure and weather patterns. Results demonstrate the benefits of the comprehensive optimization approach developed in this paper, as compared to benchmarks that do not capture flow directionality or operating capacities. This methodology identifies the bottlenecks created by legacy corridors, horizontal separation requirements and adverse weather to inform the design of emerging UAM systems.
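To make the time-space-network idea concrete, a toy pricing-problem instance in which nodes are (vertiport, time) pairs and the cheapest trajectory, including holding at a vertiport, is a shortest path over arc reduced costs; a plain shortest-path call stands in here for the tailored backward label-setting algorithm:

    import networkx as nx

    # Arcs: fly between vertiports in one period, or hold in place.
    # Costs are illustrative reduced costs from a master-problem iteration.
    G = nx.DiGraph()
    G.add_edge(("A", 0), ("B", 1), cost=3.0)   # fly A to B
    G.add_edge(("A", 0), ("A", 1), cost=0.5)   # hold at A
    G.add_edge(("A", 1), ("B", 2), cost=1.0)
    G.add_edge(("B", 1), ("C", 2), cost=2.0)
    G.add_edge(("B", 2), ("C", 3), cost=2.0)

    path = nx.shortest_path(G, ("A", 0), ("C", 3), weight="cost")
    print(path)  # holding first, then flying A-B-C, is cheapest here

In the full method, corridor capacities and flow directionality live in the master problem, and the pricing step repeatedly solves trajectory problems of this shape to propose new columns.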
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preventing Stern Tube Corrosion through Shipboard Cathodic Protection</title>
<link href="https://hdl.handle.net/1721.1/151835" rel="alternate"/>
<author>
<name>Bishop, Michael James</name>
</author>
<id>https://hdl.handle.net/1721.1/151835</id>
<updated>2023-08-24T03:26:07Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Preventing Stern Tube Corrosion through Shipboard Cathodic Protection
Bishop, Michael James
Cathodic protection extends the life of ships and decreases the cost of maintaining a fleet. Ships built from less noble metals, such as steel, corrode rapidly and require protection. This work presents an analysis of the cathodic protection of a stern tube, a complex area to protect, and examines whether impressed-current cathodic protection can help a U.S. Coast Guard Fast Response Cutter with localized corrosion. Furthermore, this work presents multiple methods for studying the effectiveness of cathodic protection, including COMSOL Multiphysics modeling, polarization experiments, and sacrificial anode wastage estimation. Lastly, nonintrusive load monitoring, with its diagnostic capabilities, provides an opportunity to advance the complicated study of corrosion protection. A nonintrusive load monitor (NILM) samples the voltage and current at the utility point and then computes real and reactive power, harmonic content, and system operating frequency. This work expands upon previous successes with NILM, namely its ability to collect high-bandwidth data and generate an automatic log of shipboard load operation. The record of energy consumption provided by a NILM gives designers in the seagoing services a continuously evolving picture of present and future power requirements, can shift some or all responsibility away from watchstanders, and provides data for corrosion research.
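A sketch of the core power computation a NILM performs from voltage and current samples at the utility point, using synthetic waveforms with a lagging current; a real feed would carry harmonic content on top of the fundamental:

    import numpy as np

    fs, f0 = 8000.0, 60.0                        # sample rate, line frequency (Hz)
    t = np.arange(0, 1.0, 1.0 / fs)
    v = 170.0 * np.sin(2 * np.pi * f0 * t)                # volts
    i = 12.0 * np.sin(2 * np.pi * f0 * t - np.pi / 6)     # amps, 30-degree lag

    P = np.mean(v * i)                               # real power, W
    S = np.sqrt(np.mean(v ** 2) * np.mean(i ** 2))   # apparent power, VA
    Q = np.sqrt(S ** 2 - P ** 2)                     # reactive power, var
    print(f"P = {P:.0f} W, Q = {Q:.0f} var, power factor = {P / S:.3f}")

Tracking P and Q per load over time is what lets the monitor distinguish equipment signatures and build the automatic operating log described above.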
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Detrainment and Settling of Sediment in Turbidity Currents: A Study to Inform Deep Seabed Mining</title>
<link href="https://hdl.handle.net/1721.1/151833" rel="alternate"/>
<author>
<name>Cathcart, Kelsey O'Brien</name>
</author>
<id>https://hdl.handle.net/1721.1/151833</id>
<updated>2023-08-24T03:41:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Detrainment and Settling of Sediment in Turbidity Currents: A Study to Inform Deep Seabed Mining
Cathcart, Kelsey O'Brien
Deep-sea mining for high-demand minerals has recently become a topic of global conversation regarding its potential monetary value, geopolitical cooperation, and environmental impact. The governing body of the deep ocean, the International Seabed Authority (ISA), has for several years sought both to protect the deep sea from potentially harmful practices and to approve practices for mineral extraction. The recognition that much remains to be learned, about the deep ocean itself and about best practices for mining it, has led to several research projects intended to inform researchers, and in turn the ISA, on how to proceed with the smallest possible human footprint on this vast ecosystem. Both exploratory deep-ocean and laboratory research have made it apparent that the creation, and subsequent travel, of turbidity currents across the seabed as a result of deep-sea mining will lead to impacts on a scale that is not yet entirely understood. Building on decades of studies of gravity currents (both related and unrelated to deep-sea mining), this thesis and its experiments focus not on the head of the gravity current but rather on its tail, observing the detrainment and settling that occur after the current has been created (or released). The study of how these particles settle will inform the deep-sea mining field and the ISA about the potential environmental impact of this new practice and how best to move forward with potential deep-seabed exploitation under science-informed practices and regulations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Forecasting Wind Direction Across Very Short and Short Term Time Horizons for Wind Turbine Control</title>
<link href="https://hdl.handle.net/1721.1/151831" rel="alternate"/>
<author>
<name>Fiallo Van Eenenaam, Ana Cristina</name>
</author>
<id>https://hdl.handle.net/1721.1/151831</id>
<updated>2023-08-24T03:01:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Forecasting Wind Direction Across Very Short and&#13;
Short Term Time Horizons for Wind Turbine Control
Fiallo Van Eenenaam, Ana Cristina
Wind energy systems will require improved efficiency to meet future electricity demands in alignment with net-zero goals. Wind turbine yaw control systems can provide incremental improvements in power efficiency for both individual turbines and the collective wind farm. Yaw control strategies commonly apply low-pass filters to the wind directions observed by the turbine over the most recent 10 minutes to determine the turbine's reaction to the incident wind, yet the inability to anticipate future changes in wind direction leads to suboptimal power production. While recent literature has explored wind speed and power forecasting, methods specific to forecasting wind direction are studied less frequently. This thesis utilizes high-resolution LiDAR data provided by the Woods Hole Oceanographic Institution Air-Sea Interaction Tower to assess the predictive performance of three methods for deterministic wind direction forecasting: persistence, autoregressive moving average (ARMA), and ridge regression. The models were tested on several time horizons (ΔT) relevant to turbine control and operation, ranging from 30 seconds to 2 hours. In this thesis, persistence demonstrated the highest predictive accuracy among the three models across all evaluated timescales. Generally, for ΔT under 5 minutes, ARMA was next-best and outperformed ridge regression; for ΔT over 5 minutes, ridge regression outperformed ARMA but remained worse than persistence. Lastly, a comparison of forecasting performance across several elevations showed inconsistent results across the testing frameworks employed here, suggesting that future work should continue evaluating model performance across heights. Future work should also further develop deterministic and stochastic data-driven strategies for wind direction forecasting across short-term time horizons, and assess their impact on individual turbine and farm power efficiency.
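One subtlety shared by all three forecasting methods is that wind direction is circular; a sketch of the persistence baseline with a proper circular mean, so that directions wrapping through north do not average to the opposite sector (window and samples illustrative):

    import numpy as np

    def persistence_forecast(directions_deg, window):
        """Circular mean of the trailing window, in degrees [0, 360)."""
        recent = np.radians(directions_deg[-window:])
        mean = np.arctan2(np.sin(recent).mean(), np.cos(recent).mean())
        return np.degrees(mean) % 360.0

    def angular_error(pred, true):
        """Smallest signed angle between forecast and outcome, degrees."""
        return (true - pred + 180.0) % 360.0 - 180.0

    hist = np.array([352.0, 355.0, 1.0, 4.0, 8.0])   # wrapping through north
    print(persistence_forecast(hist, window=5))       # near 0, not the naive 144

The same wrap-aware error metric is what makes accuracy comparisons between persistence, ARMA, and ridge regression meaningful near the 0/360 boundary.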
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Generative Structural Design: An Algorithmic Approach to Synthesizing and Optimizing Steel Lateral Systems</title>
<link href="https://hdl.handle.net/1721.1/151830" rel="alternate"/>
<author>
<name>Hirt, Natasha K. (Natasha Karolina)</name>
</author>
<id>https://hdl.handle.net/1721.1/151830</id>
<updated>2025-12-02T19:27:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Generative Structural Design: An Algorithmic Approach to Synthesizing and Optimizing Steel Lateral Systems
Hirt, Natasha K. (Natasha Karolina)
Mitigating the immense environmental impact of the built environment is an important objective for the architecture, engineering, and construction industries. As initial decisions around layout and configuration have significant effects on the structural efficiency of buildings and are difficult to revise later in the design process, it is essential to provide designers with accurate material quantity and embodied carbon estimates at early design stages. The diversity of architectural expression and complexity of structural calculation has made it challenging to develop a tool that is sufficiently accurate, adaptive, and automated to accomplish this goal.&#13;
&#13;
This thesis presents a methodological and an analytical contribution. A novel generative structural design method is proposed, taking low-fidelity inputs, such as those that might be considered during early-stage design, and outputting a high-fidelity structural model that can be analyzed and iterated. The algorithm is tested on 233 structures drawn from in-the-wild and synthetic datasets, and a comparative analysis is performed across five lateral system typologies. The findings correspond with the literature, verifying the premium for height proposed by Khan as well as Samyn’s slenderness premium.&#13;
&#13;
The analysis demonstrates the utility of synthetic structural system design for individual building analysis and generates new knowledge about the relative efficiencies of different lateral system typologies at a range of heights. The method evaluates how computational tools, such as design space visualization and topology optimization, may be realistically integrated into generative algorithms. Finally, the rich data produced with generative structural design reveals new ways to visualize, analyze, and understand the ways in which designers’ choices affect the ultimate efficiency and environmental impact of built structures.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparing Optimized Perimeter Steel Bracing of Tall Buildings under Different Seismic Regions</title>
<link href="https://hdl.handle.net/1721.1/151829" rel="alternate"/>
<author>
<name>Medina, Chelsea Karina</name>
</author>
<id>https://hdl.handle.net/1721.1/151829</id>
<updated>2023-08-24T03:24:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Comparing Optimized Perimeter Steel Bracing of Tall Buildings under Different Seismic Regions
Medina, Chelsea Karina
There are challenges associated with building high-rise structures sustainably and safely, especially in seismic regions, where structures face extreme loading conditions. One promising approach is topology optimization, which determines the optimal material distribution to achieve desired performance criteria under given constraints. However, implementing topology optimization for real-life structures under seismic design codes is challenging due to multiple nonlinear constraints, discrete variables, and high computational cost. Recently, there have been several attempts to use topology optimization for seismic design; a notable contribution is the work of Amory Martin (2020), which proposed a method called the sum of modal compliances to optimize a steel lateral frame system in tall buildings for seismic design. The focus of this work is to expand upon that method, generating lateral frame systems for tall buildings from response spectra in different seismic regions rather than from an idealized design spectrum. The structural performance of the optimized framing layouts was further verified through nonlinear analysis, which indicated that they have the potential to outperform traditional bracing systems under seismic excitation, a trend observed across multiple seismic regions in North America. This research has important implications, as the use of topology optimization in designing lateral braced frames for tall buildings under seismic excitation could help develop safer and more sustainable structures, reducing embodied carbon while maximizing construction revenue.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Comparing Phylogenetic and Deep Learning Methods to Predict Seed Dispersal Mode</title>
<link href="https://hdl.handle.net/1721.1/151828" rel="alternate"/>
<author>
<name>Xu, Haodi</name>
</author>
<id>https://hdl.handle.net/1721.1/151828</id>
<updated>2023-08-24T03:45:43Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Comparing Phylogenetic and Deep Learning Methods to Predict Seed Dispersal Mode
Xu, Haodi
Increasing tree cover is a promising natural climate solution for reducing atmospheric carbon amid pressing global warming. Seed dispersal is a key process in natural forest regrowth, where seeds are moved away from parent plants to establish new growth. Dispersal modes include biotic and abiotic methods, and vary depending on traits such as seed shape, size, and color. However, globally, data on the seed dispersal modes of plant species are limited, hindering our understanding of the importance of wild animals in increasing tree cover and their role in carbon sequestration. The research goal of this study is to find a method to predict unknown seed dispersal modes with high accuracy by comparing a novel deep learning method with a typical phylogenetic imputation method. Here we show that the phylogenetic imputation method performed better than deep learning methods in predicting biotic seed dispersal mode. However, we also found that the deep learning methods demonstrate great potential in learning from community science photographs, despite their underperformance in this study. Furthermore, the study shows that incorporating a feature-extraction model could improve the predictions of a single CNN model, highlighting the potential for future studies to include more models for better predictions of seed dispersal modes. We anticipate that the problems and potential improvements identified in this study relating to the deep learning method will serve as a starting point for further model development to predict the seed dispersal mode of unknown species with greater accuracy. This could involve applying multiple models, incorporating phylogenetic information with deep learning models, and including additional features. Accurately understanding how different plant species are dispersed can help scientists better predict future forest dynamics and carbon storage capacity, which is critical for studying future climate change and developing effective climate change mitigation strategies.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Consistent Estimators for Learning to Defer to an Expert</title>
<link href="https://hdl.handle.net/1721.1/151827" rel="alternate"/>
<author>
<name>Mozannar, Hussein</name>
</author>
<id>https://hdl.handle.net/1721.1/151827</id>
<updated>2023-08-24T03:49:19Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Consistent Estimators for Learning to Defer to an Expert
Mozannar, Hussein
Learning algorithms are often used in conjunction with expert decision makers in practical scenarios; however, this fact is largely ignored when designing these algorithms. In this thesis, we explore how to learn predictors that can either predict or choose to defer the decision to a downstream expert. Given only samples of the expert's decisions, we give a procedure based on learning a classifier and a rejector and analyze it theoretically. Our approach is based on a novel reduction to cost-sensitive learning, where we give a consistent surrogate loss for cost-sensitive learning that generalizes the cross-entropy loss. We show the effectiveness of our approach on a variety of experimental tasks.
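A minimal Python sketch of the classifier-plus-rejector idea, treating deferral as one extra class trained with a cross-entropy-style surrogate; the function names and the exact loss form are illustrative assumptions, not the thesis's code:

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def defer_surrogate_loss(logits, y, expert_correct):
        # logits: scores over K classes plus one extra "defer" slot (length K+1)
        # y: true label index; expert_correct: whether the expert got this example right
        p = softmax(logits)
        loss = -np.log(p[y])           # reward predicting the true label
        if expert_correct:
            loss -= np.log(p[-1])      # also reward deferring when the expert is right
        return loss

    # at test time, defer whenever the defer slot has the highest score
    logits = np.array([1.2, 0.3, -0.5, 2.0])   # 3 classes + defer
    print(defer_surrogate_loss(logits, y=0, expert_correct=True))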
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integration and Implementation of Conceptual Design Tools for Naval Warships</title>
<link href="https://hdl.handle.net/1721.1/151826" rel="alternate"/>
<author>
<name>Cathcart IV, John Harris</name>
</author>
<id>https://hdl.handle.net/1721.1/151826</id>
<updated>2023-08-24T03:59:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Integration and Implementation of Conceptual Design Tools for Naval Warships
Cathcart IV, John Harris
The Naval Construction and Engineering Program (2N) has relied on the Advanced Ship and Submarine Evaluation Tool (ASSET) as the primary tool for completing concept design projects for naval warships. ASSET is no longer supported by the U.S. Navy, and the Naval Concepts Requirements and Exploration (C&amp;RE) tool has been identified as a feasible replacement. Incorporating the C&amp;RE tool into the 2N program is part of new collaborative naval architecture research between Virginia Tech and MIT that is further supported by Naval leadership at NAVSEA, Naval Surface Warfare Center Carderock, Naval Surface Warfare Center Dahlgren, and others. The C&amp;RE tool has been converted for further use in MIT’s 2N program and is now available to all students for future warship design projects. Furthermore, a novel design tool is introduced that is capable of assisting naval architects in accurately and efficiently completing the preliminary arrangements of vital engineering and combat system components. A case study for a new medium-sized surface combatant is conducted as a validation of both the C&amp;RE tool for 2N use and the application of the preliminary arrangements tool.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design, construction and testing of a pumped molten chloride circulation loop</title>
<link href="https://hdl.handle.net/1721.1/151825" rel="alternate"/>
<author>
<name>Bichnevicius, Michael</name>
</author>
<id>https://hdl.handle.net/1721.1/151825</id>
<updated>2023-08-24T03:43:12Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design, construction and testing of a pumped molten chloride circulation loop
Bichnevicius, Michael
Next generation concentrating solar power (CSP) with thermal energy storage is envisioned to operate with a peak temperature of 700 °C or higher, but transitioning from a stainless steel to a nickel alloy infrastructure is prohibitively expensive. In light of this challenge, the present work investigates the use of refractory materials instead of nickel alloys. This thesis presents the design, construction and testing of a laboratory-scale pumped circulation loop made of refractory materials to circulate molten chloride salt above 700 °C. The components of the loop were initially constructed from conventional refractory materials such as graphite, carbon-carbon composite, molybdenum, and alumina, though the loop was subsequently adapted to test a laboratory-scale tank and pipe made of a novel calcium hexaluminate-based castable refractory designed to resist corrosion and penetration by molten chloride salt. In addition, this thesis describes the design and operation of a convection-enhanced rotating disk corrosion test apparatus to study the corrosion behavior of refractory materials in molten chloride salt under flowing conditions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Preliminary Shipboard Layout of Navy Integrated Power and Energy Corridor (NiPEC)</title>
<link href="https://hdl.handle.net/1721.1/151824" rel="alternate"/>
<author>
<name>Kruse, Matthew Thomas</name>
</author>
<id>https://hdl.handle.net/1721.1/151824</id>
<updated>2023-08-24T03:49:52Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Preliminary Shipboard Layout of Navy Integrated Power and Energy Corridor (NiPEC)
Kruse, Matthew Thomas
Naval ship systems increasingly require more electricity. The Zumwalt class destroyer was the Navy’s first modern fully electric ship. Through its integrated power system, the prime movers provide electric power to meet propulsion, ship service, offensive, and defensive systems requirements. The next generation destroyer, DDG(X), is also planned to be an electric ship. The ships of the future can thus be anticipated to employ 100 megawatts (MW) or more of electric power. With such a rise in electrical power comes the requirement to move electricity efficiently over compact and reliable power distribution systems.&#13;
&#13;
To increase a ship’s electrical infrastructure density, MIT is developing a new electrical power distribution structure called the Navy Integrated Power and Energy Corridor (NiPEC). The distribution cables, load centers, power panels, and power conditioners are all co-located into the NiPEC [1]. This allows electrical energy to be efficiently routed through the ship and increases electrical redundancy. Individual NiPEC sections will fit into reserve-space ship locations and may use the new Navy Integrated Power Electronics Building Block (iPEBB) to control and condition power. The NiPEC will include space to accommodate future power requirements with little refit needed to the ship or the power corridor.&#13;
&#13;
This thesis used a notional ship developed by the Electric Ship Research and Development Consortium (ESRDC), past research into NiPEC electrical components, open source military specifications, and open source literature to build a power corridor concept 3D model within a single ship compartment. As this is the first 3D model concept, all components were based on existing technology to establish a benchmark of size and power conversion density. Once a single power corridor compartment was modeled, the components were duplicated throughout the notional ship. The 3D concept includes major power corridor elements with attention given to ease of construction, maintenance, and repair.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Exploring the Role of Hydrogen in Decarbonizing Heavy Industry</title>
<link href="https://hdl.handle.net/1721.1/151820" rel="alternate"/>
<author>
<name>Benavides, Kali</name>
</author>
<id>https://hdl.handle.net/1721.1/151820</id>
<updated>2023-08-24T03:39:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Exploring the Role of Hydrogen in Decarbonizing Heavy Industry
Benavides, Kali
Hydrogen is increasingly being seized upon as a widespread decarbonization solution. There are a number of potential applications for hydrogen, and investments are being funneled into demonstration projects. In this thesis work I explore the economic competitiveness of hydrogen in two heavy industry applications: steelmaking and high temperature heating. These processes rely on fossil fuels for multiple attributes, and no other low carbon alternative fuel has all of these characteristics. I find that in all regions, low carbon hydrogen production costs are currently more expensive than fossil fuels. High temperature heating with hydrogen increases the cost of clinker by 58-225%, and raw glass by 16-73%. Applications of hydrogen in steelmaking increase steel costs by 24-90%. Cost ranges represent the different costs when using blue or green H₂. As a competing low carbon steel production pathway, I also assessed steelmaking with CCS, which increased steelmaking costs by ~14%. Using the MIT Economic Projection and Policy Analysis (EPPA) model, I examined the deployment of H₂-based steelmaking and steelmaking with CCS under a deep decarbonization policy scenario. Results show that at current costs deployment is limited prior to 2050. However, if costs are reduced then these technologies can deploy rapidly (achieving up to 100% of the share of global steel production by 2050). Adoption of decarbonization technologies is regionally specific and there can be regional advantages to deploying certain production pathways.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How to Go Greene: The Complex Dynamics of the Ongoing Transition in Southwestern Pennsylvania</title>
<link href="https://hdl.handle.net/1721.1/151819" rel="alternate"/>
<author>
<name>He, Yiran</name>
</author>
<id>https://hdl.handle.net/1721.1/151819</id>
<updated>2023-08-24T03:26:08Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">How to Go Greene: The Complex Dynamics of the Ongoing Transition in Southwestern Pennsylvania
He, Yiran
Coal mining is a huge part of the culture and identity in Greene County. Unprompted, when asked about their personal histories and backgrounds, people will mention family and ancestors who were coal miners, as a way of demonstrating their deep ties to the region. Now, the tax structure is such that much of the public infrastructure is suffering from the loss in value of coal assets.&#13;
&#13;
Those former heights of prosperity are diminishing. The oil and gas industry has not stepped up to replace the loss in tax revenue intake, nor the loss in jobs, nor has the industry provided funds for remediation of the lands it has damaged.&#13;
&#13;
The residents of Greene County are trying to forge a way to bring the community to a place where everyone who considers it home can stay, and build their home for their kids. From economic diversification, to environmental effects of the fossil industry, to workforce issues, to tax base considerations, to long-term education and planning and housing, every issue is at its core about what it means for Greene County to feel like home.&#13;
&#13;
Not everyone agrees on how to move forward. Some believe in a model where a large investment sparks other service industries to move in and build out the economic base. Others believe in a more grassroots, endogenous model, where residents build their own way out. I hear tensions between those who believe in the benefits fracking has brought to some residents through royalties, and those concerned about the lack of fresh food and clean water.&#13;
&#13;
Listening to and uplifting the stories of the community members is one way well-resourced institutions can begin to meaningfully engage with and contribute to local partners, through building trust and relationships. Looking to the future, I hope MIT and others can play supportive, collaborative roles, helping to build capacity and to empower communities to chart their own development.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Equity and Affordability Impacts of Building Performance Standards: A Case Study of New York City’s Local Law 97</title>
<link href="https://hdl.handle.net/1721.1/151817" rel="alternate"/>
<author>
<name>Shepard, Allison</name>
</author>
<id>https://hdl.handle.net/1721.1/151817</id>
<updated>2023-08-24T03:47:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Equity and Affordability Impacts of Building Performance Standards: A Case Study of New York City’s Local Law 97
Shepard, Allison
Building operations account for more than two-thirds of emissions in most U.S. cities. To reduce emissions from the building sector, cities and states are adopting Building Performance Standards. These laws require large buildings to comply with increasingly stringent emissions or energy limits, or otherwise pay a fine. Building Performance Standards are powerful decarbonization tools, but may inadvertently over-burden and increase the risk of displacement for low-income households. The risk is particularly acute in unsubsidized affordable housing, also known as naturally occurring affordable housing, where building owners can pass the costs of compliance or non-compliance on to tenants. In this thesis, I quantitatively and qualitatively examine the impact of New York City’s Building Performance Standard – Local Law 97 – on multifamily buildings. I find affordable housing buildings are less energy efficient than market rate buildings and that non-compliance penalties, retrofit costs, and increased energy costs from electrification may substantially increase housing costs for tenants who are already severely rent burdened and energy burdened. To prevent low-income households from shouldering these costs and to ensure they benefit from the other results of building decarbonization – such as health improvement and job creation – cities and states should provide financial and technical assistance, protect tenants, and incorporate flexibility for affordable housing owners to comply with Building Performance Standards.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Economic Advantage of Computer Vision Over Human Labor, and Its Market Implications</title>
<link href="https://hdl.handle.net/1721.1/151816" rel="alternate"/>
<author>
<name>Svanberg, Maja S.</name>
</author>
<id>https://hdl.handle.net/1721.1/151816</id>
<updated>2023-08-24T03:02:08Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Economic Advantage of Computer Vision Over Human Labor, and Its Market Implications
Svanberg, Maja S.
With the emergence of Artificial Intelligence (A.I.), our lives and economy are undergoing a profound transformation. While there are huge benefits to be realized by the technology, we must also prepare for shifting circumstances, including changes in market dynamics and the labor market. Thus, to inform policy, we need to understand and forecast the implementation of A.I.&#13;
&#13;
Previous forecasts of A.I. proliferation have focused on the technical feasibility of replacing human labor in existing tasks. However, since the decision to deploy a technology is ultimately an economic one, I develop a framework that compares the cost of A.I. to the cost of worker compensation. As such, this approach considers not only technical feasibility, but also the economic advantage of A.I. over human labor.&#13;
&#13;
Using the framework, I examine the case of Computer Vision in the U.S. non-farm economy, drawing on previous work on the cost of Computer Vision, as well as government data on wages, tasks, and the size of firms. The results suggest that while Computer Vision can replace human labor across sectors and industries, it will only have an economic advantage over human labor in the very largest enterprises. In smaller companies, the sum of task-specific employee compensation does not exceed system development costs. Data is identified as the main driver of total Computer Vision development costs, placing incumbent firms at an advantage in the race to realize the economies of scale that Computer Vision, and A.I. in general, enable.&#13;
&#13;
Based on my findings and related work on labor markets, I argue that automation is not the only way in which the introduction of A.I. could harm workers. Increased market concentration, stemming from data access being restricted to firms with existing operations as well as from enhanced production efficiency, might cause a systemic power shift from workers to firms. I point to the facilitation of industry data-sharing as a tool for policy-makers to mitigate these effects by lowering the barriers to entry into A.I.-centric markets.
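A stylized one-period version of the thesis's cost comparison, with illustrative numbers (not estimates from the thesis):

    def deploy_computer_vision(task_wage_bill: float, system_cost: float) -> bool:
        # Deploy only when the worker compensation attributable to the automated
        # task exceeds the cost of developing and running the vision system.
        return task_wage_bill > system_cost

    # the same task at a small firm versus a very large enterprise
    print(deploy_computer_vision(task_wage_bill=80_000, system_cost=250_000))     # False
    print(deploy_computer_vision(task_wage_bill=2_000_000, system_cost=250_000))  # True

Because development cost is largely fixed while the displaced wage bill scales with firm size, only the largest firms clear the threshold, which is the mechanism behind the market-concentration argument above.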
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>High-Precision Stress Measurement in Thin Films for X-Ray Mirrors</title>
<link href="https://hdl.handle.net/1721.1/151815" rel="alternate"/>
<author>
<name>Whalen, Mallory</name>
</author>
<id>https://hdl.handle.net/1721.1/151815</id>
<updated>2023-08-24T03:02:19Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">High-Precision Stress Measurement in Thin Films for X-Ray Mirrors
Whalen, Mallory
Future X-ray observatories aim to achieve sub-arcsecond angular resolution with unprecedented sensitivity. Silicon meta-shell optics technology will enable the X-ray astronomy instrumentation community to create such an observatory. The light-weighted silicon mirrors used in meta-shell optics have a low stiffness, which makes them susceptible to deformations caused by stress in their reflective coatings. Much research has been dedicated to figuring coated mirrors that have been deformed by their coatings and to creating low-stress coatings. These coatings need to be stable over decades, the length of the observatory's mission. However, the stress stability of candidate X-ray reflective coatings has not been measured or proven to be small enough not to re-deform the mirrors after they have been corrected.&#13;
&#13;
Membrane resonance techniques have been used to study thin film stress evolution during deposition. The technique offers superior sensitivity compared to alternatives such as substrate curvature methods. A novel device that uses the membrane resonance technique to repeatably measure stress in thin films is described. Sources of non-repeatability are discussed and repeatability studies are performed. The results presented in this thesis suggest that the membrane resonance technique is suitable for use in measuring X-ray reflective coating stress stability to the minute levels required for future X-ray observatories.
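As background on why resonance is so sensitive to stress, the textbook relation for a square membrane of side a and density \rho under uniform biaxial stress \sigma (stated here for context, not as the thesis's derivation) is

    f_{mn} = \frac{1}{2}\sqrt{\frac{\sigma}{\rho}}\,\sqrt{\left(\frac{m}{a}\right)^{2} + \left(\frac{n}{a}\right)^{2}}

so the fundamental frequency scales as \sqrt{\sigma}, and a small drift in coating stress produces a directly measurable frequency shift.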
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Koopman-Based Reduced-Order State Observer&#13;
for Visual Localization of Robots</title>
<link href="https://hdl.handle.net/1721.1/151814" rel="alternate"/>
<author>
<name>Williams, Jadal</name>
</author>
<id>https://hdl.handle.net/1721.1/151814</id>
<updated>2023-08-24T03:53:12Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Koopman-Based Reduced-Order State Observer&#13;
for Visual Localization of Robots
Williams, Jadal
A reduced-order observer using Koopman lifting linearization is developed for localization of a robot guided by a vision system. The Koopman operator is a powerful method for representing nonlinear robot dynamics as a linear model in a lifted space. The Koopman approach faces two main challenges in robot localization. One is that the lifted linear system is not observable in general; standard Kalman filters and state observers cannot be applied to such non-observable systems. The other is that a large number of observables are required for accurate linearization. Here, we present 1) a new reduced-order state observer for a Koopman lifted linear model that satisfies the observability conditions, and 2) measurement of the multitude of Koopman observables by extracting many features from a camera image. These image features used as Koopman observables are directly measured in real-time and thereby make the observability matrix of the reduced-order state observer full rank. The method is developed for a robot crane system equipped with a vision system. We can estimate the endpoint of the robot using a reduced-order state observer of a lifted linear model where 20 observables are obtained from a visual image.
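A minimal numpy sketch of a Luenberger-style observer on a Koopman-lifted linear model z[k+1] = A z[k] + B u[k], y[k] = C z[k]; the matrices here are random stand-ins, not the thesis's identified crane model:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 20, 2                      # 20 lifted observables, 2 inputs
    A = 0.9 * np.eye(n) + 0.02 * rng.standard_normal((n, n))
    B = 0.1 * rng.standard_normal((n, m))
    C = np.eye(n)                     # image features measured directly as observables
    L = 0.5 * np.eye(n)               # simple gain: A - L C stays well inside the unit circle

    z, z_hat = rng.standard_normal(n), np.zeros(n)
    for k in range(50):
        u = np.zeros(m)
        y = C @ z                                        # feature measurement from the camera
        z_hat = A @ z_hat + B @ u + L @ (y - C @ z_hat)  # observer update
        z = A @ z + B @ u                                # true lifted dynamics
    print("estimation error:", np.linalg.norm(z - z_hat))

Measuring the observables directly (so C is full rank) is what makes a simple constant gain like this workable; with a partial C, observability would have to be established first, which is the thesis's contribution.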
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Developing a Data-Driven Strategy for In-Process Quality Assurance for Additive Manufacturing</title>
<link href="https://hdl.handle.net/1721.1/151811" rel="alternate"/>
<author>
<name>Ibrahim, Mariam Elisabeth</name>
</author>
<id>https://hdl.handle.net/1721.1/151811</id>
<updated>2023-08-24T03:23:18Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Developing a Data-Driven Strategy for In-Process Quality Assurance for Additive Manufacturing
Ibrahim, Mariam Elisabeth
Additive manufacturing has transformed production by introducing a digital approach to manufacturing. Modern additive machinery consists of sensors that provide real-time data on environmental conditions; as a result, significantly more quantitative information is available for a manufacturing process. The applications for sensor-based data are numerous, especially when considered in tandem with information from across the entire process flow. This thesis examines the use of three main types of data in the additive process - feedstock age, environmental conditions, and furnace dynamics - to predict three specific quality outcomes (chemistry, porosity, and solid density) in medical implants at Stryker. By way of a series of predictive models, two main results are achieved for each quality test. First, input variable importance is quantified, enabling a deeper understanding of the significance of each leveraged data set in predicting quality. Second, models are designed to enable a double-digit percent reduction in testing volumes, enabling cost savings and increases in operational efficiencies. Quantifying variable significance enables future work to focus on improving predictions by investing in the quality of specific data sets. More broadly, the findings serve as a proof-of-concept for the impact of leveraging modern data science in additive manufacturing. While this work focuses on a single product line, the methodology can scale. In particular, gains may be far greater in industries that have higher failure-rate tolerances as a result of fewer issues with class imbalances in modeling.
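One way the variable-importance step could look, sketched with scikit-learn on synthetic stand-ins for the three data types (the feature names, model choice, and data are assumptions for illustration):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 3))     # columns: feedstock_age, humidity, furnace_temp
    # synthetic pass/fail quality outcome, driven mostly by feedstock age
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.5, 500) > 0).astype(int)

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    for name, imp in zip(["feedstock_age", "humidity", "furnace_temp"],
                         model.feature_importances_):
        print(f"{name}: {imp:.2f}")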
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Accelerating topology optimization codes using mesh refinement continuation</title>
<link href="https://hdl.handle.net/1721.1/151810" rel="alternate"/>
<author>
<name>Chen, Austin</name>
</author>
<id>https://hdl.handle.net/1721.1/151810</id>
<updated>2023-08-24T03:37:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Accelerating topology optimization codes using mesh refinement continuation
Chen, Austin
A new concept for an algorithm accelerating topology optimization programs is introduced and explained in detail, which involves a continuation of increasing mesh resolutions to achieve low-compliance solutions with greatly reduced computation times. Comparisons with examples from relevant literature show speedups of up to approximately 60% on discretizations on the order of 10^6 elements for common benchmark problems. Improvements in speed can be attributed to taking advantage of running code on coarse meshes as a faster way to generate smart initial guesses to be reused as inputs for subsequent runs on finer meshes. A MATLAB script for the new algorithm and associated modifications to existing topology optimization code are included.&#13;
&#13;
Keywords: Topology Optimization, Mesh Refinement, Computational Efficiency, MATLAB
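The continuation itself is simple to sketch; the thesis provides a MATLAB script, but the idea reads the same in Python, assuming a generic solve(x0) callable standing in for a topology optimization run:

    import numpy as np

    def refine(x):
        # nearest-neighbour upsampling of the density field onto a 2x finer mesh
        return np.kron(x, np.ones((2, 2)))

    def continuation_topopt(solve, nelx, nely, levels=3, volfrac=0.4):
        # solve(x0) takes an initial density field and returns an optimized one
        x = np.full((nely, nelx), volfrac)   # uniform guess on the coarsest mesh
        for level in range(levels):
            x = solve(x)
            if level < levels - 1:
                x = refine(x)                # reuse the result as a smart initial guess
        return x

The speedup comes from the cheap coarse-mesh runs doing most of the design work, so the expensive fine-mesh run starts close to a good solution.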
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Benchmarking Residential Electricity Consumption: A Utility’s Machine Learning Approach to Smart Metering Data&#13;
and the European Energy Crisis</title>
<link href="https://hdl.handle.net/1721.1/151807" rel="alternate"/>
<author>
<name>Canaan, Alexa Reese</name>
</author>
<id>https://hdl.handle.net/1721.1/151807</id>
<updated>2023-08-24T03:13:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Benchmarking Residential Electricity Consumption: A Utility’s Machine Learning Approach to Smart Metering Data&#13;
and the European Energy Crisis
Canaan, Alexa Reese
The European Energy Crisis is putting increasing pressure on the global energy supply, and the residential sector, with its variable consumption patterns, is a key sector: buildings account for 40% of global energy consumption, with residential buildings accounting for 27% [32]. We use utility smart metering data at the hourly energy consumption level and daily peak consumption level from a subset of Iberdrola’s Spanish residential customers. Critically, we develop a model for utilities hoping to analyze smart metering data effectively. We test several different clustering methods and analyze energy consumption at different levels of granularity to identify the best benchmarking practices at all levels. We hypothesize that time, weather, and household characteristics are significant factors in identifying energy consumption for a household and that outlier observations of energy consumption highlight opportunities to conserve more energy - a novel approach that, critically, does not use any personally identifiable information. We also perform residual analysis to identify households that are most sensitive to changes in temperature. This creates a strong foundation for demand-response with customers. As Europe heads towards a long-term energy crisis, it is crucial that utilities have a framework to follow for their analysis before performing interventions with customers. Further potential uses for this methodology at the governmental, utility, and local/individual levels are also included at the end to motivate potential case studies.
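A compact sketch of the clustering step with scikit-learn, using synthetic stand-ins for hourly smart meter profiles (the cluster count and data are illustrative assumptions):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    profiles = rng.gamma(shape=2.0, scale=0.5, size=(200, 24))  # 200 homes x 24 hours
    X = StandardScaler().fit_transform(profiles)                # normalize load shapes

    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
    for c in range(4):
        print(f"cluster {c}: {(labels == c).sum()} households")

Households far from their cluster's centroid are the kind of outliers flagged as conservation opportunities.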
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantifying the Financial Value of Building Decarbonization Technology: Case Studies on New Construction and Retrofitting in the Face of Uncertainty</title>
<link href="https://hdl.handle.net/1721.1/151806" rel="alternate"/>
<author>
<name>Valdez Echeverria, Alejandro</name>
</author>
<id>https://hdl.handle.net/1721.1/151806</id>
<updated>2023-08-24T03:25:18Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Quantifying the Financial Value of Building Decarbonization Technology: Case Studies on New Construction and Retrofitting in the Face of Uncertainty
Valdez Echeverria, Alejandro
The built environment accounts for approximately 40% of global emissions. As a result, property owners face increasing pressure from regulators, investors, and tenants to reduce greenhouse gas emissions. However, building decarbonization requires costly investments that may or may not be recouped over a multi-decade horizon. The quantification of these financial returns is complicated by uncertainty in capital expenses, energy cost savings, emission regulations, and real estate market conditions. Against this background, building developers need a method to quantify the financial value of decarbonization under a variety of future uncertainties. This thesis develops an integrated framework that combines building energy modelling with real estate investment analysis to assess the energy-saving and financial impact associated with the adoption of decarbonizing technologies. To incorporate future uncertainties, the framework employs Monte Carlo techniques to simulate 10,000 different future scenarios of energy prices, real estate market conditions, energy performance, regulatory environments, and grid decarbonization rates (which affect the emissions of a building). We apply this framework in two case studies: (1) the new construction of an office building in NYC, and (2) an energy retrofit of an existing multifamily building in New Jersey. In the first case study, our simulations indicate that, in approximately 76 percent of scenarios, the most profitable decision for the building owner is to adopt a natural gas-powered heating system. However, adopting a building design that provides the flexibility to fully electrify at a later date is more profitable than a natural gas-heated building in 99 percent of scenarios. In the second case study, we evaluate 64 retrofit packages and present a list of the top 30 retrofit solutions that maximize NPV, energy use reduction, and carbon emission reductions.
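A stylized Monte Carlo comparison in the spirit of the framework, with made-up cash flows and distributions (nothing here reproduces the thesis's case-study numbers):

    import numpy as np

    rng = np.random.default_rng(42)
    n_scenarios, years, rate = 10_000, 30, 0.06
    discount = (1 + rate) ** -np.arange(1, years + 1)

    # joint futures of energy prices and carbon penalties
    energy_price = rng.lognormal(mean=0.0, sigma=0.2, size=(n_scenarios, years))
    carbon_fine = rng.uniform(0, 50_000, size=(n_scenarios, years))

    npv_gas = (-100_000 * energy_price - carbon_fine) @ discount
    npv_electrified = -500_000 + (-70_000 * energy_price) @ discount  # higher capex, no fines

    print(f"electrification wins in {(npv_electrified > npv_gas).mean():.0%} of scenarios")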
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Some tools for event frequency decomposition and heterogeneous transfer line analysis</title>
<link href="https://hdl.handle.net/1721.1/151790" rel="alternate"/>
<author>
<name>Giancola, Augusto Rafael.</name>
</author>
<id>https://hdl.handle.net/1721.1/151790</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1995-01-01T00:00:00Z</published>
<summary type="text">Some tools for event frequency decomposition and heterogeneous transfer line analysis
Giancola, Augusto Rafael.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1995; Includes bibliographical references (leaf 137).
</summary>
<dc:date>1995-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Membrane materials for a nonthrombogenic blood oxygenator,</title>
<link href="https://hdl.handle.net/1721.1/151786" rel="alternate"/>
<author>
<name>Weathersby, Paul Kirby.</name>
</author>
<id>https://hdl.handle.net/1721.1/151786</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Membrane materials for a nonthrombogenic blood oxygenator,
Weathersby, Paul Kirby.
Thesis: M.S., Massachusetts Institute of Technology, Department of Chemical Engineering, 1972; Bibliography: leaves 66-72.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of the generalized Boltzmann equation.</title>
<link href="https://hdl.handle.net/1721.1/151782" rel="alternate"/>
<author>
<name>Wei, Thomas Ying Chung.</name>
</author>
<id>https://hdl.handle.net/1721.1/151782</id>
<updated>2025-10-30T17:51:28Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Analysis of the generalized Boltzmann equation.
Wei, Thomas Ying Chung.
Thesis: M.S., Massachusetts Institute of Technology, Department of Nuclear Engineering, 1972; Includes bibliographical references.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Effects of exhaust gas recirculation on exhaust nitric oxide concentrations, cycle-to-cycle variations, and flame speed in an S.I. engine.</title>
<link href="https://hdl.handle.net/1721.1/151781" rel="alternate"/>
<author>
<name>Komiyama, Kunihiko.</name>
</author>
<id>https://hdl.handle.net/1721.1/151781</id>
<updated>2025-10-30T17:03:43Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Effects of exhaust gas recirculation on exhaust nitric oxide concentrations, cycle-to-cycle variations, and flame speed in an S.I. engine.
Komiyama, Kunihiko.
Thesis: M.S., Massachusetts Institute of Technology, Department of Mechanical Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimization of the merged transistor logic gate.</title>
<link href="https://hdl.handle.net/1721.1/151780" rel="alternate"/>
<author>
<name>Kling, Gary William.</name>
</author>
<id>https://hdl.handle.net/1721.1/151780</id>
<updated>2025-10-30T15:50:03Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Optimization of the merged transistor logic gate.
Kling, Gary William.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Feasibility study of a satellite-to-satellite data relay network.</title>
<link href="https://hdl.handle.net/1721.1/151779" rel="alternate"/>
<author>
<name>Eastwood, Lester Francis.</name>
</author>
<id>https://hdl.handle.net/1721.1/151779</id>
<updated>2025-10-30T17:03:44Z</updated>
<published>1970-01-01T00:00:00Z</published>
<summary type="text">Feasibility study of a satellite-to-satellite data relay network.
Eastwood, Lester Francis.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1970; Bibliography: leaf 76.
</summary>
<dc:date>1970-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance evaluation feedback in the United States Marine Corps, a critical analysis.</title>
<link href="https://hdl.handle.net/1721.1/151778" rel="alternate"/>
<author>
<name>Knowles, Robert Clement.</name>
</author>
<id>https://hdl.handle.net/1721.1/151778</id>
<updated>2025-10-30T15:50:05Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Performance evaluation feedback in the United States Marine Corps, a critical analysis.
Knowles, Robert Clement.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1973; Includes bibliographical references.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Correlated malfunctions in redundant systems.</title>
<link href="https://hdl.handle.net/1721.1/151742" rel="alternate"/>
<author>
<name>Weinstein, William Winiker.</name>
</author>
<id>https://hdl.handle.net/1721.1/151742</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Correlated malfunctions in redundant systems.
Weinstein, William Winiker.
Thesis: M.S., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 1972; Bibliography: leaves 88-89.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An objective measure for the loudness of impulsive noises.</title>
<link href="https://hdl.handle.net/1721.1/151737" rel="alternate"/>
<author>
<name>Weekly, Gordon David.</name>
</author>
<id>https://hdl.handle.net/1721.1/151737</id>
<updated>2025-10-30T15:50:01Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">An objective measure for the loudness of impulsive noises.
Weekly, Gordon David.
Thesis: M.S., Massachusetts Institute of Technology, Department of Electrical Engineering, 1972; Bibliography: leaves 96-97.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Mixed international joint ventures in the exploration, development and production of petroleum.</title>
<link href="https://hdl.handle.net/1721.1/151736" rel="alternate"/>
<author>
<name>Warner, Eldon Irwin Gerard.</name>
</author>
<id>https://hdl.handle.net/1721.1/151736</id>
<updated>2025-10-30T17:51:25Z</updated>
<published>1972-01-01T00:00:00Z</published>
<summary type="text">Mixed international joint ventures in the exploration, development and production of petroleum.
Warner, Eldon Irwin Gerard.
Thesis: M.S., Massachusetts Institute of Technology, Sloan School of Management, 1972; Lacking leaf 105. Leaf 7.1 inserted between Leaves 7 and 8.; Bibliography: leaves 79-82.
</summary>
<dc:date>1972-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Altered retinal connections following partial tectum lesions in neonate hamsters.</title>
<link href="https://hdl.handle.net/1721.1/151735" rel="alternate"/>
<author>
<name>Jhaveri, Sonal Ramniklal.</name>
</author>
<id>https://hdl.handle.net/1721.1/151735</id>
<updated>2025-10-30T18:06:07Z</updated>
<published>1973-01-01T00:00:00Z</published>
<summary type="text">Altered retinal connections following partial tectum lesions in neonate hamsters.
Jhaveri, Sonal Ramniklal.
Thesis: M.S., Massachusetts Institute of Technology, Department of Psychology, 1973; Vita.; Bibliography: leaves 67-73.
</summary>
<dc:date>1973-01-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis of Respiratory Time Series Data for Breathing Discomfort Detection Prior to Sleep Onset During APAP Therapy</title>
<link href="https://hdl.handle.net/1721.1/151711" rel="alternate"/>
<author>
<name>Unger, Shelby</name>
</author>
<id>https://hdl.handle.net/1721.1/151711</id>
<updated>2023-08-01T03:39:21Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Analysis of Respiratory Time Series Data for Breathing Discomfort Detection Prior to Sleep Onset During APAP Therapy
Unger, Shelby
Discomfort during treatment continues to be a major barrier to adherence to positive airway pressure (PAP) therapy. Thus, a key pillar of ResMed’s business strategy is to deliver intelligent tools that assist healthcare providers in identifying which patients may be struggling with therapy, and why, to enable more effective interventions and personalized patient education. One potential cause of discomfort is perceived stuffiness from pressure levels lower than some patients can tolerate. This thesis seeks to explore which patterns in the high-resolution breathing data from ResMed devices may be used to identify patients who are experiencing breathing discomfort at low pressures at the beginning of their therapy sessions. Specifically, time-series clustering is performed on sequential respiratory data to identify groups of patients with similar breathing patterns. The independence between clusters and variables pertaining to patients’ demographic characteristics, therapy settings, usage habits, respiratory characteristics, and self-reported comfort levels is evaluated via statistical testing. Based on the results, features in breathing data are identified that may be meaningful indicators of whether a patient is experiencing discomfort or breathlessness. Additionally, opportunities for additional data collection that would enable further analysis and more accurate modelling are discussed.
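The independence testing can be sketched as a chi-square test of cluster membership against a categorical patient variable; the contingency table below is illustrative, not ResMed data:

    import numpy as np
    from scipy.stats import chi2_contingency

    # rows: breathing-pattern clusters; columns: comfortable vs. uncomfortable
    table = np.array([[30, 10],
                      [12, 25],
                      [20, 18]])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p:.4f}")  # a small p rejects independence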
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Industry 4.0 in Biomanufacturing: Predictive Real-Time Models Using Process Analytical Technology</title>
<link href="https://hdl.handle.net/1721.1/151710" rel="alternate"/>
<author>
<name>Murr, Michaela</name>
</author>
<id>https://hdl.handle.net/1721.1/151710</id>
<updated>2023-08-01T04:13:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Industry 4.0 in Biomanufacturing: Predictive Real-Time Models Using Process Analytical Technology
Murr, Michaela
In biomanufacturing, process analytical technology (PAT) has become an essential tool for improving product quality, reducing costs, and increasing efficiency. In this thesis, we collect capacitance and optical density data from two in-line sensors in production bioreactors to compute real-time readings of viable cell density (VCD) and viability, two critical metrics that drive product quality and batch yield. Comparing predictions with manually collected samples, a Gaussian Process Regressor model with Matern Kernel (nu=0.5) is found to be optimal, achieving a MAPE of 7.46%, well within the 10% error threshold defined by Amgen process development scientists. We then utilize this VCD model in conjunction with the optical density probe, which measures total cell density (TCD), in a novel way to obtain real-time measurements of viability within 5% of offline measurements conducted using a cell counter. Our results demonstrate the effectiveness of using real-time sensor data and ML models for monitoring critical quality attributes in biomanufacturing. This will enable an estimated $2M per year in savings from avoidable product losses at Amgen’s new manufacturing plant in North Carolina and an approximately 50% reduction in manual sampling efforts, and will offer further process improvement opportunities, particularly for advanced process control. This use case demonstrates the potential of PAT for improving biomanufacturing processes.
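The stated model is straightforward to sketch with scikit-learn, assuming that library and using synthetic stand-ins for the sensor readings and offline VCD samples:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern
    from sklearn.metrics import mean_absolute_percentage_error

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(50, 2))        # columns: capacitance, optical density
    y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 50)   # synthetic VCD

    gpr = GaussianProcessRegressor(kernel=Matern(nu=0.5), normalize_y=True)
    gpr.fit(X[:40], y[:40])                     # fit on earlier offline samples
    pred = gpr.predict(X[40:])
    print("MAPE:", mean_absolute_percentage_error(y[40:], pred))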
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Model-Based Technology Roadmapping of Sustainable Aviation Technologies</title>
<link href="https://hdl.handle.net/1721.1/151709" rel="alternate"/>
<author>
<name>Liu, Lisa</name>
</author>
<id>https://hdl.handle.net/1721.1/151709</id>
<updated>2023-08-01T04:10:51Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Model-Based Technology Roadmapping of Sustainable Aviation Technologies
Liu, Lisa
The development of sustainable aviation products, and any products with long-term development cycles, requires the projection of future performance of nascent technologies. Technology roadmapping is a maturing field used by many firms to develop strategy and projects aimed at meeting certain market needs. However, tying together technology improvement rates of key figures of merit with the performance of a system including said technology is challenging. Data on technology improvement rates are time-consuming to collect and difficult to decompose into improvement of specific aspects of a technology. Here, using low-temperature polymer electrolyte membrane hydrogen fuel cells as a pilot technology, we demonstrate a method for evaluating technology improvement which utilizes improvement rates mined from patent data in a first-principles-based model. We demonstrate the ability to predict a similar technology improvement rate from this bottom-up approach as is yielded through other methods in the literature, and the ability to pinpoint the system parameters that will drive improvement. We discuss the drawbacks of using fuel cells in an aircraft and the organizational considerations required to adopt a broad shift in technology roadmapping approaches in a large firm. This approach reduces the time needed to gather technology improvement rate data from weeks to minutes and links the sensitivity of system performance to improvement rate. In combination with existing approaches to evaluating technology improvement from a top-down perspective, this method can provide insights into which parameters of a system have the most potential to improve and meaningfully impact performance. For sustainable aviation technologies, a top-down approach has yielded insight and a narrowing of potential pathways toward achieving net zero by 2050. Adding this type of approach to analyses can provide insight for predicting whether technologies may actually achieve performance goals set by the industry and what investments may make the most impact in improving performance. As sustainable aviation gains momentum, quantitative tools such as this one will be needed to make strategic technology investment decisions that will change the next generation of aircraft.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Enabling Actionable Maintenance Analytics with Ontology-driven Natural Language Processing</title>
<link href="https://hdl.handle.net/1721.1/151708" rel="alternate"/>
<author>
<name>Pascualy, Gabriel</name>
</author>
<id>https://hdl.handle.net/1721.1/151708</id>
<updated>2023-08-01T04:16:58Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Enabling Actionable Maintenance Analytics with Ontology-driven Natural Language Processing
Pascualy, Gabriel
Asset maintenance is a crucial aspect of biomanufacturing operations, with significant associated costs. To mitigate these expenses, companies are increasingly adopting predictive maintenance approaches that leverage data to identify potential equipment issues before they arise. Maintenance work orders (MWOs) are essential tools for planning, scheduling, and tracking maintenance tasks. Although MWOs contain explicit ontologies with structured fields for information like costs, identifiers, and dates, valuable maintenance details are often stored in unstructured text fields. These fields include work descriptions, troubleshooting information, and completed work summaries. While the unstructured text generally contains an implicit ontology, such as a shared vocabulary and references to externally documented system hierarchies, automatically extracting insights is currently infeasible, necessitating manual analysis by engineers—a process that is not cost-effective at scale.&#13;
&#13;
This project aims to develop an ontology that integrates the explicit and implicit maintenance ontologies found at Amgen, as well as a natural language processing (NLP) tool for reconstructing this unified ontology from MWOs' structured and unstructured fields. By utilizing this unified ontology and NLP tool, we seek to explore the limitations of a post-processing NLP solution and pinpoint future research areas with the potential to enhance downstream analytics.&#13;
&#13;
For each MWO, our tool identifies the maintained assets and their corresponding components, classifies the rationale behind the work order generation, and assigns the problem, cause, and remedy associated with each component.&#13;
&#13;
Tested on a sample of 50 manually labeled MWOs covering maintenance of 188 assets, our tool achieved the following F1 scores across each category: 0.83 for failed assets, 0.46 for rationale, 0.62 for failed components, 0.58 for problems, 0.76 for causes, and 0.62 for remedies. The low F1 scores in some categories can be attributed to the missing context that is normally inferred by a reliability engineer during manual analysis. Despite these limitations, we provided recommendations for extending the explicit ontology to enhance performance and identified maintenance documentation enhancements as a potential area of future research.&#13;
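&#13;
Here F1 denotes the standard harmonic mean of precision and recall,&#13;
&#13;
    F_1 = \frac{2\,\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}}&#13;
&#13;
computed for each category against the manual labels.&#13;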
&#13;
Furthermore, we employed the tool to examine 1000 leak records, showcasing the significance of a unified ontology in generating accurate baselines. Overall, our findings suggest that the analytics generated by this tool could substantially reduce the time engineers spend analyzing work order data, offering unprecedented insights into plant maintenance operations.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Outside Inside, Inside Around : Leveraging External Innovation Through Strategic Investment</title>
<link href="https://hdl.handle.net/1721.1/151707" rel="alternate"/>
<author>
<name>Kramer, Jomi S.</name>
</author>
<id>https://hdl.handle.net/1721.1/151707</id>
<updated>2023-08-01T03:41:20Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Outside Inside, Inside Around : Leveraging External Innovation Through Strategic Investment
Kramer, Jomi S.
Because continuous improvement and company growth are fundamental to future business success, companies constantly look for ways to innovate. To succeed, a large company must both augment its core business and create industry innovation. However, decades of literature and company performance reviews have demonstrated that large companies often struggle to develop innovative products or to foresee disruptive technology and capture the market quickly and nimbly.&#13;
&#13;
This thesis examines how a large company can effectively leverage external innovation for internal success. In an effort to stimulate the ideation process and internalization of bold innovations, a company can implement a Corporate Venture Capital (CVC) Team to evaluate, transition, and develop external innovation for internal company growth. A successful CVC understands the needs of the parent company, captures external venture capital (VC) opportunities, and facilitates the transition of new technology to support core business growth and develop industry innovation. By investigating the pathways through which innovation ideas evolve from a concept to fully integrated products, it is apparent that each method has its own merits and challenges. However, with a sound operating strategy, a large company can leverage the strengths of strategic investments. A CVC with an established and scalable process can facilitate the exploration and implementation of external innovation. Furthermore, the revelation that a CVC is essentially an internal sales team geared towards internal stakeholders provides a new framework for CVC teams to effectively engage internal stakeholders and portfolio companies to capitalize on external innovation for mutually beneficial growth opportunities.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Driving Growth Through Sales and Operations Planning, Inventory Management, and Supply Chain Expansion</title>
<link href="https://hdl.handle.net/1721.1/151706" rel="alternate"/>
<author>
<name>Cass, Gregory</name>
</author>
<id>https://hdl.handle.net/1721.1/151706</id>
<updated>2023-08-01T04:23:41Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Driving Growth Through Sales and Operations Planning, Inventory Management, and Supply Chain Expansion
Cass, Gregory
Exponential sales growth creates unique challenges for manufacturing companies. An increasing backlog directly increases lead times unless production capacity can be scaled at the same rate. Lead times greater than the industry average threaten future growth as customers begin choosing substitutes that can arrive sooner.&#13;
&#13;
This research explores the hypothesis that production throughput can be increased through improved business processes. The investigation uses ShopSabre CNC as a case study to explore techniques for developing new processes, such as sales and operations planning (S&amp;OP), inventory modeling, and supply chain capacity, flexibility, and resilience analyses. The effectiveness of the research is measured by the observed changes in throughput and financial metrics.&#13;
&#13;
The research illustrates that all the investigated techniques contributed materially to either increasing production throughput or improving financial outcomes. The primary source of the observed impacts was improved stakeholder alignment, which had been limiting growth. The recommendations focus on improving quality and integrating disciplined validation into an established culture valuing urgent adaptation. Inventory model assumptions and backlog dynamics were identified as exciting potential follow-on research opportunities.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Her Playing Eye: Courtesans at Chess in the Book of Games (c. 1283/84)</title>
<link href="https://hdl.handle.net/1721.1/151705" rel="alternate"/>
<author>
<name>Nansi, Khushi</name>
</author>
<id>https://hdl.handle.net/1721.1/151705</id>
<updated>2023-08-01T04:10:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Her Playing Eye: Courtesans at Chess in the Book of Games (c. 1283/84)
Nansi, Khushi
In the medieval educational codex the Book of Games: Chess, Dice, and Tables (Libro de los Juegos), completed in Seville in 1283, there is more than meets the eye. The codex features some hundred chess problems and another hundred board and dice games, each accompanied by miniatures depicting games at play, in which players sit across the board from each other. Enclosed, frozen in the frame of the illustrations, men sit against women, kings and queens, women and women, courtesans and knights, monks and men, nuns and children, young and old: a range of players of different faiths and backgrounds.&#13;
&#13;
This thesis examines the covert complexity of women’s relationships in the thirteenth-century Castilian court of Alfonso X, el Sabio (1221-84), through their representations at games of chess. Chess in the medieval imaginary was a game not only strategic but also laden with sexual connotations. It mirrored the site of battle and the court; the composite of a series of moves, it replicated the advance of courtship and seduced the mind for the forfeit of a hand. Medieval epics and material culture visualize this phenomenon: when a man and a woman are represented at chess, it is read as a game between lovers. In the Book of Games, what is going on between the women, for whom the archive is always limited and fragmentary; what have our eyes missed? To explore this question, this thesis undertakes a necessary exercise in speculation. It begins with a review of the state of the discussion of the manuscript in question, delving into the various threads of movement encapsulated within, to query the notion of autonomy in making. Through a close reading of key illustrations bearing a trace of personal reception, it probes the central methodological question of seeking to see, theorizing gaze and nazar in sites of potential encounter. Understanding the encounter, and the alternate forms of intimacy made possible through play, I observe the women looking at each other over the chessboard in a moment of mutual regard. This thesis argues that the Book of Games possesses an already existing unseen complexity, perhaps queer or perhaps questioning, lying latent, that we must learn to seek to see, looking otherwise.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Min Bʻīd la-Bʻīd (from Far to Far): On Homemaking under Diasporic Conditions</title>
<link href="https://hdl.handle.net/1721.1/151704" rel="alternate"/>
<author>
<name>BuGhanem, Luna</name>
</author>
<id>https://hdl.handle.net/1721.1/151704</id>
<updated>2023-08-17T04:11:41Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Min Bʻīd la-Bʻīd (from Far to Far): On Homemaking under Diasporic Conditions
BuGhanem, Luna
In the villages of Mount-Lebanon, homes are realized through years of immigrants’ exchanged remittances, messages, objects, and visits. This thesis offers an expanded understanding of the homes and homemaking of diasporic families, through which they manage separation and fragmentation and adapt to personal and regional political change.&#13;
&#13;
To reveal how diasporic subjects build and make their homes while and from abroad, or back and forth between locales, I draw from my conversations with owners of remittance-funded houses in ’Aley and Shūf. In reconstructed videos of the in-progress homes, first-hand accounts, concurrent global events, and the material traces of migration are juxtaposed, making the relationships between distance, its mediations, and the built form apparent.&#13;
&#13;
As a result, several signature architectural concepts are re-imagined. In the first chapter, “site” is no longer understood to simply be location, where the building is bound by coordinates or where owners have to be physically present; site, as captured through WhatsApp images, is dispersed, thus becoming the captured change that occurs throughout the conception and construction of their homes. Each following chapter similarly re-imagines and expands our use of architectural concepts such as “budget,” “program,” “phases,” “finishes,” “furniture, fixtures,” and “contracts” to suggest how we may appropriate this new understanding as a design tool.&#13;
&#13;
Ultimately, this thesis establishes the human experience of immigrant-builders as not ancillary but central to the discipline of architecture.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Physics-based Estimates of Structural Material Quantities for Urban-level Embodied Carbon Assessment in Buildings</title>
<link href="https://hdl.handle.net/1721.1/151703" rel="alternate"/>
<author>
<name>Sory, Leïlah Yadia Kelly</name>
</author>
<id>https://hdl.handle.net/1721.1/151703</id>
<updated>2023-08-01T03:02:30Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Physics-based Estimates of Structural Material Quantities for Urban-level Embodied Carbon Assessment in Buildings
Sory, Leïlah Yadia Kelly
Decarbonizing the built environment requires immediate action to meet global climate targets. World population growth and rapid urbanization add to the urgency of this challenge. In fact, buildings account for about 40% of all energy and carbon emissions from operations and from materials’ production and construction processes. More specifically, buildings’ structural systems are responsible for a significant share of the upfront embodied carbon emitted before construction. Most life cycle assessment (LCA) tools rely on fully detailed material takeoffs from high-resolution Building Information Models (BIM) and are therefore incomplete during conceptual design. Moreover, urban building energy modeling (UBEM) is a proven technique allowing cities to evaluate technology pathways toward their net-zero emissions goals. It uses simplified building archetypes to estimate operational energy on a large scale with reasonable accuracy. However, little attention has been paid to urban-level embodied carbon assessment.&#13;
&#13;
Therefore, this thesis investigates the potential of implementing physics-based structural quantity estimation in early-stage design for embodied carbon quantification at the urban scale. This approach combines bottom-up engineering calculations with data-driven surrogate modeling to automatically predict embodied carbon from a high-fidelity model. Finally, structural parameters are embedded into energy model archetypes to deploy this method in an existing urban-scale modeling tool. The feasibility of the proposed methodology is assessed through case studies that estimate embodied carbon and energy use intensities at the individual-building and urban scales. Results show the benefits of spatially mapping the distribution of embodied and operational carbon in the building stock and of obtaining more nuanced estimates of carbon emissions compared with existing benchmarking studies. The primary use case of this work is to better inform planning and policy decision-making for retrofitting strategies and future building design.
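As an illustration of the surrogate-modeling step described above, the following Python sketch fits a regressor mapping coarse archetype parameters to structural material intensity; the features, synthetic training data, and emission factor are invented placeholders, not the thesis's pipeline or data.

    # Hypothetical sketch: surrogate predicting structural material intensity
    # (kg/m^2) from coarse archetype parameters, standing in for the
    # physics-based + data-driven approach described above.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    # Features: [floor area m^2, storeys, span m, structural system id]
    X = rng.uniform([200, 1, 3, 0], [5000, 40, 12, 3], size=(500, 4))
    # Synthetic target: intensity grows with storeys and span (illustrative).
    y = 80 + 4.0 * X[:, 1] + 6.0 * X[:, 2] + rng.normal(0, 10, 500)

    surrogate = GradientBoostingRegressor().fit(X, y)
    archetype = np.array([[1200, 8, 6, 1]])
    intensity = surrogate.predict(archetype)[0]      # kg of structure per m^2
    embodied = intensity * archetype[0, 0] * 0.5     # x assumed factor (kgCO2e/kg)
    print(f"estimated embodied carbon: {embodied:.0f} kgCO2e")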
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Re-Alimentaciones-Cruzadas: Procesos de Re-imaginación entre Epistemologías Acústicas/Cross-Feedback: Re-Imagining processes between Acoustemologies</title>
<link href="https://hdl.handle.net/1721.1/151702" rel="alternate"/>
<author>
<name>García Belmont, Cristóbal Herman</name>
</author>
<id>https://hdl.handle.net/1721.1/151702</id>
<updated>2023-08-01T03:20:56Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Re-Alimentaciones-Cruzadas: Procesos de Re-imaginación entre Epistemologías Acústicas/Cross-Feedback: Re-Imagining processes between Acoustemologies
García Belmont, Cristóbal Herman
This compilation of texts actively revisits and develops poetical mythologies through notions of reverberation and feedback to transcend juxtapositions. Re-imagining two mythological snakes, the Amphisbaena and the Ouroboros, the reader is invited to revalue their placement in intricate post-colonial realities by imagining metaphysical circuits connecting different spaces and times. The work presents a series of contrasting essays reflecting the mouths and body of the Amphisbaena, the two-headed snake. The first essay focuses on the littoral region of Perú during the transition from a viceroyalty into a republican state. Here we find a shapeshifting musical tradition engaging with percussive idiophonic, or self-resonating, instruments. Traditions get confined into the visual realm to capture a new multicultural identity. Lost in transduction, how, via sound, does one create circuits which transform bodies, space, and time through reverberation? The second essay narrates the history of acoustic feedback as the birth of an ouroboric, self-eating cycle. By deconstructing/reconstructing a series of artworks, the text becomes a tale of metamorphosis of the ouroboros, getting into notions of active practice, later into an amphisbaena regarding notions of material resonance, and finally into concrete poetry in the process of analyzing the development of consciousness around the artwork. The work ties two sonic practices from different times, cultures, and locations by building sculptural self-resonators activated by auditory feedback. The repercussions of transforming spatial configurations via sound prove that sonic practices can alter how we approach our nomadic circulations. The sound created by contrasts blurs the borders of sense and perception, giving us a space for subjective interpretation and leading to new imaginaries. This work reflects sonic feedback: dichotomous diaphragms, some idiophonic, some membranous, that have learned via the artwork to resonate together, accentuating relations, creating circuits, and shortening distances that once seemed far away.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Improving the temporal consistency of satellite-based contrail detections using ensemble Kalman filtering</title>
<link href="https://hdl.handle.net/1721.1/151697" rel="alternate"/>
<author>
<name>Robion, Louis A.</name>
</author>
<id>https://hdl.handle.net/1721.1/151697</id>
<updated>2023-08-01T03:10:23Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Improving the temporal consistency of satellite-based contrail detections using ensemble Kalman filtering
Robion, Louis A.
Condensation trails, or contrails, are line-shaped ice clouds that can form behind aircraft, and current estimates indicate that they account for the majority of aviation’s climate impacts. While contrail models exist to estimate these effects, a lack of experimental or observational data makes them difficult to validate.&#13;
&#13;
This thesis develops a method for retrieving large-scale, temporally consistent observations of contrails using satellite imagery. Having a consistent history of detections of an individual contrail is necessary to accurately derive observational constraints on contrail properties such as lifetime. Inconsistencies not only reduce the quality of such a dataset, but also risk introducing biases in the computed properties.&#13;
&#13;
We use an existing deep-learning-based contrail detector which currently exhibits temporal inconsistencies that make tracking challenging. We address this issue by post-processing the model’s outputs with an ensemble Kalman filter. We create a hand-labeled dataset of 73 contrails tracked over a 2-hour time series, which we use to quantify performance. We find that by adding temporal correlations, we recover 53.25% of contrail pixels on an image (recall), and 53.25% of the pixels predicted as contrail by the detection framework are indeed contrail pixels (precision). For individual contrail tracks, we find that filtering increases the average duration of consecutive consistent contrail detections from 9.4 minutes to 25.7 minutes. On average, the duration of these consistent contrail detections after filtering represents 43.7% of a contrail’s total lifetime, compared to only 15.5% for the baseline. We also find that the high-frequency Fourier components of the signal, which are responsible for flickering and noise, are reduced by 50% in magnitude.
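To make the post-processing step concrete, here is a minimal stochastic ensemble Kalman filter update in numpy; the state layout, identity observation operator, and diagonal covariance approximation are simplifying assumptions for illustration, not the thesis's configuration.

    # Minimal ensemble Kalman filter update of the kind that could smooth a
    # per-pixel contrail score over successive satellite frames.
    import numpy as np

    rng = np.random.default_rng(1)
    n_ens, n_state = 50, 100                  # ensemble members, state size
    ensemble = rng.normal(0.5, 0.2, size=(n_ens, n_state))   # prior scores

    def enkf_update(ensemble, obs, obs_var):
        """Assimilate a noisy observation of the full state (H = identity)."""
        perturbed = obs + rng.normal(0, np.sqrt(obs_var), size=ensemble.shape)
        anomalies = ensemble - ensemble.mean(axis=0)
        # Diagonal approximation of the forecast covariance.
        p_f = (anomalies ** 2).sum(axis=0) / (len(ensemble) - 1)
        gain = p_f / (p_f + obs_var)          # elementwise Kalman gain
        return ensemble + gain * (perturbed - ensemble)

    detector_frame = rng.normal(0.6, 0.3, size=n_state)  # noisy detector output
    ensemble = enkf_update(ensemble, detector_frame, obs_var=0.05)
    print(ensemble.mean(axis=0)[:5])          # smoothed contrail scores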
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed estimation algorithms for autonomous systems</title>
<link href="https://hdl.handle.net/1721.1/151695" rel="alternate"/>
<author>
<name>Oneci, Codrin P.</name>
</author>
<id>https://hdl.handle.net/1721.1/151695</id>
<updated>2023-08-01T03:56:57Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Distributed estimation algorithms for autonomous&#13;
systems
Oneci, Codrin P.
This thesis investigates the theory and applications of distributed estimation algorithms. It was found that, for specific objective functions, general meshes of distributed agents may estimate a state while maintaining a consensus over its probability density function (PDF) and simultaneously satisfying communication/localization constraints. Hyperparameter tuning techniques that prevent RMSE drift and inefficient use of the communication network are described for multiple algorithms. An example application with rovers shows the power of such algorithms in robotics.
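For intuition, the sketch below runs a textbook average-consensus iteration on a small communication graph; the graph, step size, and scalar state are illustrative stand-ins for the mesh estimation schemes studied in the thesis.

    # Toy average-consensus: agents repeatedly nudge their estimate toward
    # their neighbors' estimates and converge to the network-wide mean.
    import numpy as np

    adjacency = np.array([[0, 1, 0, 1],
                          [1, 0, 1, 0],
                          [0, 1, 0, 1],
                          [1, 0, 1, 0]], dtype=float)   # 4-cycle graph
    estimates = np.array([1.0, 3.0, 5.0, 7.0])          # local estimates

    degree = adjacency.sum(axis=1)
    eps = 0.5 / degree.max()          # step size small enough to converge
    for _ in range(100):
        # x_i gets x_i + eps * sum_j a_ij (x_j - x_i)
        estimates = estimates + eps * (adjacency @ estimates - degree * estimates)
    print(estimates)                  # all agents converge to the mean, 4.0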
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Scenario Analysis of Profitability through Simulation of Different Business Contract Models</title>
<link href="https://hdl.handle.net/1721.1/151693" rel="alternate"/>
<author>
<name>Heintz, Lauren</name>
</author>
<id>https://hdl.handle.net/1721.1/151693</id>
<updated>2023-08-01T04:03:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Scenario Analysis of Profitability through Simulation of Different Business Contract Models
Heintz, Lauren
Many American manufacturing companies have faced supply chain disruption and inflation on sourced goods, freight, and labor. Coupled with the growth of online retail and direct-to-consumer shipping trends, many businesses have had to rethink strategic partnerships and distribution models. These factors have incentivized the adult incontinence manufacturer "IncoMan" to seek out strategic partnerships with other businesses to reduce costs. The reimbursed healthcare market specifically has seen a decline in profitability. State-mandated reimbursement rates for products are inconsistent across the country but have been consistently declining. Insurance agencies acting as intermediaries have further eroded margins. To continue to provide these necessary medical products, this incontinence manufacturer and distributor explores contract options with other business partners to leverage both companies’ strengths and maximize profitability in this market. This application of financial modeling and scenario analysis helps quantify the risk between two possible contract models, a distributor model and a service model. Furthermore, it takes into account the uncertainty in demand parameters via a quasi-Monte Carlo simulator. The result is a set of visualizations that can be used to analyze both models under deterministic and stochastic scenarios. The most influential factors in profitability stem from the state-mandated reimbursement price and the insurance agency contracts. Further, customer revenue-per-order and the labor cost-to-serve each customer strongly impact profitability in both models. Of the two contract models simulated, the distributor model is riskier than the service model, but the service model lacks growth potential. The simulator can be reused and customized to different ranges of data and inputs, depending on the customer engagement. Ultimately, the goal is to provide business leaders with a snapshot of the first-order factors in any new contract agreement.
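As a flavor of the quasi-Monte Carlo approach mentioned above, this sketch draws scrambled Sobol scenarios for two demand parameters and pushes them through a toy profit function; the bounds, cost figures, and profit form are invented placeholders, not the thesis's model.

    # Low-discrepancy demand scenarios via scipy's Sobol sampler.
    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.Sobol(d=2, scramble=True, seed=7)
    u = sampler.random_base2(m=10)               # 2^10 scenarios in [0,1)^2
    # Scale to (orders per month, revenue per order) ranges.
    scen = qmc.scale(u, l_bounds=[500, 40.0], u_bounds=[2000, 90.0])

    def profit(orders, rev_per_order, model):
        cost_to_serve = 25.0 if model == "distributor" else 35.0
        fixed = 20000.0 if model == "distributor" else 8000.0
        return orders * (rev_per_order - cost_to_serve) - fixed

    for m in ("distributor", "service"):
        p = profit(scen[:, 0], scen[:, 1], m)
        print(m, "mean:", round(p.mean()), "5th pct:", round(np.percentile(p, 5)))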
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Behavioral Design for Emotional Intelligence: Leveraging Affective Computing in Medical Education for Improved Care for Substance Use Disorders</title>
<link href="https://hdl.handle.net/1721.1/151692" rel="alternate"/>
<author>
<name>Daulbayeva, Aidana</name>
</author>
<id>https://hdl.handle.net/1721.1/151692</id>
<updated>2023-08-01T03:08:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Behavioral Design for Emotional Intelligence: Leveraging Affective Computing in Medical Education for Improved Care for Substance Use Disorders
Daulbayeva, Aidana
The rise in opioid use has led to a significant increase in overdose deaths since 1999. Negative attitudes and stigma from doctors towards opioid patients can exacerbate the situation, resulting in undertreatment, poor communication, and labeling. Stigma can also be expressed through one's affective states, where facial expressions may unintentionally convey negative emotions or judgments.&#13;
&#13;
To address this issue, this thesis aims to introduce medical trainees to Medship, an affective computing tool that promotes self-reflection about one’s facial expression and raises awareness about stigma, while also filling the gap in medical training. The focus is on changing human behavior without triggering the backfire effect on busy physicians. This will be accomplished by combining theories of behavioral design and using affective computing as a backbone for creating the app.&#13;
&#13;
The Medship project is a joint effort between the Affective Computing group at the MIT Media Lab and Cornell Weill Medicine, with funding support from the Foundation for Opioid Response Efforts. The ultimate goal is to integrate this project into the medical student curriculum and eventually improve the quality of care for substance use disorder patients.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>From Bench to Bucks: An Approach and Case Study in Scaling Additive Research and Development Technologies within the Aerospace Industry</title>
<link href="https://hdl.handle.net/1721.1/151690" rel="alternate"/>
<author>
<name>Smedberg, Allison R.</name>
</author>
<id>https://hdl.handle.net/1721.1/151690</id>
<updated>2023-08-01T03:04:27Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">From Bench to Bucks: An Approach and Case Study in Scaling Additive Research and Development Technologies within the Aerospace Industry
Smedberg, Allison R.
Scaling technology-focused companies is a unique challenge, as taking cutting-edge technology from the lab to consumer markets often requires significant R&amp;D work in parallel with all the typical challenges that any start-up faces. This thesis presents a framework for technology-focused companies to approach this crucial scaling period. The approach is centered on a tool called the House of Quality (HOQ), which is designed to help prioritize design features of consumer products. By defining a Holistic House of Quality (HHOQ) that includes company-wide capabilities and auxiliary functions, and applying HHOQs to company growth, this thesis explores whether HHOQs can help guide scaling decisions for companies in areas like manufacturing operations, organization structure and hiring, and trade-offs between short-term and long-term needs.&#13;
&#13;
This thesis explores the HHOQ scaling framework through the lens of Wingate, a technology-centered company in additive manufacturing that focuses on the material development and printing of high-temperature metals. Wingate had notably strong customer relationships and a product technically superior to its competitors’, and was facing the challenge of rapidly scaling operations to meet customer demand. The HHOQ process and scaling efforts were implemented and observed from January to August of 2022.&#13;
&#13;
During the timeframe of the research, Wingate grew headcount from 4 to 10 employees, reduced overdue customer backlog by 46%, and increased on-time delivery by 15%. The HHOQ framework proved useful in providing a structured way to assess scaling efforts in relation to customer needs, and successfully painted a picture of which auxiliary functions would be important besides the success of the technology itself. This thesis is anticipated to be a starting point for more widespread consideration of HHOQs as a tool in scaling decisions, including evaluation of the framework’s effectiveness over longer time horizons and across various industries.
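For readers unfamiliar with the arithmetic underlying a House of Quality, the sketch below shows the standard weighted-relationship computation; the needs, capabilities, weights, and 0/1/3/9 relationship scores are invented for illustration and are not Wingate's data.

    # Customer-need weights times a need-vs-capability relationship matrix
    # yields a priority score per capability (classic QFD computation).
    import numpy as np

    weights = np.array([0.5, 0.3, 0.2])   # importance of each customer need
    capabilities = ["printer capacity", "QA staffing", "powder sourcing"]
    # Rows: needs (delivery, quality, cost); columns: capabilities.
    rel = np.array([[9, 1, 3],
                    [1, 9, 3],
                    [3, 1, 9]])

    priority = weights @ rel
    for cap, score in sorted(zip(capabilities, priority), key=lambda t: -t[1]):
        print(f"{cap}: {score:.1f}")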
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Economic Analysis of 3D-Printed Ceramic Cores for Gas Turbine Investment Castings</title>
<link href="https://hdl.handle.net/1721.1/151689" rel="alternate"/>
<author>
<name>Maristany, Eduardo</name>
</author>
<id>https://hdl.handle.net/1721.1/151689</id>
<updated>2023-08-01T03:10:56Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Economic Analysis of 3D-Printed Ceramic Cores for GasTurbine Investment Castings
Maristany, Eduardo
When manufacturing blades and vanes for gas turbine engines, internal cooling channels are formed by investment casting with sacrificial ceramic cores. The hot injection and pressing techniques traditionally used to manufacture ceramic cores have long lead times and high up-front costs, motivating interest in forming cores via additive manufacturing. While industrial additive manufacturing technologies enable a faster and more iterative ceramic core manufacturing process, this efficiency comes with the high per-unit manufacturing costs of additive methods. To determine the quantities for which additive manufacturing is more economical than traditional hot injection and pressing methods, an economic analysis of the aircraft engine and investment casting markets is conducted. Current technical capabilities of several additive methods for forming ceramic cores are compared, and Stereolithography and Digital Light Processing are found to be suitable for ceramic core manufacturing. The economic advantage of using viable additive manufacturing methods is assessed using publicly available financial, maintenance, and aircraft fleet size data in a manufacturing cost model. When considering experience curve effects, the model shows that for a single core design, additive manufacturing is economical at quantities below 1,900 cores, or about 16 high-pressure turbine stage sets. When considering the multiple core designs needed to satisfy demand across all commercial aircraft in use, the model shows that additive manufacturing is economical at quantities below 720,000 cores, or 14% of the total core market demand in 2019. This motivates the use of additive manufacturing for new core design development and testing, as well as for the maintenance of older engine designs, while reaffirming the use of hot injection and pressing techniques for production-level manufacturing and maintenance.
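The core trade-off reduces to a classic fixed-versus-variable-cost break-even; a toy calculation with invented dollar figures (not the thesis's data) looks like this:

    # Additive wins while q * unit_additive stays below tooling + q * unit_trad.
    tooling_cost = 300_000.0      # up-front hot-injection tooling ($, invented)
    unit_cost_traditional = 50.0  # per-core cost once tooling exists ($)
    unit_cost_additive = 200.0    # per-core cost of 3D printing ($)

    breakeven = tooling_cost / (unit_cost_additive - unit_cost_traditional)
    print(f"additive is economical below ~{breakeven:.0f} cores")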
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>External Network Manufacturing Capacity Design and Procurement in the Pharmaceutical Industry</title>
<link href="https://hdl.handle.net/1721.1/151688" rel="alternate"/>
<author>
<name>Hoxha, Ori</name>
</author>
<id>https://hdl.handle.net/1721.1/151688</id>
<updated>2023-08-01T04:00:14Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">External Network Manufacturing Capacity Design and Procurement in the Pharmaceutical Industry
Hoxha, Ori
Pharmaceutical companies have typically manufactured therapeutic drugs internally; however, the increased rate of innovation in the last decade has significantly pushed the balance toward external manufacturing. Moreover, the consolidation of contract manufacturing organizations (CMOs) has generated a need for new operating models between the parties. In this thesis we study supply chain performance as a function of manufacturing asset cross-validations and of the contract structure of the capacity reservation agreement. Our goal is to identify ways of working between pharmaceutical companies and CMOs that close the flexibility gap between a fully internal supply chain and one with external stages, while maintaining the cost advantages of the latter.&#13;
&#13;
We assess small-molecule manufacturing capacity constraints using a linearized, multi-objective optimization model. Stochastic simulations reveal that connecting all products and assets in the portfolio through cross-validations reduces the supply shortfall by 6%, while limiting the impact on a single product. However, in order not to be performance-constrained by the need to have net zero demand fluctuations across all products in the typical 24-month window between ordering and delivery, we propose an option contract-driven capacity reservation model. Such a procurement construct allows the pharmaceutical company to hedge against potential demand downside, while making potential upside accessible at constant cost of goods sold. Moreover, option contracts allow the CMO to increase its 10-year net present value per contract by 50%, at a reduced effective order lead time of 12 months.
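To illustrate the option-contract mechanics described above in the simplest terms, here is a stylized cost comparison under random demand; the demand distribution, fees, and prices are invented for illustration and are not the thesis's figures.

    # Capacity options: pay a reservation fee per unit, exercise only what
    # demand requires, and cover any shortfall at a higher spot price.
    import numpy as np

    rng = np.random.default_rng(3)
    demand = rng.normal(1000, 250, size=10_000).clip(min=0)   # units

    reserved = 1200            # capacity units reserved via options
    fee, exercise = 2.0, 8.0   # $ per unit reserved / exercised
    spot = 15.0                # $ per unit bought without a reservation

    used = np.minimum(demand, reserved)
    shortfall = demand - used
    cost = reserved * fee + used * exercise + shortfall * spot
    print("mean cost with options:", round(cost.mean()))
    print("mean cost all-spot:   ", round((demand * spot).mean()))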
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Integrated Energy Modeling Tool for Electric and Gas Infrastructure Decision Support</title>
<link href="https://hdl.handle.net/1721.1/151687" rel="alternate"/>
<author>
<name>Galindez de Jesus, Francisco J.</name>
</author>
<id>https://hdl.handle.net/1721.1/151687</id>
<updated>2023-08-01T03:45:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Integrated Energy Modeling Tool for Electric and Gas Infrastructure Decision Support
Galindez de Jesus, Francisco J.
This dissertation compares the total yearly cost to customers of a gas utility company fully electrifying heat versus utilizing hydrogen blending, with a focus on infrastructure costs and taking into account the current cost of green hydrogen. Previous research has separately discussed the implications and costs of both hydrogen blending and electrification: the former leads to increased safety risks at high blend rates and minimal or low additional risk at low blend rates, while the latter shows strong decarbonization capabilities. However, the two have not been compared directly in a case study format.&#13;
&#13;
The infrastructure costs associated with fully electrifying heat are substantial, including the installation of heat pumps and the associated electrical infrastructure. In contrast, the infrastructure costs associated with hydrogen blending are relatively low. We use 2022 company and customer data to model the cost to upgrade infrastructure to support the additional electric load imposed by the electrification of heating. This cost is added to the energy cost of this new method of heating, taking into consideration energy transformation losses. While not a cost factor, the risks imposed by hydrogen blending are analyzed as a "go/no-go" criterion. The thesis also examines the thermodynamic compatibility of hydrogen blends with existing natural gas systems and piping.&#13;
&#13;
Our analysis suggests that hydrogen blending is likely to result in a lower cost to customers for utilities looking to decarbonize their heating systems. While the current cost of green hydrogen is high, it is expected to decrease with further adoption of hydrogen. Moreover, the gradual transition facilitated by hydrogen blending can minimize the overall cost impact on customers. We find that the risk imposed by hydrogen blending can be mitigated at the target blending rate of 20%, although margins to risks such as fires, explosions, and pipeline brittle fracture are reduced. In conclusion, the decision between fully electrifying heat and utilizing hydrogen blending as a means of decarbonizing heat requires careful consideration of the associated costs, risks, and how each helps to achieve company strategy. Our findings have important implications for company executives, who can use this information to determine how the customer will be affected by major strategy decisions, just one aspect to be considered out of many before making the final decision for a given city or region.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Inference of the Novel Coronavirus 2019 in Patients fitted with Boston Scientific Medical Hardware</title>
<link href="https://hdl.handle.net/1721.1/151686" rel="alternate"/>
<author>
<name>Ayane, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/151686</id>
<updated>2023-08-01T03:57:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Inference of the Novel Coronavirus 2019 in Patients fitted with Boston Scientific Medical Hardware
Ayane, Daniel
As Boston Scientific’s Rhythm Management business is challenged by an increasingly commoditized market, it is important to find new opportunities to diversify the products and services that the company offers. Traditionally, as a medical device manufacturing company, this differentiation comes in the form of hardware features, but in the wake of a data revolution, the company seeks opportunities to diversify beyond hardware.&#13;
&#13;
By utilizing Boston Scientific’s physiological time series data from Heart Failure therapy devices such as pacemakers, we aim to determine if an algorithm can be built to anticipate worsening COVID-19 symptoms in real time in patients and therefore provide them with better healthcare solutions by intervening in a timely manner.&#13;
&#13;
Since the study includes a relatively small number of patients with clinically established COVID-19 labels, we leverage the power of semi-supervised learning to extract useful signals and characterize the profile of COVID-19 in Boston Scientific Heart Failure patients. Specifically, we utilize constrained k-means clustering to understand whether there are any cardiovascular signals associated with COVID-19 in heart failure patients, and then create pseudo-labels that can be used to train an LSTM in a supervised fashion. We produce two models, with the best model achieving a median alert time of 3.8 days, an unexpected alert rate of 3.8%, 93.3% specificity, and 99.7% sensitivity.&#13;
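As a schematic of the pseudo-labeling step, the sketch below clusters synthetic daily physiology features and reads the cluster assignments off as pseudo-labels; plain k-means stands in for the constrained variant used in the thesis, and the features and data are invented.

    # Cluster physiological features, then treat cluster assignments as
    # pseudo-labels for supervised sequence-model training.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    # Per-patient-day features, e.g. [heart rate, respiration, activity]
    X = np.vstack([rng.normal([70, 16, 0.8], 1.0, size=(200, 3)),   # typical
                   rng.normal([88, 24, 0.3], 1.0, size=(40, 3))])   # COVID-like

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    pseudo_labels = km.labels_
    # These pseudo-labels would next supervise a sequence model (the LSTM
    # mentioned above) over each patient's daily feature sequence.
    print(np.bincount(pseudo_labels))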
&#13;
This study is meant to be a proof of concept to help define a future product that can be rolled out across Boston Scientific’s LATITUDE product line.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performance Enhancements to Visual-Inertial SLAM for Robots and Autonomous Vehicles</title>
<link href="https://hdl.handle.net/1721.1/151684" rel="alternate"/>
<author>
<name>Abate, Marcus</name>
</author>
<id>https://hdl.handle.net/1721.1/151684</id>
<updated>2023-08-01T03:23:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Performance Enhancements to Visual-Inertial SLAM&#13;
for Robots and Autonomous Vehicles
Abate, Marcus
Spatial perception is a key enabler for effective and safe operation of robots and autonomous vehicles in unstructured environments. Two key components of a complete spatial perception system are identifying where the robot is in space and constructing a representation of the world around it. In this thesis, we study Visual-Inertial Simultaneous Localization and Mapping (VI-SLAM) and present several findings on its application to a variety of robotic platforms to obtain globally consistent localization for a robot as well as a dense map of its surroundings. In particular, we extend Kimera, an open-source VI-SLAM pipeline, to be more effective in traditional use-cases (e.g., stereo-inertial VI-SLAM) as well as more broadly applicable to different platforms and sensor modalities.&#13;
&#13;
Our first contribution is to present a system built around Kimera for autonomous valet parking of self-driving cars, and test on real-world self-driving car datasets. This system uses a modified version of Kimera to support multi-camera VI-SLAM and perform dense free-space mapping using multiple cameras with non-overlapping field of view. Our second contribution is to describe recent updates to Kimera and showcase their beneficial effect on localization and mapping performance, while also comparing against the state of the art on extensive datasets collected on a variety of platforms. Finally, we present a novel method for detecting and tracking humans in the scene in order to build 3D Dynamic Scene Graphs for high-level perception tasks, and evaluate our method in a photorealistic simulation environment. We conclude by commenting on the advantages of Kimera and identifying areas for future work.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Towards Coupled Nonhydrostatic-Hydrostatic Hybridizable Discontinuous Galerkin Method</title>
<link href="https://hdl.handle.net/1721.1/151681" rel="alternate"/>
<author>
<name>Saravanakumar, Aditya Karthik</name>
</author>
<id>https://hdl.handle.net/1721.1/151681</id>
<updated>2023-08-01T04:04:10Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Towards Coupled Nonhydrostatic-Hydrostatic Hybridizable Discontinuous Galerkin Method
Saravanakumar, Aditya Karthik
Numerical modelling of ocean physics is essential for multiple applications, from scientific inquiry and climate change to renewable energy, transport, autonomy, fisheries, water harvesting, tourism, communication, conservation, planning, and security. However, the wide range of scales and interactions involved in ocean dynamics makes numerical modelling challenging and expensive. Many regional ocean models resort to a hydrostatic (HS) approximation that significantly reduces the computational burden. A remaining challenge is to capture and study local ocean phenomena involving complex dynamics over a broader range of scales, from regional to small, while resolving nonlinear internal waves, subduction, and overturning. Such dynamics require multi-resolution non-hydrostatic (NHS) ocean models. The main computational cost of NHS models is known to arise from solving a globally coupled elliptic PDE for the NHS pressure. Optimally reducing this cost so that NHS dynamics are resolved where needed is the motivation for this work.&#13;
&#13;
We propose a new multi-dynamics model to decompose a domain into NHS and HS dynamic regions and solve the corresponding models in their subdomains, reducing the cost associated with the NHS pressure solution step. We extend a high-order NHS solver developed using the hybridizable discontinuous Galerkin (HDG) finite element methodology by taking advantage of the local and global HDG solvers for combining HS with NHS solvers. The multi-dynamics is derived, and the first version is implemented in the HDG framework to quantify computational costs and evaluate accuracy using several analyses. We first showcase results on Rayleigh Taylor instability-driven striations to evaluate computational savings and accuracy compared to the standard NHS HDG and finite-volume solvers. We highlight and discuss sensitivities and performance. Finally, we explore parameters that can be used to identify domain regions exhibiting NHS behaviour, allowing the algorithm to dynamically evolve the NHS and HS subdomains.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Off-Lattice Kinetic Monte Carlo Framework For Long-Time Atomistic Simulations</title>
<link href="https://hdl.handle.net/1721.1/151680" rel="alternate"/>
<author>
<name>Luzzatto, Julien L.</name>
</author>
<id>https://hdl.handle.net/1721.1/151680</id>
<updated>2023-08-01T03:01:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">An Off-Lattice Kinetic Monte Carlo Framework&#13;
For Long-Time Atomistic Simulations
Luzzatto, Julien L.
The goal of this thesis is to develop an off-lattice Kinetic Monte Carlo (KMC) framework to simulate the atomistic dynamics of materials at extreme conditions over long time scales. Despite the dramatic increase in computational power over the last few decades, rigorous approaches such as classical Molecular Dynamics (MD) techniques cannot access engineering and experimental time scales due to the fundamental scaling limitation imposed by atomic vibrations. KMC approaches are powerful stochastic computational techniques that focus on the simulation of rare atomistic events in order to analyze the coarse-grained dynamics of condensed matter systems and replicate non-equilibrium phenomena in a statistical fashion. However, their application to problems at extreme conditions, such as those encountered in materials science under high pressure, temperature, and radiation, has been limited by the complexity of atomistic interactions, by the variability and instability of underlying structures, and by the computational cost of simulating large systems over sufficiently long time scales.&#13;
&#13;
To address such challenges, this thesis proposes an off-lattice, modular and scalable KMC framework that features adaptive inferred structures, efficient process sampling and dynamic rate constant calculations, together with the corresponding Julia implementation. The developed KMC framework is justified theoretically, described step-by-step methodologically, and then validated against MD results for early-time dynamics.
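For readers new to KMC, the heart of any such framework is the standard rejection-free event loop sketched below; the rates and event list are arbitrary placeholders, and the thesis's contribution lies in building and updating these ingredients off-lattice.

    # Rejection-free KMC: pick an event with probability proportional to its
    # rate, then advance the clock by an exponential waiting time.
    import numpy as np

    rng = np.random.default_rng(2)
    rates = np.array([1.0, 0.1, 0.01])   # rate constants of available events
    t = 0.0
    for _ in range(10):
        total = rates.sum()
        event = rng.choice(len(rates), p=rates / total)   # select an event
        t += rng.exponential(1.0 / total)                 # advance system clock
        # ...apply the selected event, then rebuild the (off-lattice) event
        # catalog and rates around the new configuration...
        print(f"t = {t:.3f}, executed event {event}")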
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Hybrid Testing: Combining Static Analysis and Directed&#13;
Fuzzing</title>
<link href="https://hdl.handle.net/1721.1/151679" rel="alternate"/>
<author>
<name>Shields, Peyton</name>
</author>
<id>https://hdl.handle.net/1721.1/151679</id>
<updated>2023-08-01T03:51:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Hybrid Testing: Combining Static Analysis and Directed&#13;
Fuzzing
Shields, Peyton
New CVEs are discovered each year, and their underlying bugs leave applications vulnerable to exploitation. Software is still frequently written in bug-prone languages, e.g. C and C++, and a single missed check during manual testing can result in vulnerabilities. Existing automated testing tools are limited in scope (fuzzing) or have a high false-positive rate (static analysis). Without improved automated testing, it can be challenging for developers to debug large, complex codebases. In this paper, Hybrid Testing is presented as a solution. Hybrid Testing combines static and dynamic analyses, leveraging static analysis to perform complex reasoning about logic, memory management, and concurrency. It introduces a novel orchestration system which allows us to automatically verify the output of static analysis tools using directed fuzzing. Hybrid Testing is the first vulnerability detection technique with full codebase coverage and no false positives. It can be seamlessly integrated into the development cycle and scales well to large codebases. This work details the design and implementation of Hybrid Testing and evaluates its performance across a corpus of open-source C and C++ applications in the Magma benchmark. Hybrid Testing aims to promote more secure software through rigorous testing, making it easier for developers to detect security issues. We demonstrate that Hybrid Testing can find vulnerabilities up to 25% faster, with 17% higher accuracy (when detecting additional bugs), than current automated testing strategies.
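Schematically, the orchestration reduces to a confirm-or-discard loop over static findings; the skeleton below is a hypothetical sketch, and run_static_analyzer/run_directed_fuzzer are invented placeholders rather than real tool interfaces.

    # Hypothetical orchestration: each static finding is handed to a directed
    # fuzzer that tries to reproduce it; confirmed findings carry a crash input.
    from dataclasses import dataclass

    @dataclass
    class Finding:
        file: str
        line: int
        kind: str          # e.g. "buffer-overflow"

    def run_static_analyzer(target_dir):
        # Placeholder: would invoke an analyzer and parse its report.
        return [Finding("src/parse.c", 120, "buffer-overflow")]

    def run_directed_fuzzer(finding, budget_s):
        # Placeholder: would aim a directed fuzzer at finding.file:finding.line
        # and return a crashing input if found within the time budget.
        return None

    confirmed, unconfirmed = [], []
    for f in run_static_analyzer("."):
        crash = run_directed_fuzzer(f, budget_s=600)
        (confirmed if crash else unconfirmed).append(f)
    print(f"{len(confirmed)} confirmed, {len(unconfirmed)} not reproduced")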
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing BREeze - a High-Performance Regular Expression Library Using Code Generation with BuildIt</title>
<link href="https://hdl.handle.net/1721.1/151678" rel="alternate"/>
<author>
<name>Mitrovska, Tamara</name>
</author>
<id>https://hdl.handle.net/1721.1/151678</id>
<updated>2023-08-01T03:20:13Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Implementing BREeze - a High-Performance Regular Expression Library Using Code Generation with BuildIt
Mitrovska, Tamara
Regular expression matching is a very common problem in software engineering, with applications in text processing, text searching, data scraping, syntax highlighting, deep packet inspection in networks, etc. Due to the varying complexity of regular expressions, having one general approach to match all types of expressions is usually not enough to get the needed performance for software applications. Many modern regular expression engines have tried to solve this problem by combining different algorithms and optimization techniques, which in most cases results in very complicated and large codebases. In response, we introduce BREeze, a fully functional regular expression library implemented in just around 1,500 lines of code with performance comparable to modern regular expression engines. BREeze is implemented on top of BuildIt, a multi-stage code generation framework that makes it possible to generate high-performance, specialized code while keeping the implementation simple.
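To convey the multi-stage code generation flavor (BuildIt stages C code; this toy stands in with generated Python), here is a sketch that specializes a matcher for one literal pattern ahead of time:

    # Stage 1: emit source text specialized to the pattern.
    # Stage 2: compile it with exec() into a fast, pattern-specific matcher.
    def compile_literal_matcher(pattern):
        checks = " and ".join(
            f"s[i + {k}] == {ch!r}" for k, ch in enumerate(pattern))
        src = (
            f"def match(s):\n"
            f"    n, m = len(s), {len(pattern)}\n"
            f"    for i in range(n - m + 1):\n"
            f"        if {checks}:\n"
            f"            return i\n"
            f"    return -1\n")
        ns = {}
        exec(src, ns)
        return ns["match"]

    match_abc = compile_literal_matcher("abc")
    print(match_abc("xxabcxx"))   # prints 2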
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Configurable Online Multi-Tiered Storage in a Database Management System</title>
<link href="https://hdl.handle.net/1721.1/151677" rel="alternate"/>
<author>
<name>DaCosta, Howard</name>
</author>
<id>https://hdl.handle.net/1721.1/151677</id>
<updated>2023-08-01T03:44:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Configurable Online Multi-Tiered Storage in&#13;
a Database Management System
DaCosta, Howard
Businesses today produce data items on the order of millions daily. This is especially true in cloud environments, where much of this data comes in the form of logs and metrics about the performance and status of components in cloud configurations. Maintaining efficient data storage and retrieval alongside growing customer data capacity is very challenging. One reason is that newer data tends to be accessed more frequently, while older data needs to be archived for future analysis. Another is that maintaining large amounts of data on fast storage disks is very costly. One approach to this problem is a tiered storage system, where new data is allocated to faster storage tiers and older data is pushed to lower tiers with slower retrieval times. This thesis presents a fully online and configurable design and implementation for this in a database management system (DBMS) [1, 2], which has been difficult in the past due to two key constraints: the immutability of its columns and its lack of atomicity for sub-partition-level operations. Without atomicity, there are no mechanisms in place that guarantee that a tenant’s data within a partition is moved or deleted completely, which can cause undetermined states that are difficult to identify and resolve. With the immutability of columns, data must be copied and inserted into other tiers, which raises the problem of duplicate data across tiers when a tenant issues queries. While these constraints are the exact optimizations that make this particular DBMS so performant for large analytical uses, they are the key features that need to be redesigned in building this system. The proof of concept developed here satisfies all of these requirements with an ingestion rate of 1 TB per day, minimal overhead, and about 70% in projected savings per instance, which could amount to hundreds of thousands of dollars saved per month in large production installations.
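The tiering policy itself can be as simple as an age threshold per tier; a minimal sketch with invented tier names and thresholds:

    # Newer partitions stay on fast storage; older ones move down.
    from datetime import datetime, timedelta, timezone

    TIERS = [("ssd", timedelta(days=7)),
             ("hdd", timedelta(days=90)),
             ("cold", None)]           # None: catch-all bottom tier

    def tier_for(partition_created_at, now=None):
        now = now or datetime.now(timezone.utc)
        age = now - partition_created_at
        for name, limit in TIERS:
            if limit is None or age <= limit:
                return name

    created = datetime.now(timezone.utc) - timedelta(days=30)
    print(tier_for(created))   # "hdd"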
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Measuring Grit in NFL Cornerbacks using Statistical Analysis</title>
<link href="https://hdl.handle.net/1721.1/151676" rel="alternate"/>
<author>
<name>Kingston, Cole</name>
</author>
<id>https://hdl.handle.net/1721.1/151676</id>
<updated>2023-08-01T03:01:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Measuring Grit in NFL Cornerbacks using Statistical Analysis
Kingston, Cole
Using pass-play tracking data from the 2018 National Football League (NFL) season, I compiled a Grit Score that measures cornerback responses to an adverse result on a play. I calculated this Grit Score using whether a cornerback allowed their opposing receiver to catch the ball, measuring the resulting change in performance. When comparing performance, I used the difference in average distance between the cornerback and the opposing receiver to compile one score for each player in the NFL. I validated my calculations with Pro Football Focus Coverage Ratings and was able to classify players into six categories based on talent and Grit Score. Overall, I found that most NFL players have high grit, or play consistently through adversity, which helps explain why they have made it to the highest level of football. NFL coaches and general managers prefer players whose performance increases following a bad event, as those players tend to stay in the NFL longer than those whose performance decreases.
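A stripped-down version of this computation, with hypothetical column names and an assumed sign convention (lower separation means tighter coverage), might look like this:

    # Compare each cornerback's average separation on plays following an
    # allowed catch against their overall average separation.
    import pandas as pd

    plays = pd.DataFrame({
        "cb": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "allowed_catch_prev": [0, 1, 0, 1, 0, 1, 0, 1],
        "separation_yd": [2.1, 1.8, 2.3, 2.0, 3.0, 3.8, 2.9, 4.1],
    })

    base = plays.groupby("cb")["separation_yd"].mean()
    after_adverse = (plays[plays["allowed_catch_prev"] == 1]
                     .groupby("cb")["separation_yd"].mean())
    grit = base - after_adverse   # positive: tighter coverage after adversity
    print(grit)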
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Bridging the Gap Between Real-time Video and Backlogged Traffic Congestion Control</title>
<link href="https://hdl.handle.net/1721.1/151675" rel="alternate"/>
<author>
<name>Karimi, Pantea</name>
</author>
<id>https://hdl.handle.net/1721.1/151675</id>
<updated>2023-08-01T04:03:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Bridging the Gap Between Real-time Video and Backlogged Traffic Congestion Control
Karimi, Pantea
Real-time video applications, such as video conferencing, have become essential to our daily lives, and ensuring reliable and high-quality video delivery in the face of network fluctuation and resource constraints is critical. However, video congestion control algorithms have been criticized for their sub-optimal performance in managing network congestion and maintaining satisfactory video quality and latency. At the same time, state-of-the-art congestion control algorithms have demonstrated remarkable performance improvements, effectively addressing network congestion challenges and enhancing the overall quality of data transmission. In this work, we first demonstrate why there is such a gap between the performance of congestion control schemes on backlogged flows compared to real-time video streams. Second, we present Dumbo, a design for reshaping video traffic to look like backlogged traffic, thus enabling state-of-the-art delay-sensitive congestion control algorithms for real-time video. We implemented Dumbo atop WebRTC and evaluated it on emulated network conditions using real-world cellular network traces. Our results show that Dumbo, in comparison with GCC, achieves a 1.5 dB improvement in PSNR, a 1.6 dB improvement in SSIM, 100 ms lower frame latency, 35x faster convergence time, a 16% increase in video bitrate, a 32% increase in network utilization, and a 4x reduction in network queueing delay.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Characterizing the Energy Requirement of&#13;
Computer Vision</title>
<link href="https://hdl.handle.net/1721.1/151673" rel="alternate"/>
<author>
<name>Edelman, Daniel</name>
</author>
<id>https://hdl.handle.net/1721.1/151673</id>
<updated>2023-08-01T03:09:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Characterizing the Energy Requirement of&#13;
Computer Vision
Edelman, Daniel
The energy requirements of neural network learning are growing at a rapid rate. Increased energy demands have created a global need to improve the energy efficiency of neural network learning. This thesis aims to establish a baseline for how adjusting basic parameters can affect energy consumption in neural network learning on computer vision tasks. I catalogued the effects of various adjustments, from simple batch size changes to more complicated hardware configurations (such as power capping). Findings include that switching from a single-precision model to a mixed-precision model can reduce energy consumption by nearly 40%. Additionally, power capping the GPU can reduce energy cost by a further 10%.
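For concreteness, here is a minimal mixed-precision training step in PyTorch; the model, data, and sizes are placeholders, and GPU power capping is done outside Python (e.g., with nvidia-smi -pl).

    # One mixed-precision training step: autocast runs the forward pass in
    # reduced precision; GradScaler keeps fp16 gradients from underflowing.
    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Linear(512, 10).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

    x = torch.randn(64, 512, device=device)
    y = torch.randint(0, 10, (64,), device=device)

    opt.zero_grad()
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = torch.nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()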
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>An Introductory Low-level Programming Course for Students with a Python Background</title>
<link href="https://hdl.handle.net/1721.1/151672" rel="alternate"/>
<author>
<name>Quaratiello, Grace</name>
</author>
<id>https://hdl.handle.net/1721.1/151672</id>
<updated>2023-08-01T04:08:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">An Introductory Low-level Programming Course for&#13;
Students with a Python Background
Quaratiello, Grace
The study of C and assembly language can provide valuable insight about the innate nature of computing systems and higher level programming languages. However, before September 2022, the MIT Department of Electrical Engineering and Computer Science (MIT EECS) had not required students to take any class that covers this material and these relationships. The classes included in the introductory programming sequence taken by most MIT EECS students place a stronger emphasis on high-level languages such as Python, which abstract away the interactions that a program must have with memory. Previously, if C had been introduced in an introductory-level class, it was one of several simultaneous concepts being taught to the students and therefore was not explored in depth. In September 2022, MIT EECS revised the class requirements for two of its degrees, Electrical Engineering and Computer Science (Course 6-2) and Computer Science and Engineering (Course 6-3) [1] to require a six-unit introductory course that focuses on low-level programming using C and assembly language. This thesis focuses on the establishment of this introductory low-level programming class intended for students positioned early in the EECS curriculum. Students taking this class study C and assembly language so that they can enter later coursework with both the ability to use these programming languages and a basic understanding of computing systems and associated constraints.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Distributed Monte Carlo Tree Search With Applications To Chip Design</title>
<link href="https://hdl.handle.net/1721.1/151671" rel="alternate"/>
<author>
<name>Jones, Cooper</name>
</author>
<id>https://hdl.handle.net/1721.1/151671</id>
<updated>2023-08-01T03:13:29Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Distributed Monte Carlo Tree Search With Applications To Chip Design
Jones, Cooper
Monte Carlo Tree Search is a classic method in AI that builds up a search tree asymmetrically using random rollouts on a game tree. The work detailed in this thesis expands upon traditional implementations by fully distributing nodes onto different physical machines while keeping them in constant communication. The ability to distribute work to other machines is highly desirable: it allows users to save on single-computer resources, enables an almost arbitrary level of scaling, and allows the processing of states which previously would have been too large to handle realistically on a single computer. When applied to the problem of automating the design of Printed Circuit Boards (PCBs) from just a list of desired board specifications, this fully distributed search allows increased search breadth and depth. This expands the computational limits of each action applied to the state, increasing the probability of finding an improved final state compared to running the search on one physical machine. In this thesis, we discuss our motivating problem and the infrastructure changes necessary to enable this increase in capability. We show results highlighting the potential improvements these changes will have on the process of generating a PCB design and identify significant areas for improvement.
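At the core of any MCTS variant, distributed or not, sits the UCT selection rule; a compact sketch follows (the tree contents are made up, and in the distributed setting described above the child statistics would live on remote machines).

    # UCT: balance exploitation (average value) with exploration (visit counts).
    import math, random

    class Node:
        def __init__(self):
            self.visits, self.value, self.children = 0, 0.0, {}

    def uct_select(node, c=1.4):
        def score(child):
            if child.visits == 0:
                return float("inf")        # always try unvisited children first
            exploit = child.value / child.visits
            explore = c * math.sqrt(math.log(node.visits) / child.visits)
            return exploit + explore
        return max(node.children.values(), key=score)

    root = Node(); root.visits = 10
    for action in ("left", "right"):
        child = Node(); child.visits, child.value = 5, random.random() * 5
        root.children[action] = child
    best = uct_select(root)
    print(best.value / best.visits)        # mean value of the selected child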
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fair, Robust, and Calibrated Deep Learning with Heavy-Tailed Subgroups</title>
<link href="https://hdl.handle.net/1721.1/151670" rel="alternate"/>
<author>
<name>Hampton, Lelia Marie</name>
</author>
<id>https://hdl.handle.net/1721.1/151670</id>
<updated>2023-08-01T03:19:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Fair, Robust, and Calibrated Deep Learning with Heavy-Tailed Subgroups
Hampton, Lelia Marie
To deploy safe machine learning systems in the real world, we must ensure they are fair, robust, and calibrated. However, heavy tails pose a challenge to this mandate, especially since real-world data is often imbalanced and marginalized subgroups tend to be underrepresented. To move toward safer systems, we present two studies, on fair pre-processing and on ensemble learning, respectively. We show that fair pre-processing comes with a fairness-robustness-calibration tradeoff, and we present a novel adaptive sampling algorithm to overcome this tradeoff. Furthermore, we demonstrate that ensemble learning on its own increases the fairness, robustness, and calibration of machine learning models. The adaptive sampling algorithm and ensemble learning present opportunities for practitioners to overcome this tradeoff in practice.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Concentration Inequalities for Dependent Random&#13;
Variables on Bayesian Networks</title>
<link href="https://hdl.handle.net/1721.1/151669" rel="alternate"/>
<author>
<name>Yao, Rui</name>
</author>
<id>https://hdl.handle.net/1721.1/151669</id>
<updated>2023-08-01T03:10:40Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Concentration Inequalities for Dependent Random&#13;
Variables on Bayesian Networks
Yao, Rui
This thesis presents a theoretical study of concentration results for functions of dependent random variables defined on a Bayesian network. We provide several concentration inequality results under the assumption that the function is Lipschitz or has bounded differences. In addition, we illustrate the concentration of the maximum likelihood estimator for some learning models. We also show the optimality of certain results and compare them with results in other relevant literature.
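For orientation, the classical independent-case baseline that results of this kind generalize is McDiarmid's bounded-differences inequality:

    \[
    \Pr\bigl( f(X_1,\dots,X_n) - \mathbb{E}[f] \ge t \bigr)
      \le \exp\!\left( -\frac{2t^2}{\sum_{i=1}^n c_i^2} \right),
    \]

where the \(X_i\) are independent and \(f\) changes by at most \(c_i\) when its \(i\)-th argument changes; the thesis's results concern the analogous bounds when independence is replaced by the dependence structure encoded by a Bayesian network.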
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Halide in Molecular Dynamics</title>
<link href="https://hdl.handle.net/1721.1/151668" rel="alternate"/>
<author>
<name>Gayle Jr., Ricardo</name>
</author>
<id>https://hdl.handle.net/1721.1/151668</id>
<updated>2023-08-01T03:35:04Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Halide in Molecular Dynamics
Gayle Jr., Ricardo
In many fields, especially biology and chemistry, it is important to understand how a collection of particles will interact with each other over some period of time. If only managing a system of two particles, it is simple enough to calculate the final positions of the atoms given their properties and the outside forces placed upon them. However, it is often the case that the system’s size is several magnitudes larger; therefore, the task is handed off to computers and simulators.&#13;
&#13;
Molecular dynamics, or MD, simulations tend to be extremely expensive, taking several weeks to compute less than a second’s worth of real time. Two significant reasons MD simulations are time-intensive are the complex loop structures and the math required to advance each time step. More tools and research are constantly being developed to increase the performance of these simulations.&#13;
&#13;
In this thesis we introduce a tool from the image processing domain, Halide, and argue that Halide is a qualified candidate to efficiently implement MD simulations in the future. We rewrote a potential in Halide and achieved only a 20% slowdown serially, which we are confident can reach parity with minimal changes to the code, and an over 300% speedup when running in parallel. While beginning to work with Halide within its limitations was challenging, we were still able to accomplish this performance and versatility while writing 47% less code. Halide also makes the transformation to parallel scheduling trivial, whereas this is not the case in the original implementation. Halide was not able to represent all of the loop structures we wanted; however, we suggest several additions and changes to Halide to make it more suitable for MD.
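For context, the kind of pairwise-potential loop at issue looks as follows in plain numpy (a generic Lennard-Jones sketch, not the thesis's potential); the thesis expresses an analogous loop nest in Halide so that scheduling and parallelization can be changed independently of the math.

    # Total Lennard-Jones energy over all particle pairs.
    import numpy as np

    rng = np.random.default_rng(4)
    pos = rng.uniform(0, 10, size=(64, 3))     # particle positions

    def lj_energy(pos, eps=1.0, sigma=1.0):
        diff = pos[:, None, :] - pos[None, :, :]   # pairwise displacements
        r2 = (diff ** 2).sum(-1)
        iu = np.triu_indices(len(pos), k=1)        # count each pair once
        inv6 = (sigma ** 2 / r2[iu]) ** 3
        return float((4 * eps * (inv6 ** 2 - inv6)).sum())

    print(lj_energy(pos))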
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Predicting Chemical Reactions at the Mechanistic Level through Deep Reinforcement Learning</title>
<link href="https://hdl.handle.net/1721.1/151666" rel="alternate"/>
<author>
<name>Jin, Edward H.</name>
</author>
<id>https://hdl.handle.net/1721.1/151666</id>
<updated>2023-08-01T03:52:04Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Predicting Chemical Reactions at the MechanisticLevel through Deep Reinforcement Learning
Jin, Edward H.
Reaction prediction is a fundamental problem in chemistry. Previous work has mostly targeted product prediction only, without elucidating the mechanisms or elementary steps by which a reaction proceeds. Here, we attempt to predict chemical mechanisms via deep reinforcement learning.&#13;
&#13;
We first define a new type of graph molecular representation that can better keep track of electron flow and can be generalized to non-traditional bonding, such as 3-center-4-electron bonds. We then define a molecular environment Markov Decision Process (MDP) that codifies the allowed mechanistic steps and evaluates them by utilizing a thermodynamic energy oracle as the reward function. To solve this environment, we build a graph neural network-based policy and value network, where the policy network is first pre-trained on an open database of elementary radical reactions (RMechDB). Then, we use proximal policy optimization (PPO) to fine-tune the model and predict reasonable reaction mechanisms for two case studies: a radical oxidation of the terpene limonene, and a radical cyclization cascade in the synthesis of hirsutene.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Efficacy of Antibody and T cell Therapies for Highly Mutable Viruses like Human Immunodeficiency</title>
<link href="https://hdl.handle.net/1721.1/151665" rel="alternate"/>
<author>
<name>Murugan, Pranav M.</name>
</author>
<id>https://hdl.handle.net/1721.1/151665</id>
<updated>2023-08-01T03:02:18Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Efficacy of Antibody and T cell Therapies for Highly&#13;
Mutable Viruses like Human Immunodeficiency
Murugan, Pranav M.
The isolation of broadly neutralizing antibodies (bnAbs) that can neutralize diverse strains of highly mutable viruses like human immunodeficiency virus (HIV), as well as the identification of mutationally-constrained regions of the proteome that could be targeted by T cells, has led to interest in passive immunotherapies and therapeutic vaccines as promising methods for treating chronic infection. However, the feasibility of creating a sufficiently powerful therapy remains uncertain. In this work, we develop a stochastic computational model of viral dynamics to help characterize the regimes where viral control or cure may be possible. We study the efficacy of either bnAb therapy or therapeutic vaccination that elicits T cell responses targeting mutationally-constrained regions, as well as treatments that combine these two therapeutic modalities. Our results show that combination therapy has the best chance of maintaining viral control or achieving a cure. This is because administering combinations of bnAbs with broad coverage of viral strains for a sufficiently long time can potentially clear from the latent reservoir the rare strains that are likely to escape T cell responses, resulting in viral rebound. We also describe a strong relation between the outcome of treatment and the diversity of the reservoir of latently infected cells, which suggests that the best candidates for immunotherapy are those who started antiretroviral therapy shortly after infection. Importantly, we find that cure is likely to be a rare outcome, and that the average time to cure is long and independent of therapeutic modality, as it depends on the rate of activation of the latent reservoir. Our results will help guide the design of new therapeutics and provide a platform for future computational screening of the efficacy of new treatment regimes.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Proximal Gradient Algorithms for Gaussian Variational Inference: Optimization in the Bures–Wasserstein Space</title>
<link href="https://hdl.handle.net/1721.1/151664" rel="alternate"/>
<author>
<name>Diao, Michael Ziyang</name>
</author>
<id>https://hdl.handle.net/1721.1/151664</id>
<updated>2023-08-01T04:13:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Proximal Gradient Algorithms for Gaussian Variational Inference: Optimization in the Bures–Wasserstein Space
Diao, Michael Ziyang
Variational inference (VI) seeks to approximate a target distribution π by an element of a tractable family of distributions. Of key interest in statistics and machine learning is Gaussian VI, which approximates π by minimizing the Kullback–Leibler (KL) divergence to π over the space of Gaussians. In this work, we develop the (Stochastic) Forward-Backward Gaussian Variational Inference (FB–GVI) algorithm to solve Gaussian VI. Our approach exploits the composite structure of the KL divergence, which can be written as the sum of a smooth term (the potential) and a non-smooth term (the entropy) over the Bures–Wasserstein (BW) space of Gaussians endowed with the Wasserstein distance. For our proposed algorithm, we obtain state-of-the-art convergence guarantees when π is log-smooth and log-concave, as well as the first convergence guarantees to first-order stationary solutions when π is only log-smooth. Additionally, in the setting where the potential admits a representation as the average of many smooth component functionals, we develop and analyze a variance-reduced extension to (Stochastic) FB–GVI with improved complexity guarantees.
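Schematically, the composite objective and splitting scheme described above can be written (with $\pi \propto e^{-V}$ and entropy term $\mathcal{H}(q) = \mathbb{E}_q[\log q]$) as

    \mathrm{KL}(q \,\|\, \pi) = \underbrace{\mathbb{E}_q[V]}_{\text{smooth}} + \underbrace{\mathbb{E}_q[\log q]}_{\text{non-smooth}}, \qquad
    q_{k+1} = \mathrm{prox}_{h\mathcal{H}}\left(\exp_{q_k}\big(-h\,\nabla_{\mathrm{BW}} \mathbb{E}_{q_k}[V]\big)\right),

an informal rendering of one forward (gradient) step on the potential followed by a backward (proximal) step on the entropy, both taken in the Bures–Wasserstein geometry over Gaussians.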
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Deep Learning MRI-based Model for Prediction of&#13;
Clinically Significant Prostate Cancer</title>
<link href="https://hdl.handle.net/1721.1/151663" rel="alternate"/>
<author>
<name>Yang, Janice</name>
</author>
<id>https://hdl.handle.net/1721.1/151663</id>
<updated>2023-08-01T03:59:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Deep Learning MRI-based Model for Prediction of&#13;
Clinically Significant Prostate Cancer
Yang, Janice
Prostate cancer is one of the leading causes of death for men globally, despite many men being diagnosed with indolent tumors that do not warrant treatment. Increasingly, magnetic resonance imaging (MRI) is being used as a risk assessment tool before more invasive prostate biopsies are performed for patients with suspected prostate cancer. We hypothesize that we can train a deep learning model that combines multi-parametric MRI images with clinical factors to accurately predict patient risk of developing clinically significant prostate cancer. We train an image model and a combined image and clinical factors model on a set of 9391 MRIs from the Massachusetts General Brigham (MGB) hospital system, which achieve an area under the receiver-operator curve (AUROC) of 0.80 and 0.84, respectively, for 1-year prediction of clinically significant prostate cancer, surpassing current human baselines and existing risk models’ performance.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Time-Optimal Re-planning of Quadrotor Trajectories</title>
<link href="https://hdl.handle.net/1721.1/151661" rel="alternate"/>
<author>
<name>Wang, Geoffrey</name>
</author>
<id>https://hdl.handle.net/1721.1/151661</id>
<updated>2023-08-01T03:34:15Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Time-Optimal Re-planning of Quadrotor Trajectories
Wang, Geoffrey
With the rise of quadrotor drones in recent years, research and development of time-optimal trajectory planners has been pushing the boundaries of what is possible. These planners now not only exploit the full dynamics of the drone to generate aggressive trajectories but also have runtimes that allow them to generate plans in near real-time.&#13;
&#13;
This work extends current state-of-the-art time-optimal quadrotor trajectory planners to allow for on-the-fly trajectory re-planning. Given new waypoints and a previous trajectory, the planner is able to generate an updated trajectory while maintaining time optimality. &#13;
&#13;
Because the planner leverages a learned sequence-to-sequence neural network model, it is able to generate trajectories orders of magnitude faster than optimization-based approaches. This work then takes it one step further and optimizes the planner using a compiled real-time inference library (NVIDIA TensorRT). The optimized planner is demonstrated to provide a 14.84-times increase in throughput and over a 95% reduction in latency. The increase in throughput translates to better efficiency, and the reduction in latency is critical for trajectory re-planning while the drone is flying an active trajectory. Both improvements push the planner one step closer to running onboard drones themselves. &#13;
&#13;
Although most experiments were conducted on desktop class hardware, mobile chips like the NVIDIA Jetson AGX Orin were also tested to mimic the class of hardware that could be flown onboard drones.
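For readers wanting to reproduce the kind of throughput/latency comparison reported above, a generic benchmarking harness (not tied to TensorRT or the thesis's planner; infer is any inference callable) might be:

    import time
    import numpy as np

    def benchmark(infer, sample, n_warmup=50, n_iters=500):
        """Measure single-request latency percentiles and sustained throughput."""
        for _ in range(n_warmup):
            infer(sample)                  # warm up caches / JIT / engine
        latencies = []
        start = time.perf_counter()
        for _ in range(n_iters):
            t0 = time.perf_counter()
            infer(sample)
            latencies.append(time.perf_counter() - t0)
        total = time.perf_counter() - start
        lat = np.array(latencies)
        return {"p50_ms": 1e3 * np.percentile(lat, 50),
                "p99_ms": 1e3 * np.percentile(lat, 99),
                "throughput_per_s": n_iters / total}

Running this against both the baseline and the compiled planner would yield comparable throughput and latency statistics.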
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>How early can we average Neural Networks?</title>
<link href="https://hdl.handle.net/1721.1/151660" rel="alternate"/>
<author>
<name>Nasimov, Umarbek</name>
</author>
<id>https://hdl.handle.net/1721.1/151660</id>
<updated>2023-08-01T03:01:44Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">How early can we average Neural Networks?
Nasimov, Umarbek
There is a recurring observation in deep learning that neural networks can be combined simply with arithmetic averages over their parameters. This observation has led to many new research directions in model ensembling, meta-learning, federated learning, and optimization. We investigate the evolution of this phenomenon during the training trajectory of neural network models initialized from a common set of parameters (parent). Surprisingly, the benefit of averaging the parameters persists over long child trajectories from parent parameters with minimal training. Furthermore, we find that the parent can be merged with a single child with significant improvement in both training and test loss. Through analysis of the loss landscape, we find that the loss becomes sufficiently convex early on in training, and, as a consequence, models obtained by averaging multiple children often outperform any individual child.
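The core operation the abstract studies is easy to state in code; a minimal PyTorch sketch of uniform parameter averaging (an editorial illustration, not the thesis's exact procedure) is:

    import torch

    def average_state_dicts(state_dicts):
        """Uniform average of parameters from children fine-tuned off one parent."""
        return {key: torch.mean(torch.stack([sd[key].float() for sd in state_dicts]), dim=0)
                for key in state_dicts[0]}

    # merged_model.load_state_dict(average_state_dicts([c.state_dict() for c in children]))

The experiments above ask how little child training is needed before such an average still lands in a common low-loss basin.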
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unsupervised Phonetic Category Learning from&#13;
Audio and Visual Input</title>
<link href="https://hdl.handle.net/1721.1/151659" rel="alternate"/>
<author>
<name>Zhi, Sophia</name>
</author>
<id>https://hdl.handle.net/1721.1/151659</id>
<updated>2023-08-01T03:10:17Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Unsupervised Phonetic Category Learning from&#13;
Audio and Visual Input
Zhi, Sophia
Understanding how children learn the phonetic categories of their native language is an open area of research in cognitive science and child language development. However, despite experimental evidence that phonetic processing is very often a multimodal phenomenon (involving both auditory and visual cues), computational research has primarily modeled phonetic category learning as a function of only auditory input. In this thesis, I investigate whether multimodal information benefits phonetic category learning under a clustering model. Due to the lack of an appropriate dataset, I also introduce a method for creating a high-quality dataset of synthetic videos of speakers’ faces for an existing audio corpus. The clustering model, when trained and tested on audiovisual data, achieves up to a 9.1% improvement on a phoneme discrimination battery over the random baseline compared to a model trained and tested on only audio data. The audiovisual model also outperforms the audio model by up to 4.7% over the baseline when both are tested on audio-only data, suggesting that visual information guides the learner towards better clusters. Further analysis indicates that visual information benefits most, but not all, phonemic contrasts. In follow-up analyses, I investigate the learned audiovisual clusters and their relationship to auditory gestures and phones, finding that the clusters capture a unit of speech smaller than phonemes. This work demonstrates the benefit of visual information to a computational model of phonetic category learning, suggesting that children may benefit substantively by using visual cues while learning phonetic categories.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Quantitative assessment of the frictional ignition resistance of&#13;
metals in high-pressure oxygen</title>
<link href="https://hdl.handle.net/1721.1/151658" rel="alternate"/>
<author>
<name>Garcia Jimenez, Andres</name>
</author>
<id>https://hdl.handle.net/1721.1/151658</id>
<updated>2023-08-01T03:44:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Quantitative assessment of the frictional ignition resistance of&#13;
metals in high-pressure oxygen
Garcia Jimenez, Andres
In this work, we developed a material index for selecting alloys resistant to frictional ignition in high-pressure oxygen environments. A previous ignition-resistance metric proposed by the NASA White Sands Test Facility (WSTF) varies strongly and unpredictably with test conditions, limiting its usefulness. The material index developed here incorporates key material properties that influence ignition behavior, including friction coefficient, ignition temperature, and thermal effusivity. Finite element simulations were used to compute ignition temperatures for 15 alloys based on published frictional ignition data from WSTF. These values were used with the material index to construct property diagrams for ranking intrinsic frictional ignition resistance. The results demonstrate that nickel-based superalloys with low iron content are less likely to ignite under frictional heating than ferrous alloys and nickel-based superalloys with high iron content. The material index is then used to predict material performance outside of the test conditions, highlighting the effect of ambient temperature on ignition resistance. We conclude by developing an empirical relation between ignition temperature and enthalpy of oxidation which can guide the design of new ignition-resistant alloys.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Leveraging Basis Alignment to create a Generalized&#13;
Multi-Relational Graph Convolution Network in the&#13;
Federated Setting</title>
<link href="https://hdl.handle.net/1721.1/151657" rel="alternate"/>
<author>
<name>Ramirez, Nicholas</name>
</author>
<id>https://hdl.handle.net/1721.1/151657</id>
<updated>2023-08-01T04:01:06Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Leveraging Basis Alignment to create a Generalized&#13;
Multi-Relational Graph Convolution Network in the&#13;
Federated Setting
Ramirez, Nicholas
Knowledge graphs have seen a significant rise in popularity and usage in recent years, with many real-world applications taking advantage of their ability to model interlinked data easily. In general, many institutions maintain their own knowledge graphs; however, these graphs tend to suffer from incompleteness. This is due to two main reasons: knowledge is naturally distributed across institutions, and institutions are unable to share sensitive data. With this in mind, federated learning appears to be a promising solution to this problem, as it enables clients to develop a shared global model without sharing any data. This thesis aims to solve the knowledge graph completion problem by introducing a federated learning protocol for the state-of-the-art Knowledge Embedding Based Graph Convolutional Network (KE-GCN) [51]. KE-GCN was chosen for its unification of multiple graph convolutional networks and its ability to provide as much flexibility as possible for clients. As a result, my federated protocol, Fed-KE-GCN, is focused on data privacy and flexibility. In addition to Fed-KE-GCN, this thesis empirically shows that a common approach to differential privacy for deep learning, Differentially Private Stochastic Gradient Descent (DP-SGD) [2], is not viable in this domain due to the nature of graph data and the internal framework of Graph Convolutional Networks.
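For orientation, the aggregation step at the heart of any such protocol resembles the standard FedAvg update; the following is a generic sketch (Fed-KE-GCN additionally handles relation-basis alignment across clients, which is not shown):

    import torch

    def fedavg(client_states, client_sizes):
        """Size-weighted average of client model parameters for one round."""
        total = float(sum(client_sizes))
        return {key: sum((n / total) * sd[key].float()
                         for sd, n in zip(client_states, client_sizes))
                for key in client_states[0]}

No raw triples leave a client; only model parameters are exchanged, which is what makes the approach attractive for institutions holding sensitive graphs.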
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Optimizing the Allocation of Capital&#13;
Among Offensive Positions in the NFL</title>
<link href="https://hdl.handle.net/1721.1/151656" rel="alternate"/>
<author>
<name>Calvetti Jr., Paul G.</name>
</author>
<id>https://hdl.handle.net/1721.1/151656</id>
<updated>2023-08-01T03:03:00Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Optimizing the Allocation of Capital&#13;
Among Offensive Positions in the NFL
Calvetti Jr., Paul G.
Building a successful National Football League (NFL) team is a challenging task, requiring front offices to balance player selection and compensation while operating under a salary cap constraint. The salary cap represents the maximum amount a team can spend on player salaries in a given season. Effective team construction entails strategic allocation of resources across different positions to maximize performance within this budget. This paper focuses on the critical aspect of allocating salary cap resources among offensive positions to maximize team success. We introduce a novel model that considers the interplay between players at different offensive positions, as well as the variations in salaries and performance levels observed between players under rookie and veteran contracts. By framing the allocation challenge as a constrained optimization problem, we aim to help teams maximize their points per game while staying within the salary cap limit. Our model’s predictions enable us to identify the optimal distribution of resources across offensive positions, providing valuable insights for NFL front offices as they seek to allocate their salary cap to achieve maximum offensive performance and increase their chances of success on the field.
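A stylized version of this constrained optimization is straightforward to set up; the sketch below (hypothetical positional weights and a toy diminishing-returns production function, not the thesis's fitted model) maximizes expected points subject to the cap:

    import numpy as np
    from scipy.optimize import minimize

    cap = 100.0                               # salary cap in arbitrary units
    weights = np.array([3.0, 2.0, 1.5, 1.0])  # hypothetical positional values

    def neg_points(x):
        # log1p gives diminishing returns per position; negate to maximize.
        return -np.sum(weights * np.log1p(x))

    res = minimize(neg_points,
                   x0=np.full(4, cap / 4),
                   bounds=[(0.0, cap)] * 4,
                   constraints=[{"type": "ineq", "fun": lambda x: cap - np.sum(x)}])
    print(res.x)  # optimal spend per position under the cap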
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Visual Language Pretrained Multiple Instance Zero-Shot&#13;
Transfer for Histopathology Images</title>
<link href="https://hdl.handle.net/1721.1/151651" rel="alternate"/>
<author>
<name>Lu, Ming Yang (Max)</name>
</author>
<id>https://hdl.handle.net/1721.1/151651</id>
<updated>2023-08-01T04:16:57Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Visual Language Pretrained Multiple Instance Zero-Shot&#13;
Transfer for Histopathology Images
Lu, Ming Yang (Max)
Contrastive visual language pretraining has emerged as a powerful method for either training new language-aware image encoders or augmenting existing pretrained models with zero-shot visual recognition capabilities. However, existing works typically train on large datasets of image-text pairs and have been designed to perform downstream tasks involving only small- to medium-sized images, neither of which is applicable to the emerging field of computational pathology, where there are limited publicly available paired image-text datasets and each image can span up to 100,000 x 100,000 pixels. In this paper, we present MI-Zero, a simple and intuitive framework for unleashing the zero-shot transfer capabilities of contrastively aligned image and text models on gigapixel histopathology whole slide images, enabling multiple downstream diagnostic tasks to be carried out by pretrained encoders without requiring any additional labels. MI-Zero reformulates zero-shot transfer under the framework of multiple instance learning to overcome the computational challenge of inference on extremely large images. We used over 550k pathology reports and other available in-domain text corpora to pretrain our text encoder. By effectively leveraging strong pretrained encoders, our best model pretrained on over 33k histopathology image-caption pairs achieves an average median zero-shot accuracy of 70.2% across three different real-world cancer subtyping tasks.
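The multiple-instance pooling idea can be sketched compactly; the code below is an editorial illustration of top-k mean pooling over patch-text similarities (embedding shapes assumed, not taken from the paper's implementation):

    import numpy as np

    def mi_zero_predict(patch_emb, class_text_emb, topk=100):
        """Zero-shot slide label from (N, d) tile embeddings and (C, d) prompts."""
        p = patch_emb / np.linalg.norm(patch_emb, axis=1, keepdims=True)
        t = class_text_emb / np.linalg.norm(class_text_emb, axis=1, keepdims=True)
        sims = p @ t.T                                    # (N, C) cosine scores
        k = min(topk, sims.shape[0])
        pooled = np.sort(sims, axis=0)[-k:].mean(axis=0)  # top-k mean per class
        return int(np.argmax(pooled))

Pooling patch-level scores rather than embedding the whole slide is what sidesteps the 100,000 x 100,000-pixel inference problem.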
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Dynamic Neural Network for Efficient Video Recognition</title>
<link href="https://hdl.handle.net/1721.1/151649" rel="alternate"/>
<author>
<name>Pan, Bowen</name>
</author>
<id>https://hdl.handle.net/1721.1/151649</id>
<updated>2023-08-01T03:50:37Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Dynamic Neural Network for Efficient Video Recognition
Pan, Bowen
Recognizing real-world videos is a challenging task that requires the use of deep learning models. These models, however, require extensive computational resources to achieve robust recognition. One of the main challenges when dealing with real-world videos is the high correlation of information across frames. This results in redundancy in either temporal or spatial feature maps of the models, or both. The amount of redundancy largely depends on the dynamics and events captured in the video. For example, static videos typically have more temporal redundancy, while videos focusing on objects tend to have more channel redundancy.&#13;
&#13;
To address this challenge, we propose a novel approach that reduces redundancy by using an input-dependent policy to determine the necessary features for both temporal and channel dimensions. By doing so, we can identify the most relevant information for each frame, thus reducing the overall computational load. After computing the necessary features, we reconstruct the remaining redundant features from those using cheap linear operations. This not only reduces the computational cost of the model but also keeps the capacity of the original model intact.&#13;
&#13;
Moreover, our proposed approach has the potential to improve the accuracy of real-world video recognition by reducing overfitting caused by the redundancy of information across frames. By focusing on the most relevant information, our model can better capture the unique characteristics of each video, resulting in more accurate predictions. Overall, our approach represents a significant step forward in the field of real-world video recognition and has the potential to enable the development of more efficient and accurate deep learning models for this task.
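One simple instantiation of reconstructing redundant features with cheap linear operations (a generic PyTorch sketch, not the thesis's exact architecture) computes only k channels exactly and synthesizes the rest with a 1x1 convolution:

    import torch
    import torch.nn as nn

    class PartialCompute(nn.Module):
        """Compute k of c_out channels exactly; reconstruct the rest linearly."""
        def __init__(self, c_in, c_out, k):
            super().__init__()
            self.expensive = nn.Conv2d(c_in, k, kernel_size=3, padding=1)
            self.cheap = nn.Conv2d(k, c_out - k, kernel_size=1)  # linear recon

        def forward(self, x):
            main = self.expensive(x)
            return torch.cat([main, self.cheap(main)], dim=1)

An input-dependent policy network would choose k (and which channels or frames to compute exactly) per input, which is the part this sketch omits.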
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Annealing Cryogenically Irradiated High Temperature Superconductors with Current Pulses</title>
<link href="https://hdl.handle.net/1721.1/151648" rel="alternate"/>
<author>
<name>Fisher, Zoe Lilah</name>
</author>
<id>https://hdl.handle.net/1721.1/151648</id>
<updated>2023-08-01T03:43:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Annealing Cryogenically Irradiated High Temperature Superconductors with Current Pulses
Fisher, Zoe Lilah
Tokamak fusion power plants rely on electromagnets engineered from high temperature superconductors (HTS) made of Rare Earth Barium Copper Oxide (REBCO) to confine a thermonuclear-grade plasma. The HTS performance must be predictable despite the radiation damage caused by fast neutrons from fusion reactions, which degrade the REBCO microstructure and decrease the magnet’s critical current. This lowers the reactor’s achievable magnetic field, and therefore its performance. The damage, however, is not necessarily permanent. By applying a short current pulse above the critical current of the coated conductor, resistive heating briefly raises the REBCO’s temperature well above that of the surrounding cryogenic environment. This process, called annealing, heals defects and recovers some of the performance losses. Magnets are the limiting factor for tokamak lifetimes; therefore, pulse annealing could dramatically increase the economic viability of fusion energy by reducing shutdown frequency and duration.&#13;
&#13;
This experiment focuses on sending 400 A pulses through an irradiated HTS tape to identify the optimal duration for critical current recovery. Using a cryogenic proton irradiation facility capable of applying current pulses as high as 2000 A and as short as 100 ns, we found that a 400 A pulse can produce up to 400% critical current recovery with respect to the post-irradiation critical current value. The optimal length for this current pulse is 5.5 ms, which results in a maximum calculated temperature of 630 K in the REBCO microstructure. Future work will pursue measuring (rather than calculating) the temperature in the REBCO microstructure and parameterizing the maximum critical current recovery at different pulse amplitudes.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Market Mechanisms for Service Provider Operations in Advanced Air Mobility</title>
<link href="https://hdl.handle.net/1721.1/151647" rel="alternate"/>
<author>
<name>Qin, Victor L.</name>
</author>
<id>https://hdl.handle.net/1721.1/151647</id>
<updated>2023-08-01T04:14:07Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Market Mechanisms for Service Provider Operations in Advanced Air Mobility
Qin, Victor L.
The proliferation of advanced air mobility (AAM) flights in the form of vertical take-off and landing aircraft (VTOL) and uncrewed aircraft systems (UAS) in the near future will require a new air traffic management system adapted for on-demand flights flying at low altitudes and far from existing airports and aviation hubs. The FAA has proposed UAS traffic management (UTM) and urban air mobility (UAM) as two concepts of operations for AAM, where private service providers (SPs) will be responsible for managing these novel forms of air traffic alongside, but independently from, existing air traffic control services. The roles and characteristics of these new SPs are still not well defined today.&#13;
&#13;
In this work, we propose methods that can fulfill these proposed concepts. First, we present cost-aware prioritization methods, based on the second-price auction, for air traffic management within an SP's internal operations. Next, we present a Shapley value profit-sharing mechanism to incentivize cooperation between SPs in efficiently routing flights. Finally, we extend the Shapley value framework to accommodate multiple SPs in the same region of airspace, and study how the combination of airspace structure, traffic demand, and sector allocation leads to differences in profit earned between SPs. We conclude with future directions for studying and building service providers in the AAM context.&#13;
&#13;
Keywords: advanced air mobility, Shapley value, service providers, market mechanisms
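For concreteness, the exact Shapley value used for profit sharing can be computed by averaging marginal contributions over join orders; the sketch below is a textbook implementation (with value standing in for a hypothetical coalition-profit function), exponential in the number of SPs and meant only to fix ideas:

    from itertools import permutations
    from math import factorial

    def shapley_values(players, value):
        """Average marginal contribution of each player over all join orders."""
        phi = {p: 0.0 for p in players}
        for order in permutations(players):
            coalition = set()
            for p in order:
                phi[p] += value(coalition | {p}) - value(coalition)
                coalition.add(p)
        n_orders = factorial(len(players))
        return {p: total / n_orders for p, total in phi.items()}

Each SP's share then reflects how much its participation improves joint routing profit, which is what makes the split incentive-compatible for cooperation.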
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Impact Analysis and Design Development for Air-Dropped Antarctic Seismo-Geodetic Ice Penetrator</title>
<link href="https://hdl.handle.net/1721.1/151646" rel="alternate"/>
<author>
<name>Miller, Alex S.</name>
</author>
<id>https://hdl.handle.net/1721.1/151646</id>
<updated>2023-08-01T03:03:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Impact Analysis and Design Development for Air-Dropped Antarctic Seismo-Geodetic Ice Penetrator
Miller, Alex S.
Existing measurement tools for ice shelves and other glaciated regions have limited capability to measure dynamic events in remote areas. The Seismo-Geodetic Ice Penetrator (SGIP) offers a method for rapid deployment of a broadband seismometer and Global Navigation Satellite System (GNSS) positioning system designed to sense ice shelf resonant forcings caused by ocean gravity waves and atmospheric waves. Additionally, SGIP will track seismic indications of calving and rifting, facilitating better estimates of sea level rise. During operation, SGIP is dropped from an aerial vehicle, reaching a terminal velocity of 42 m s⁻¹; during impact with the snowpack surface, SGIP experiences an average acceleration of approximately 500 m s⁻². Upon impact, a fore-body section separates from the upper aft-body "flare" section and continues several meters into the ice shelf, while the aft-body remains at the surface with a set of communications antennas. The SGIP platform is compared to previously envisioned and tested penetrator systems. Impact modeling of SGIP into glacial firn is detailed, with a focus on fast simulation run-times for design exploration. Designs of snow spikes and a rigid antenna mast are detailed, analyzed, and tested. Results from a full-scale prototype hardware test in Juneau, Alaska are discussed.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Sybil: Predicting Future Lung Cancer Risk From a Single Low-Dose Chest Computed Tomography</title>
<link href="https://hdl.handle.net/1721.1/151645" rel="alternate"/>
<author>
<name>Mikhael, Peter G.</name>
</author>
<id>https://hdl.handle.net/1721.1/151645</id>
<updated>2023-08-01T03:02:03Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Sybil: Predicting Future Lung Cancer Risk From a Single Low-Dose Chest Computed Tomography
Mikhael, Peter G.
Low-dose computed tomography (LDCT) for lung cancer screening is effective, though most eligible people are not being screened. Tools that provide personalized future cancer risk assessment could focus approaches toward those most likely to benefit. We hypothesize that a deep learning model assessing the entire volumetric LDCT data could be built to predict individual risk without requiring additional demographic or clinical data. We develop a model called Sybil using LDCTs from the National Lung Screening Trial (NLST). Sybil requires only one LDCT and does not require clinical data or radiologist annotations; it can run in real-time in the background on a radiology reading station. Sybil is validated on three independent datasets: a held-out set of 6,282 LDCTs from NLST participants, 8,821 LDCTs from Massachusetts General Hospital (MGH), and 12,280 LDCTs from Chang Gung Memorial Hospital (CGMH, which included people with a range of smoking history including non-smokers). Sybil achieves areas under the receiver-operator curve for lung cancer prediction at 1-year of 0.92 (95% CI 0.88, 0.95) on NLST, 0.86 (95% CI 0.82, 0.90) on MGH and 0.94 (95% CI 0.91, 1.00) on CGMH external validation sets. Concordance indices over six years were 0.75 (95% CI 0.72, 0.78), 0.81 (95% CI 0.77, 0.85), and 0.80 (95% CI 0.75, 0.86) for NLST, MGH, and CGMH, respectively. The model is publicly available at https://github.com/reginabarzilaygroup/Sybil.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Unleashing the Power of Generative AI: The Race for Advancement and the Global Ramifications</title>
<link href="https://hdl.handle.net/1721.1/151643" rel="alternate"/>
<author>
<name>Chiang, Ian</name>
</author>
<id>https://hdl.handle.net/1721.1/151643</id>
<updated>2023-08-01T03:29:50Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Unleashing the Power of Generative AI: The Race for Advancement and the Global Ramifications
Chiang, Ian
Generative AI, including language models like ChatGPT, has had a significant impact on a wide range of industries and applications. It has created new opportunities in industries like content creation, marketing, and design thanks to its capacity to produce high-quality text, images, and other types of media. The increased use of generative AI has, however, also sparked a global arms race for supremacy in the area.&#13;
&#13;
Concerns have been raised about the potential misuse of generative AI technology, including the production of fake news, propaganda, and deepfakes, as countries and corporations compete for control over it. The creation of highly sophisticated generative AI systems has also sparked discussions about the moral and societal ramifications of making machines that can generate content on their own with little to no human input.&#13;
&#13;
Despite these worries, generative AI will probably continue to have a positive social impact in the years to come. The potential for industries to undergo a revolution, and for our interactions with media and information to change, will only grow as the technology becomes more widely available and sophisticated. As a result, it is critical that we keep a close eye on its development and application while also attempting to address any potential ethical and societal issues that may come up.&#13;
&#13;
Through this research, I will analyze a holistic view of generative AI and raise concerns about the effects of AI's growth and the global repercussions of the tension in the race for superior generative AI. Additionally, I will draw parallels from past disruptive technologies to forecast the outcome of generative AI's abrupt changes to society.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Analysis and Comparison&#13;
of the Creation of University Spin-off Startups in Deep Tech&#13;
between the United States and Japan</title>
<link href="https://hdl.handle.net/1721.1/151642" rel="alternate"/>
<author>
<name>Ito, Masumi</name>
</author>
<id>https://hdl.handle.net/1721.1/151642</id>
<updated>2023-08-01T04:21:55Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Analysis and Comparison&#13;
of the Creation of University Spin-off Startups in Deep Tech&#13;
between the United States and Japan
Ito, Masumi
Research-based universities have played a significant role in the economic growth of nations, particularly in the United States, where companies originating from these universities have generated substantial employment opportunities and revenue. &#13;
&#13;
There exists a substantial disparity in the number of spin-off companies created from these universities between the United States and Japan. Although Japan is not far behind the United States in terms of patent numbers, it significantly lags behind in successfully commercializing research outcomes through the establishment of startups.&#13;
&#13;
Therefore, this thesis focuses on the Massachusetts Institute of Technology (MIT), a leading institution in spin-off creation in the United States, and the University of Tokyo, the leading institution in Japan. The objective is to investigate how their university-based ecosystems, including university-supported venture capital initiatives and on-campus entrepreneurship programs, influence the establishment of university spin-offs. The analysis is conducted through interviews and a literature review to examine the impact of these ecosystems on the formation of university spin-off startups. &#13;
&#13;
Many of the spin-off startups emerging from research-based universities fall under the category of "deep tech" companies, which are based on long-term research outcomes and require substantial investments and development time. Consequently, a funding gap referred to as the "valley of death" arises, presenting a unique financial challenge for entrepreneurs between research invention and commercialization. It is essential for entrepreneurs to overcome this funding gap, and thus, we also investigate how university spin-offs in Japan and the United States make fundraising choices to bridge the capital gap.  &#13;
&#13;
By conducting these surveys, we aim to gain insights into the effectiveness of university-affiliated venture capital firms, university spin-off startups, and the overall university ecosystem.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Techno-Economic Analysis of Hydrogen, Electric, and Diesel Fuel in Medium- and Heavy-Duty Transportation Applications</title>
<link href="https://hdl.handle.net/1721.1/151641" rel="alternate"/>
<author>
<name>Kennington, Lindsey</name>
</author>
<id>https://hdl.handle.net/1721.1/151641</id>
<updated>2023-08-01T03:28:26Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Techno-Economic Analysis of Hydrogen, Electric, and Diesel Fuel in Medium- and Heavy-Duty Transportation Applications
Kennington, Lindsey
This paper presents a techno-economic analysis for three distinct vehicle drivetrains (hydrogen fuel-cell vehicles (FCVs), battery-electric vehicles (BEVs), and diesel vehicles (ICE-D)) across a variety of applications in the medium- and heavy-duty (MD/HD) transportation market. The primary basis for evaluating each drivetrain is vehicle total cost of ownership (TCO). This paper analyzes the primary cost categories that contribute to TCO, namely capital and operational costs, as well as incentives and subsidies. The study also addresses the external social costs of FCVs and BEVs and provides a risk analysis for each zero-emission vehicle (ZEV) drivetrain.&#13;
&#13;
TCO analyses are developed across a variety of medium- and heavy-duty fleet applications. These fleet applications include Long-Haul Trucking (Class 8), Short-Haul Trucking (Class 8), Parcel Delivery (Class 4), Tipper Dump Trucks (Class 6), Refuse (Garbage) Trucks (Class 6), Forklifts (Class 3), School Buses (Class 6), and Transit Buses (Class 7). Certain application segments are modeled under multiple scenarios to account for key operational differences, such as volume- vs. weight-limited fleet applications, or single- vs. multi-shift operational schedules. TCO financial modeling for each drivetrain-application-scenario pairing illuminates which ZEV is a more natural fit within the MD/HD transportation fleet market segment. The results of the study demonstrate that the TCO of FCVs and BEVs is heavily influenced by several factors such as the initial purchase price, the price of hydrogen fuel, the cost of vehicle operator downtime, the vehicle charging rate, and the vehicle rated payload.&#13;
&#13;
This study concludes that FCVs are a natural fit for long- and short-haul trucking applications that operate under weight-limited operations or follow a multi-shift schedule. However, there are current infrastructure limitations for this market. Most notably, hydrogen fuel station corridors do not currently exist in the United States outside of California. This infrastructure limitation illuminates the key challenge to the success of a future hydrogen economy: potential hydrogen economy end-users want the guarantee of significant hydrogen infrastructure developments before committing to hydrogen-powered equipment – but the funding required to support the hydrogen infrastructure upgrades will only be secured once future hydrogen end-users and customers are secured. Therefore, it is recommended that stakeholders and policymakers interested in developing the hydrogen economy and future hydrogen fuel cell markets push for infrastructure development by securing partnerships with fleet owners in the specific application segments where FCVs outperform BEVs from a TCO perspective.
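The backbone TCO comparison reduces to discounted cash-flow arithmetic; a minimal sketch (illustrative parameter names, not the study's actual inputs) is:

    def total_cost_of_ownership(purchase_price, incentives, annual_opex,
                                annual_downtime_cost, years, discount_rate=0.07):
        """Upfront capital net of subsidies plus the present value of operating
        and operator-downtime costs over the vehicle's service life."""
        tco = purchase_price - incentives
        for y in range(1, years + 1):
            tco += (annual_opex + annual_downtime_cost) / (1 + discount_rate) ** y
        return tco

Comparing this quantity across FCV, BEV, and ICE-D parameterizations for each application-scenario pairing is what drives the fit conclusions above.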
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The MIT-IBM CloudSec 16: A Cloud Cybersecurity Benchmarking Framework</title>
<link href="https://hdl.handle.net/1721.1/151640" rel="alternate"/>
<author>
<name>Lewke, Damien</name>
</author>
<id>https://hdl.handle.net/1721.1/151640</id>
<updated>2023-08-01T03:50:53Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The MIT-IBM CloudSec 16: A Cloud Cybersecurity Benchmarking Framework
Lewke, Damien
This paper proposes a novel cloud security benchmarking framework and scoring system to improve cyber risk management. Cyber risk management is challenging and has become even more difficult as organizations digitally transform their business and IT from on-premises environments to cloud infrastructure. Threats proliferate as organizations’ attack surfaces expand due to shadow IT, software supply chain risk, outsourced networking, and virtualization. Existing cyber risk management frameworks and controls are too exhaustive or generic and provide no means for organizations to assess their cyber risk against their peers. The MIT-IBM CloudSec 16 developed in this paper is a new security benchmarking framework and scoring system built specifically for cloud deployments in the financial service sector. When paired with MIT’s SCRAM secure computation platform, the MIT-IBM CloudSec 16 can provide an overview of cloud security in the financial service sector and enable organizations to identify and remediate areas of relative weakness.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Additively Manufactured, Multi-Langmuir Probe Plasma Sensing Device for Use on CubeSats</title>
<link href="https://hdl.handle.net/1721.1/151639" rel="alternate"/>
<author>
<name>Bigelow, Zoey</name>
</author>
<id>https://hdl.handle.net/1721.1/151639</id>
<updated>2023-08-01T03:49:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Additively Manufactured, Multi-Langmuir Probe Plasma Sensing Device for Use on CubeSats
Bigelow, Zoey
This thesis presents a Langmuir probe (LP) sensor array for CubeSat ionospheric plasma diagnostics. The ionosphere is a critical layer of Earth's atmosphere consisting of partially ionized plasma, and reliable in-situ measurements made on satellites orbiting within the ionosphere are necessary for better understanding its properties. LPs are an ideal choice for CubeSats due to their simplicity, versatility, and minimal maintenance requirements.&#13;
&#13;
This thesis focuses on the development and characterization of a novel LP sensor array that employs three types of LP arrangements (single, dual, and triple) to measure plasma properties. This includes the development of low-power electronics to run the multi-LP device and the design of 3D-printed housing to push the lower bounds of device size and electrode spacing. The designs were rigorously tested in a helicon plasma chamber.&#13;
&#13;
The resulting LP sensor array is the first of its kind, allowing for the development of better and cheaper CubeSat sensors. The multi-LP device is designed to draw low power and is intended to be cost-effectively manufactured via rapid prototyping techniques. This makes it an ideal solution for CubeSats that is compatible with in-space manufacturing. This device provides critical data to help us better understand the thermosphere's plasma and its impact on climate change.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Design and Development of Stability and Control Systems for Small, Deployable Aircraft</title>
<link href="https://hdl.handle.net/1721.1/151638" rel="alternate"/>
<author>
<name>Gaubatz, Julia C.</name>
</author>
<id>https://hdl.handle.net/1721.1/151638</id>
<updated>2023-08-01T04:20:41Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Design and Development of Stability and Control Systems for Small, Deployable Aircraft
Gaubatz, Julia C.
This thesis presents the development of the tail system for Firefly, a kilogram-scale, transonic, rocket-powered, deployable UAV. The propulsion system, vehicle size, and stowability requirements present challenges in designing control surfaces with adequate stability and control performance. To satisfy the stowability requirements, the tail was designed with an oblique hinge in which the deployment axis doubles as the control-surface actuation axis. Actuation mechanics, deployment spring sizing, and other mechanical details are also presented. To model the stability and control effects of the large oblique motion of the tail's control surfaces, a custom pre-processor was developed to deflect them for vortex lattice computations. The accuracy of this method is compared against the conventional "control" vector method in subsequent testing. Wind tunnel testing was performed to evaluate longitudinal stability and controllability. Unpowered flight tests were conducted to collect flight data and test the mechanical functionality of multiple tail designs. The accuracy of the vortex lattice aero-control predictions is discussed and recommendations are made regarding the applicability of the oblique surface deflection pre-processor.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Two case studies on indoor air quality in New York&#13;
City decarbonized affordable housing</title>
<link href="https://hdl.handle.net/1721.1/151636" rel="alternate"/>
<author>
<name>Morales, Manuel</name>
</author>
<id>https://hdl.handle.net/1721.1/151636</id>
<updated>2023-08-01T03:57:11Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Two case studies on indoor air quality in New York&#13;
City decarbonized affordable housing
Morales, Manuel
To mitigate the effects of climate change, building decarbonization and energy efficiency measures have expanded in scope. At the same time, interest has grown in how these changes affect indoor air quality (IAQ) and thus personal health.&#13;
&#13;
This thesis analyzes the concentrations of gas pollutants and particulate matter (PM) within occupied apartments in two New York City affordable housing projects, which we will refer to as Bushwick and Woodlawn. At Bushwick, we explore how gas and PM concentrations are impacted by retrofits that decarbonize the building and increase its energy efficiency to meet passive house standards. At Woodlawn, we monitor PM in a new development built to passive house standards to observe how concentrations are impacted by occupancy and controlled changes to ventilation and filtration settings.&#13;
&#13;
Results at Bushwick were limited by the availability of data and confounding factors but indicated the potential for a retrofit to passive house standards to improve IAQ. PM and gas sensors were initially installed in four apartments, but only one apartment (Apt. D) maintained both of these sensors online throughout the study. In addition, one apartment (Apt. A) kept only the PM sensor online and another (Apt. B) kept only the gas sensor online. This ultimately allowed us to analyze changes in PM and gas concentrations in two apartments each. Of note, a few tenants in Apt. D who used to smoke in the unit moved out during the retrofit, so these changes confounded any effect of the retrofit on air pollution that we hoped to observe. We observed statistically significant decreases in most gas and PM pollutants across apartments following the retrofit. PM1 saw the steepest decreases, with mean concentrations dropping 55% in Apt. A and 44% in Apt. D after the retrofit. Amongst gases, mean CO2 concentrations decreased by 62% in Apt. B and 45% in Apt. D. This decrease in air pollution resulted in greater compliance with Health Canada IAQ guidelines after the retrofit.&#13;
&#13;
Results at Woodlawn were supported by strong data collection for a year in nine apartment units. By observing air pollution before and after tenants moved in, we determined that occupancy had a statistically significant effect in increasing PM concentrations in all observed apartments. We also observed that the combined effect of increasing ventilation rates by 25% and using in-unit HEPA filters resulted in statistically significant decreases in PM concentrations across most units. Across all interventions in occupancy, ventilation, and filtration, PM2.5 and PM10 concentrations in all units fully complied with WHO ambient air quality guidelines. Furthermore, air pollution indoors was consistently lower than that outdoors, evidence that passive house construction can keep indoor air quality high and protect residents from outdoor air pollution.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Powderworld: A Platform for Understanding Generalization via Rich Task Distributions</title>
<link href="https://hdl.handle.net/1721.1/151635" rel="alternate"/>
<author>
<name>Frans, Kevin</name>
</author>
<id>https://hdl.handle.net/1721.1/151635</id>
<updated>2023-08-01T03:16:50Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Powderworld: A Platform for Understanding Generalization via Rich Task Distributions
Frans, Kevin
One of the grand challenges of reinforcement learning is the ability to generalize to new tasks. However, general agents require a set of rich, diverse tasks to train on. Designing a ‘foundation environment’ for such tasks is tricky – the ideal environment would support a range of emergent phenomena, an expressive task space, and fast runtime. To take a step towards addressing this research bottleneck, this work presents Powderworld, a lightweight yet expressive simulation environment running directly on the GPU. Within Powderworld, two motivating challenges are presented, one for world-modelling and one for reinforcement learning. Each contains hand-designed test tasks to examine generalization. Experiments indicate that increasing the environment’s complexity improves generalization for world models and certain reinforcement learning agents, yet may inhibit learning in high-variance environments. Powderworld aims to support the study of generalization by providing a source of diverse tasks arising from the same core rules.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Interpretable Modeling of Immunotherapy Response&#13;
Factors</title>
<link href="https://hdl.handle.net/1721.1/151634" rel="alternate"/>
<author>
<name>Ting, Britney A.</name>
</author>
<id>https://hdl.handle.net/1721.1/151634</id>
<updated>2023-08-01T03:43:25Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Interpretable Modeling of Immunotherapy Response&#13;
Factors
Ting, Britney A.
Immunotherapy, which treats cancer by either stimulating or suppressing the immune system, has been extraordinarily effective for some cancers, such as breast cancer and B-cell lymphoma. Checkpoint inhibitors, a type of immunotherapy, work by blocking the ability of cancer cells to evade immune system detection. However, not all patients respond to checkpoint inhibitors, even those with the same tumor types, and the complexity of biological networks and the diversity of patients make it difficult for clinicians to understand why a patient does not respond to treatment. This thesis integrates RNA and whole-exome sequencing (WES) data into an interpretable machine learning model and investigates genetic factors that may separate responders from nonresponders. We discovered that both data types contribute to response separation and that certain gene sets may be especially important factors for predicting response. Further analysis is needed to elucidate how much individual genes contribute to significant gene sets and to response.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Branch-and-Price for Prescriptive Contagion Analytics</title>
<link href="https://hdl.handle.net/1721.1/151633" rel="alternate"/>
<author>
<name>Ramé, Martin</name>
</author>
<id>https://hdl.handle.net/1721.1/151633</id>
<updated>2023-08-01T03:14:45Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Branch-and-Price for Prescriptive Contagion Analytics
Ramé, Martin
Contagion models are ubiquitous in epidemiology, social sciences, engineering, and management. This thesis formalizes prescriptive contagion analytics problems where a centralized decision-maker allocates shared resources across multiple segments of a population, each governed by contagion dynamics. We define five real-world problems under this umbrella: distributing vaccines, deploying vaccination centers, mitigating urban congestion, promoting online content, and combating drug addiction. Prescriptive contagion problems involve mixed-integer non-convex optimization models with constraints governed by ordinary differential equations, thus combining the challenges of combinatorial optimization, non-linear optimization, and continuous-time system dynamics. This thesis develops a branch-and-price methodology for these problems based on: (i) a set partitioning reformulation; (ii) a column generation decomposition; (iii) a novel state clustering algorithm for discrete-decision continuous-state dynamic programming; and (iv) a novel tri-partite branching scheme to circumvent non-linearities. Extensive experiments show that the algorithm scales to large and otherwise-intractable instances, significantly outperforming state-of-the-art benchmarks. Our methodology provides a novel decision-making tool to support resource allocation in contagion systems. In particular, its application can increase the effectiveness of vaccination campaigns by an estimated 50-70%, resulting in 12,000 additional lives saved over 12 weeks in a situation mirroring the COVID-19 pandemic.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Fault Tolerant Broadcast in Bandwidth-Constrained&#13;
Networks</title>
<link href="https://hdl.handle.net/1721.1/151630" rel="alternate"/>
<author>
<name>Kaklamanis, Ioannis</name>
</author>
<id>https://hdl.handle.net/1721.1/151630</id>
<updated>2023-08-01T03:28:36Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Fault Tolerant Broadcast in Bandwidth-Constrained&#13;
Networks
Kaklamanis, Ioannis
This thesis addresses the problem of achieving scalable fault-tolerant broadcast in networks with limited bandwidth. We begin by examining the limitations of leader-based protocols, such as HotStuff, which suffer from a leader bottleneck and reduced system throughput as the number of servers increases. To mitigate this, we propose CodedBcaster and Coded HotStuff, Byzantine Fault Tolerant (BFT) broadcast schemes based on erasure coding, demonstrating a significant improvement in throughput. We further explore the problem of optimal rate allocation in heterogeneous node-constrained networks and provide concrete theoretical results for determining the optimal system throughput rate. Additionally, we propose the MaxMin Rate Controller (MaxMin-RC) protocol as a feedback-based solution to optimize broadcast throughput in non-BFT settings, achieving close alignment with the optimal throughput rate. Through extensive simulations and evaluations, we demonstrate the effectiveness of our proposed solutions.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Implementing Secure Shared Memory for Side-Channel-Resistant Enclaves</title>
<link href="https://hdl.handle.net/1721.1/151629" rel="alternate"/>
<author>
<name>Gomez-Garcia, Miguel</name>
</author>
<id>https://hdl.handle.net/1721.1/151629</id>
<updated>2023-08-01T03:59:23Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Implementing Secure Shared Memory for Side-Channel-Resistant Enclaves
Gomez-Garcia, Miguel
With the rise in cloud computing, it has become more critical than ever for remote users to get strong security guarantees for sensitive computation they run on untrusted machines. Enclaves, or Trusted Execution Environments (TEEs), are a powerful trusted computing primitive that can address this problem; through carefully co-designed hardware and software mechanisms, enclaves enforce strong isolation and integrity properties. While many enclave implementations already exist, most do not consider the threat of microarchitectural side channels and transient execution attacks. And although one academic proposal – MI6 – has addressed this stronger threat model, its security guarantees come at the cost of more limited capability as well as performance overheads. As a result, no industrial hardware vendor has announced plans to include these attacks in its threat model.&#13;
&#13;
This thesis presents research in improving the capabilities of side-channel-resistant enclaves through the addition of secure shared memory, providing a mechanism for enclave applications to communicate with outside processes while maintaining the same strong isolation security guarantees provided by MI6. This allows for the development of a wider range of enclave applications with a significant performance improvement compared to existing enclave communication mechanisms. We hope that this work will demonstrate that enclaves can maintain strong security properties while being able to run a wide range of expressive programs.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Performing Actionable Evaluations of Sustainability&#13;
Investments</title>
<link href="https://hdl.handle.net/1721.1/151625" rel="alternate"/>
<author>
<name>Hopkins, Jacob</name>
</author>
<id>https://hdl.handle.net/1721.1/151625</id>
<updated>2023-08-01T03:19:34Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Performing Actionable Evaluations of Sustainability&#13;
Investments
Hopkins, Jacob
Businesses are rapidly pursuing investments in sustainable technologies to meet their climate goals, but performing actionable evaluations of these technologies is difficult, especially in decentralized businesses. Actionable evaluations have high accuracy, have high precision, and address uncertainty. Sustainable technologies are not well characterized, and their expected performance is uncertain. For numerous reasons, approaches used in industry do not currently address these concerns. This research investigates tools to improve accuracy and precision and proposes a methodology to address uncertainty. The methodology includes a Monte-Carlo simulation tool and a method to assess data quality that address the concerns with traditional approaches. We believe this methodology can help decentralized businesses perform more actionable evaluations of sustainable technologies.
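As a sketch of the Monte-Carlo approach such a methodology relies on (all distributions and numbers below are invented for illustration), propagating input uncertainty to a decision metric takes only a few lines:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    energy_saved = rng.triangular(50, 80, 120, n)  # MWh/yr: low, mode, high
    price = rng.normal(90, 15, n)                  # $/MWh
    capex = 45_000                                 # upfront cost, $

    payback_years = capex / (energy_saved * price)
    print(f"median payback: {np.median(payback_years):.1f} yr; "
          f"P(payback over 10 yr): {(payback_years > 10).mean():.1%}")

Reporting the full payback distribution, rather than a single point estimate, is what makes an evaluation actionable under uncertainty.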
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Modeling Passenger Electric Vehicle Charging Demand with Machine Learning Using Telematics Data and Temperature</title>
<link href="https://hdl.handle.net/1721.1/151624" rel="alternate"/>
<author>
<name>Barber, Adam</name>
</author>
<id>https://hdl.handle.net/1721.1/151624</id>
<updated>2023-08-01T03:20:02Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Modeling Passenger Electric Vehicle Charging Demand with Machine Learning Using Telematics Data and Temperature
Barber, Adam
Electric vehicles (EVs), with their potential to drastically reduce greenhouse gas emissions, pose a problem for energy distribution infrastructure, which was not designed with hosting capacity capable of handling the additional demand generated by their mass adoption. Understanding when customers charge their EVs and how much energy they consume better enables electric utilities to provide more reliable and affordable energy to all customers while aiding the transition to clean transportation. The purpose of this research was to analyze passenger EV charging data from National Grid's Massachusetts EV Off-Peak Charging Program and determine whether generalizable and scalable machine learning models could be built to predict EV charging energy demand, and further to determine the lowest possible geographic granularity of such models. This research was novel in its charge rate estimation methodology, normalization of charging energy on a per-vehicle basis, accounting for charging energy demand flowing into and out of the studied system, and the addition of ambient air temperature as a feature variable. Modeling employed supervised machine learning methods, with random forests deemed optimal in terms of accuracy, complexity, and computational intensiveness. Ultimately, this research successfully created and operationalized an accurate service territory model and illuminated the challenges associated with utilizing telematics data for demand modeling.
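A condensed sketch of the modeling setup (synthetic data and hypothetical features, shown only to illustrate the random-forest-with-temperature approach, not the study's pipeline):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Hypothetical features: [hour_of_day, day_of_week, temperature_C, n_vehicles]
    X = np.random.rand(5000, 4) * [24, 7, 40, 200]
    y = 0.3 * X[:, 3] + 2.0 * np.cos(X[:, 0] / 24 * 2 * np.pi) + np.random.randn(5000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_tr, y_tr)
    print("held-out R^2:", model.score(X_te, y_te))

In the study itself, per-vehicle normalized charging energy is the target and ambient air temperature enters as a feature, enabling the territory-level demand predictions described above.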
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>The Monkey Cheeks Toolkit: Design Strategies for Mitigating Flood Impacts in the Bangkok Metropolitan Area</title>
<link href="https://hdl.handle.net/1721.1/151623" rel="alternate"/>
<author>
<name>Rattanathumawat, Pimpakarn</name>
</author>
<id>https://hdl.handle.net/1721.1/151623</id>
<updated>2023-08-01T04:05:22Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">The Monkey Cheeks Toolkit: Design Strategies for Mitigating Flood Impacts in the Bangkok Metropolitan Area
Rattanathumawat, Pimpakarn
Bangkok, the capital of Thailand, has been facing frequent and destructive floods due to the recent decades of urban expansion and inadequate public drainage infrastructure. Although the Bangkok Metropolitan Administration (BMA) has actively improved the flood drainage network as the city expanded, its developed capacity and configuration have not kept pace with its population growth and rapid urbanization. Additionally, due to the escalating impact of climate change, Bangkok is expected to face more severe flooding, as well as the potential for greater water supply challenges, over the course of this century. &#13;
&#13;
Rather than solely depending on flood protection via large-scale infrastructure, this thesis proposes a decentralized approach to stormwater management, in which rain is captured where it falls through a local flood control measure called “Monkey Cheeks.” Although this concept is commonly utilized in large water retention areas, the thesis applies the retention system to an ultra-urban environment such as the Bangkok Metropolitan Area, where the availability of land is limited. The main objective is to embrace water as a valuable resource and seize the opportunity to incorporate it into the fabric of the city. The outcome of this research is presented in the form of a Design Toolkit, a set of strategies for implementing Monkey Cheeks across various scales of urban conditions, ranging from small, individual property-level interventions to large-scale publicly owned spaces. The Toolkit concludes with case studies illustrating how these strategies can be applied to existing conditions of Bangkok’s urban fabric, and how they can be combined to alleviate flooding throughout the city at large. Together, a network of Monkey Cheeks within the city can play a critical role in mitigating flood risk by slowing down runoff that could otherwise overwhelm public sewage systems, storing rainwater to tackle water supply challenges, and restoring the hydrologic function of the urban landscape by releasing water back to the aquifer. As such, the research contributes to the advancement of sustainable urban water management practices and highlights the importance of integrating traditional knowledge with modern urban areas.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Making Hands: Neural Implicit Manifold Learning of Hand Gestures</title>
<link href="https://hdl.handle.net/1721.1/151622" rel="alternate"/>
<author>
<name>Chatzinikolis, Dimitrios</name>
</author>
<id>https://hdl.handle.net/1721.1/151622</id>
<updated>2023-08-01T03:37:33Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Making Hands: Neural Implicit Manifold Learning of Hand Gestures
Chatzinikolis, Dimitrios
The human hand is a complex and sophisticated biological machine. Hand gesturing is key to how we understand and interact with the world around us, and it is instrumental in design and making. I argue that understanding the geometry of the space of hand gestures leads to intuitive human-computer interaction. I decompose a gesture into its constituent parts: the hand motion (global coordinate system) and the hand pose (local coordinate system). I propose modeling the configuration space of hands as a high-dimensional manifold via neural unsigned distance fields, and I define plausible hand poses as points on the manifold. Next, I apply a distance metric to their configuration space. A trajectory in that space is a finite or infinite sequence of hand poses; these trajectories represent the different ways that the hand gestures. To demonstrate my approach, I restrict my study to a dataset of hands grasping everyday objects, and I evaluate my model on unknown grasps. Extending the model, the learned manifold acts as a prior for hand pose denoising, hand pose interpolation, and hand pose synthesis. Constraining that space can be interpreted as excluding impossible hand poses, while constraining the manifold can be interpreted as defining a set of desirable hand poses. The former emphasizes the importance of bridging deep learning with existing mathematical structures, while the latter underlines future directions for the fields of design and computational making.
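As an illustration of the neural-unsigned-distance-field idea, here is a minimal PyTorch sketch, not the thesis implementation, that defines a UDF over flattened pose vectors and projects a noisy pose onto the learned manifold (the zero-distance set); the pose dimensionality and network sizes are assumptions.

    # Illustrative only: an MLP unsigned distance field over hand poses.
    import torch
    import torch.nn as nn

    POSE_DIM = 45  # assumed layout, e.g. 15 joints x 3 rotation parameters

    udf = nn.Sequential(
        nn.Linear(POSE_DIM, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 1), nn.Softplus(),  # unsigned: distances stay nonnegative
    )

    def project_to_manifold(pose, steps=100, lr=1e-2):
        """Denoise a pose by descending the predicted distance to the manifold."""
        x = pose.clone().requires_grad_(True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            udf(x).sum().backward()
            opt.step()
        return x.detach()

    noisy = torch.randn(1, POSE_DIM)      # an implausible, noisy pose
    plausible = project_to_manifold(noisy)

Used this way, the learned field serves directly as the denoising prior described above; interpolation and synthesis amount to following low-distance paths on the same manifold.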
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Liquid Metal Printing</title>
<link href="https://hdl.handle.net/1721.1/151621" rel="alternate"/>
<author>
<name>Karsan, Zain</name>
</author>
<id>https://hdl.handle.net/1721.1/151621</id>
<updated>2023-08-01T03:24:09Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Liquid Metal Printing
Karsan, Zain
The pace of worldwide material production and its deleterious effect on the climate motivate the need for materially efficient and sustainable methods of manufacture. Additive manufacturing (AM), commonly referred to as 3D Printing, presents one approach to sustainable manufacturing, affording complexity at high resolution with minimal scrap. For example, polymer, ceramic, and metal materials have been employed in AM to produce parts in industries ranging from aerospace to construction.&#13;
&#13;
Nevertheless, metal AM remains a high-cost process with slow process rates and build environments that are challenging to scale up, restricting these techniques to products for which the cost per volume is significant. Liquid Metal Printing (LMP), invented by the Self-Assembly Lab at MIT in 2020, is a novel approach to AM that is fast, scalable, and low cost. However, the technique is nascent and has so far only been developed to print low-melting-point alloys unsuitable for realistic use. Even so, LMP offers a new way of thinking about additive manufacturing: printing large-scale, low-resolution parts extremely quickly.&#13;
&#13;
This thesis therefore explores the redesign of several LMP components to print aluminum, describes a set of design rules and toolpath strategies for printing 2.5D multi-layer structures, and proposes several theoretical models for characterizing the print output. Finally, through a selection of case studies, it assesses the applicability of LMP as a rapid, coarse-resolution additive manufacturing process in mechanical and product design.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Algorithm and Hardware Co-optimization for Image Segmentation in Wearable Ultrasound Devices: Continuous Bladder Monitoring</title>
<link href="https://hdl.handle.net/1721.1/151618" rel="alternate"/>
<author>
<name>Song, Zhiye</name>
</author>
<id>https://hdl.handle.net/1721.1/151618</id>
<updated>2023-08-01T03:57:08Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Algorithm and Hardware Co-optimization for Image Segmentation in Wearable Ultrasound Devices: Continuous Bladder Monitoring
Song, Zhiye
From monitoring muscles during exercise and training to assessing cardiovascular disease and estimating bladder volume, continuous autonomous tissue monitoring is essential. Recent developments in wearable ultrasound patches provide the foundation for wearable ultrasound devices with on-device image processing. Collaborating with Massachusetts General Hospital, we established bladder volume monitoring as the example use case. Real-time bladder monitoring can facilitate the diagnosis of post-operative urinary retention and reduce indwelling urinary catheter usage and the risk of catheter-associated urinary tract infection. Using machine learning and hardware co-design, this thesis developed and validated a low-compute, memory-efficient deep learning model and an energy-efficient all-parameters-on-chip application-specific integrated circuit (ASIC) for accurate bladder region segmentation and urine volume calculation.&#13;
&#13;
U-Net is the state-of-the-art neural network (NN) for biomedical image segmentation [1]. We trained two binarized models, with 4-bit and 6-bit skip connections. They achieved accuracy within 3.8% and 2.6% of the floating-point U-Net without any floating-point operations, and reduced the memory requirement by 11.5× and 9.0×, respectively, to under 150 kB. This thesis also designed the first neural network accelerator targeting U-Net-like image segmentation. Using an interleaving feature map representation, skip connection compression, and extensive design space exploration, the accelerator requires no external memory or co-processor and consumes only 14.4 μJ per 128 × 128 image segmentation.&#13;
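For readers unfamiliar with binarized networks, the sketch below shows the sign-plus-straight-through-estimator construction on which such models are commonly built; it is generic PyTorch, not this thesis's exact quantization scheme.

    import torch

    class BinarizeSTE(torch.autograd.Function):
        @staticmethod
        def forward(ctx, w):
            ctx.save_for_backward(w)
            return torch.sign(w)  # each weight becomes +1 or -1: 1 bit to store

        @staticmethod
        def backward(ctx, grad_out):
            (w,) = ctx.saved_tensors
            # Straight-through estimator: pass gradients only where w is small.
            return grad_out * w.abs().le(1.0).float()

    w = torch.randn(64, 64, requires_grad=True)
    w_bin = BinarizeSTE.apply(w)  # use w_bin in place of w inside a layer

A float32 weight takes 32 bits and a binarized weight 1 bit, so binarizing the weights alone allows up to a 32x reduction; the 11.5x and 9.0x totals above are smaller because the 4-bit and 6-bit skip connections must also be stored.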
&#13;
The lightweight bladder volume estimation algorithm together with the energy-efficient image segmentation ASIC can be integrated with existing ultrasound probes to reduce the burdens of nurses in hospital settings and improve outpatient care. Moreover, the quantization and compression techniques and the image segmentation accelerator can be applied to other clinical applications, such as monitoring fetal heart rate and neural therapy. This technology, together with advances in compact ultrasound patches, will enable real-time tissue monitoring on the edge, thereby not only maintaining health data privacy, but also improving both point-of-care and inpatient healthcare.
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Probing Language Models for Contextual Scale Understanding</title>
<link href="https://hdl.handle.net/1721.1/151617" rel="alternate"/>
<author>
<name>Vedantam, Saaketh</name>
</author>
<id>https://hdl.handle.net/1721.1/151617</id>
<updated>2023-08-01T04:01:46Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Probing Language Models for Contextual ScaleUnderstanding
Vedantam, Saaketh
Pretrained language models (LMs) have demonstrated a remarkable ability to capture linguistic and factual knowledge in certain fields, and they seem to encode relational information about different concepts in a knowledge base. However, since they are trained solely on textual corpora, it is unclear whether these models implicitly understand anything grounded about the real world. This work investigates the extent to which LMs learn the structure of the physical world. By probing the contextualized embeddings of sentences, we examine how well LMs predict the sizes of real-world objects, and we further explore the effect of adjectival modifiers on object embeddings. We show that while larger models convey scalar information more accurately through their embeddings, they perform on par with smaller models on the task of contextual prediction. Fortunately, the models are capable of identifying a difference in scale when an adjectival modifier is introduced, implying that the relevant context is successfully incorporated into the object’s embedding through the LM’s attention mechanism.
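A minimal sketch of this kind of probing setup, assuming a Hugging Face BERT encoder and an illustrative, size-labeled object list; neither detail is necessarily what this work used.

    # Embed object mentions with a pretrained LM, then fit a linear probe
    # that predicts log object size from the contextualized embedding.
    import numpy as np
    import torch
    from sklearn.linear_model import Ridge
    from transformers import AutoModel, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    lm = AutoModel.from_pretrained("bert-base-uncased")

    def embed(sentence):
        inputs = tok(sentence, return_tensors="pt")
        with torch.no_grad():
            out = lm(**inputs).last_hidden_state
        return out.mean(dim=1).squeeze().numpy()  # mean-pooled embedding

    # Hypothetical supervision: typical object sizes in meters, log scale.
    objects = {"ant": 0.005, "cup": 0.1, "chair": 1.0, "house": 10.0}
    X = np.stack([embed(f"I saw a {name}.") for name in objects])
    y = np.log10(list(objects.values()))
    probe = Ridge().fit(X, y)

Swapping "a chair" for "a tiny chair" in the prompt and comparing probe outputs is then a direct test of the adjectival-modifier effect discussed above.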
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Non-Invasive Vision-Based Measurement of Hand&#13;
Kinematics and Interaction</title>
<link href="https://hdl.handle.net/1721.1/151615" rel="alternate"/>
<author>
<name>Wang, Margaret</name>
</author>
<id>https://hdl.handle.net/1721.1/151615</id>
<updated>2023-08-01T03:46:39Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Non-Invasive Vision-Based Measurement of Hand&#13;
Kinematics and Interaction
Wang, Margaret
The ability to manipulate and interact with the world is a key part of what distinguishes humanity from other animals, and the human hand is perhaps the most powerful tool we have to do so. As such, a great deal of research goes towards better understanding how the hand behaves during interaction tasks. Study of physical interaction requires measurement of hand kinematics and interaction forces. Unfortunately, current methods involve cumbersome sensors or external forces that inherently change the way that the subject behaves. In order to avoid these confounding factors, this thesis presents an approach to measuring hand kinematics, dynamics, and physical interaction using a non-encumbering vision-based tool.&#13;
&#13;
The proposed tool consists of (1) vision-based tracking of hand kinematics in joint space, (2) synergy extraction and synergy-space projection, (3) contact detection based on visible soft tissue deformation, and (4) an exploration of force estimation at the fingertips (step (2) is sketched below).&#13;
&#13;
This pipeline is applied to a piano-based experiment for validation and comparison with existing tools. The results indicate that vision-based kinematics measurement is largely comparable to, and at times shows more sensitivity to joint angle variation than, traditionally instrumented approaches. However, force estimation is not yet a consistent alternative to physical sensor interfaces.
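Step (2) above, synergy extraction, is commonly implemented as principal component analysis over joint-angle frames; the Python sketch below illustrates that generic approach with a hypothetical input file, rather than this thesis's exact pipeline.

    import numpy as np
    from sklearn.decomposition import PCA

    # angles: (n_frames, n_joints) joint-angle time series from hand tracking
    angles = np.load("joint_angles.npy")
    pca = PCA(n_components=5)            # keep the first few hand synergies
    scores = pca.fit_transform(angles)   # trajectory in low-dim synergy space
    reconstructed = pca.inverse_transform(scores)
    print("variance explained:", pca.explained_variance_ratio_.sum())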
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Automated Pipelines for Information Extraction from Semi-Structured Documents in Structured Format</title>
<link href="https://hdl.handle.net/1721.1/151614" rel="alternate"/>
<author>
<name>Chu, Jung Soo</name>
</author>
<id>https://hdl.handle.net/1721.1/151614</id>
<updated>2023-08-01T03:32:37Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Automated Pipelines for Information Extraction from Semi-Structured Documents in Structured Format
Chu, Jung Soo
As documents are one of the main tools for storing and communicating information, a great deal of effort has gone into developing methods to parse information from them automatically. While many parts of this industry are automated, there are still scenarios where certain types of documents cannot be read by machines with high accuracy and throughput. The task becomes especially difficult when the documents are semi-structured, or, in other words, have widely varying formats. With significant leaps in optical character recognition, computer vision, and natural language processing, there has been great progress on this problem. In this paper, we propose two pipeline designs that utilize these newer techniques to extract information from semi-structured documents in a structured output format: a fully automated pipeline and a semi-automated pipeline. The fully automated pipeline has a region detection module that finds the location of text blocks and table blocks regardless of the format of the document, and a region extraction module that extracts information from each of the text and table blocks. The semi-automated pipeline, on the other hand, has a classification module and an extraction module: the classification module determines the format class of the input document, while the extraction module has templates that can parse information from documents in each format class. We evaluate the two pipelines on four key metrics: accuracy, coverage, time efficiency, and scalability. The fully automated pipeline shows strong results in coverage and scalability, while the semi-automated pipeline succeeds in accuracy and time efficiency.
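The following Python skeleton illustrates the shape of the two pipelines; every helper is a stub standing in for the OCR, vision, and NLP modules, not code from the paper.

    def detect_regions(document):
        return [("text", document)]   # stub: real module finds text/table blocks

    def extract_region(region):
        kind, content = region
        return {kind: content}        # stub: real module runs OCR and parsing

    def fully_automated(document):
        # Format-agnostic: detect regions, then extract each one.
        return [extract_region(r) for r in detect_regions(document)]

    TEMPLATES = {"invoice_v1": lambda doc: {"raw": doc}}  # stub template

    def classify_format(document):
        return "invoice_v1"           # stub: real module is a format classifier

    def semi_automated(document):
        # Format-aware: classify first, then apply a per-class template.
        return TEMPLATES[classify_format(document)](document)

    print(fully_automated("sample"), semi_automated("sample"))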
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Learning Refinement Cost Estimators for Bilevel&#13;
Planning</title>
<link href="https://hdl.handle.net/1721.1/151612" rel="alternate"/>
<author>
<name>Luong, Lilian</name>
</author>
<id>https://hdl.handle.net/1721.1/151612</id>
<updated>2023-08-01T04:01:37Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Learning Refinement Cost Estimators for Bilevel&#13;
Planning
Luong, Lilian
Bilevel planning is an effective approach for solving complex task and motion planning (TAMP) problems with continuous state and action spaces: it first searches for a high-level abstract plan and then refines it into a sequence of low-level actions. Although the low-level refinement process is a significant contributor to the total time needed to solve a task, this cost is typically unaccounted for during high-level planning. This can result in undesirable behavior if abstract plans that are difficult or even impossible to refine are selected over alternatives that may be slightly longer but can be refined significantly faster. This work develops a method for learning to estimate the cost of refining an abstract plan, along with a framework for using the estimator to guide high-level search in a bilevel planner. We demonstrate in two environments that our proposed approach considerably improves on the combined planning and execution cost required for tasks compared to several baselines, including a standard benchmark bilevel planner and alternative estimator models.
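One way to picture the framework: a best-first search that ranks candidate abstract plans by plan length plus the learned refinement-cost estimate. The Python sketch below is illustrative only; the function names and the way the two costs are combined are assumptions, not the thesis's implementation.

    import heapq

    def plan_bilevel(candidate_plans, estimate_refine_cost, try_refine):
        """Return low-level actions for the cheapest refinable abstract plan."""
        queue = [(len(p) + estimate_refine_cost(p), i, p)
                 for i, p in enumerate(candidate_plans)]
        heapq.heapify(queue)
        while queue:
            _, _, plan = heapq.heappop(queue)
            actions = try_refine(plan)   # expensive low-level motion refinement
            if actions is not None:
                return actions           # first successful refinement wins
        return None

Without the estimator the priority reduces to plan length alone, which is exactly the failure mode described above: short but hard-to-refine plans get tried first.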
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>Novel Method for Mach-Zehnder Interferometer&#13;
Phase Stabilization</title>
<link href="https://hdl.handle.net/1721.1/151611" rel="alternate"/>
<author>
<name>Hardy, Max</name>
</author>
<id>https://hdl.handle.net/1721.1/151611</id>
<updated>2023-08-01T03:27:35Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">Novel Method for Mach-Zehnder Interferometer&#13;
Phase Stabilization
Hardy, Max
The Mach-Zehnder Interferometer (MZI) is a device used across many areas of research, including quantum computing, optical communication, sensing, and imaging. The proper function of an MZI depends upon stabilization of the relative phase between its arms, as thermal gradients and vibrations can cause this phase to drift. This thesis proposes a novel method for MZI phase stabilization. The stabilization method was modelled mathematically and simulated, and a simple, low-cost prototype control circuit was constructed that successfully proved the feasibility of the method. Finally, the initial mathematical model was refined according to experimental observations. This stabilization method could benefit any field or application that depends on MZIs.
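Since the abstract does not detail the novel method, the sketch below simulates a generic alternative for context: integral feedback that locks the interferometer at quadrature, using the standard MZI transfer function I = 0.5(1 + cos phi). It is a baseline illustration in Python, not the proposed method.

    import math, random

    phi_drift = 0.0    # slowly drifting relative phase (radians)
    correction = 0.0   # phase applied by the actuator
    gain = 0.5
    setpoint = 0.5     # normalized output intensity at quadrature

    for step in range(2000):
        phi_drift += random.gauss(0.0, 0.01)          # thermal/vibration drift
        phi = phi_drift + correction + math.pi / 2.0  # bias to quadrature
        intensity = 0.5 * (1.0 + math.cos(phi))       # MZI transfer function
        error = intensity - setpoint
        correction += gain * error                    # integral feedback

    print("residual phase error (rad):", phi_drift + correction)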
</summary>
<dc:date>2023-06-01T00:00:00Z</dc:date>
</entry>
<entry>
<title>A Measurement Tool for Videoconferencing User&#13;
Experience</title>
<link href="https://hdl.handle.net/1721.1/151610" rel="alternate"/>
<author>
<name>Jin, Caroline</name>
</author>
<id>https://hdl.handle.net/1721.1/151610</id>
<updated>2023-08-01T04:06:59Z</updated>
<published>2023-06-01T00:00:00Z</published>
<summary type="text">A Measurement Tool for Videoconferencing User&#13;
Experience
Jin, Caroline
The COVID-19 pandemic forced people to work remotely and use videoconferencing software like Zoom in their daily lives. While people are returning to their pre-pandemic lifestyles, many still depend on videoconferencing software. As a result, application developers need to regularly monitor user experience in terms of video quality, stalls, and network conditions, and identify areas of potential improvement. Companies and academic researchers typically focus user experience analysis on dual-endpoint, controlled conditions that do not reflect everyday user calls. Gathering data at scale without knowledge of the network structure is difficult: such large-scale experiments often require lengthy procedures to obtain the right permissions for traffic analysis and to deploy monitoring infrastructure in the middle of a campus network.&#13;
&#13;
In contrast to existing approaches, an ideal measurement application would run solely on a user's device, without cooperation from the other endpoint they are conversing with. Such an application would enable researchers to collect network statistics across a wide range of Internet conditions at a fine-grained level without significant overhead. This thesis proposes the Single Endpoint Zoom Measurement Application (SEZMA), which computes and logs network and video metrics while a user is on a Zoom call and sends metric logs to a centralized server. In addition to providing insig